22 changes: 13 additions & 9 deletions .agents/skills/convex-create-component/SKILL.md
@@ -42,12 +42,12 @@ Create reusable Convex components with clear boundaries and a small app-facing A

Ask the user, then pick one path:

| Goal | Shape | Reference |
|------|-------|-----------|
| Component for this app only | Local | `references/local-components.md` |
| Publish or share across apps | Packaged | `references/packaged-components.md` |
| User explicitly needs local + shared library code | Hybrid | `references/hybrid-components.md` |
| Not sure | Default to local | `references/local-components.md` |
| Goal | Shape | Reference |
| ------------------------------------------------- | ---------------- | ----------------------------------- |
| Component for this app only | Local | `references/local-components.md` |
| Publish or share across apps | Packaged | `references/packaged-components.md` |
| User explicitly needs local + shared library code | Hybrid | `references/hybrid-components.md` |
| Not sure | Default to local | `references/local-components.md` |

Read exactly one reference file before proceeding.

@@ -111,7 +111,7 @@ export const listUnread = query({
userId: v.string(),
message: v.string(),
read: v.boolean(),
})
}),
),
handler: async (ctx, args) => {
return await ctx.db
@@ -234,12 +234,16 @@ export const sendNotification = mutation({

```ts
// Bad: parent app table IDs are not valid component validators
args: { userId: v.id("users") }
args: {
userId: v.id("users");
}
```

```ts
// Good: treat parent-owned IDs as strings at the boundary
args: { userId: v.string() }
args: {
userId: v.string();
}
```

### Advanced Patterns
9 changes: 4 additions & 5 deletions .agents/skills/convex-migration-helper/SKILL.md
@@ -55,13 +55,13 @@ Unless you are certain, prefer deprecating fields over deleting them. Mark the f
// Before
users: defineTable({
name: v.string(),
})
});

// After - safe, new field is optional
users: defineTable({
name: v.string(),
bio: v.optional(v.string()),
})
});
```

### Adding New Table
@@ -70,7 +70,7 @@ users: defineTable({
posts: defineTable({
userId: v.id("users"),
title: v.string(),
}).index("by_user", ["userId"])
}).index("by_user", ["userId"]);
```

### Adding Index
@@ -79,8 +79,7 @@ posts: defineTable({
users: defineTable({
name: v.string(),
email: v.string(),
})
.index("by_email", ["email"])
}).index("by_email", ["email"]);
```

## Breaking Changes: The Deployment Workflow
@@ -9,7 +9,7 @@ Common migration patterns, zero-downtime strategies, and verification techniques
users: defineTable({
name: v.string(),
role: v.optional(v.union(v.literal("user"), v.literal("admin"))),
})
});

// Migration: backfill the field
export const addDefaultRole = migrations.define({
@@ -25,7 +25,7 @@ export const addDefaultRole = migrations.define({
users: defineTable({
name: v.string(),
role: v.union(v.literal("user"), v.literal("admin")),
})
});
```

## Deleting a Field
@@ -151,8 +151,7 @@ Process only matching documents instead of the full table:
```typescript
export const fixEmptyNames = migrations.define({
table: "users",
customRange: (query) =>
query.withIndex("by_name", (q) => q.eq("name", "")),
customRange: (query) => query.withIndex("by_name", (q) => q.eq("name", "")),
migrateOne: () => ({ name: "<unknown>" }),
});
```
18 changes: 9 additions & 9 deletions .agents/skills/convex-performance-audit/SKILL.md
@@ -43,13 +43,13 @@ Start with the strongest signal available:

After gathering signals, identify the problem class and read the matching reference file.

| Signal | Reference |
|---|---|
| High bytes or documents read, JS filtering, unnecessary joins | `references/hot-path-rules.md` |
| OCC conflict errors, write contention, mutation retries | `references/occ-conflicts.md` |
| High subscription count, slow UI updates, excessive re-renders | `references/subscription-cost.md` |
| Function timeouts, transaction size errors, large payloads | `references/function-budget.md` |
| General "it's slow" with no specific signal | Start with `references/hot-path-rules.md` |
| Signal | Reference |
| -------------------------------------------------------------- | ----------------------------------------- |
| High bytes or documents read, JS filtering, unnecessary joins | `references/hot-path-rules.md` |
| OCC conflict errors, write contention, mutation retries | `references/occ-conflicts.md` |
| High subscription count, slow UI updates, excessive re-renders | `references/subscription-cost.md` |
| Function timeouts, transaction size errors, large payloads | `references/function-budget.md` |
| General "it's slow" with no specific signal | Start with `references/hot-path-rules.md` |

Multiple problem classes can overlap. Read the most relevant reference first, then check the others if symptoms remain.

@@ -107,7 +107,7 @@ After finding one problem, inspect both sibling readers and sibling writers for
Examples:

- If one list query switches from full docs to a digest table, inspect the other list queries for that table
- If one mutation needs no-op write protection, inspect the other writers to the same table
- If one mutation isolates a frequently-updated field or splits a hot document, inspect the other writers to the same table
- If one read path needs a migration-safe rollout for an unbackfilled field, inspect sibling reads for the same rollout risk

Do not leave one path fixed and another path on the old pattern unless there is a clear product reason.
@@ -119,7 +119,7 @@ Confirm all of these:
1. Results are the same as before, no dropped records
2. Eliminated reads or writes are no longer in the path where expected
3. Fallback behavior works when denormalized or indexed fields are missing
4. New writes avoid unnecessary invalidation when data is unchanged
4. Frequently-updated fields are isolated from widely-read documents where needed
5. Every relevant sibling reader and writer was inspected, not just the original function

## Reference Files
@@ -10,17 +10,17 @@ Convex functions run inside transactions with budgets for time, reads, and write

These are the current values from the [Convex limits docs](https://docs.convex.dev/production/state/limits). Check that page for the latest numbers.

| Resource | Limit |
|---|---|
| Query/mutation execution time | 1 second (user code only, excludes DB operations) |
| Action execution time | 10 minutes |
| Data read per transaction | 16 MiB |
| Data written per transaction | 16 MiB |
| Resource | Limit |
| --------------------------------- | ----------------------------------------------------- |
| Query/mutation execution time | 1 second (user code only, excludes DB operations) |
| Action execution time | 10 minutes |
| Data read per transaction | 16 MiB |
| Data written per transaction | 16 MiB |
| Documents scanned per transaction | 32,000 (includes documents filtered out by `.filter`) |
| Index ranges read per transaction | 4,096 (each `db.get` and `db.query` call) |
| Documents written per transaction | 16,000 |
| Individual document size | 1 MiB |
| Function return value size | 16 MiB |
| Index ranges read per transaction | 4,096 (each `db.get` and `db.query` call) |
| Documents written per transaction | 16,000 |
| Individual document size | 1 MiB |
| Function return value size | 16 MiB |

## Symptoms

@@ -121,13 +121,15 @@ Indexes like `by_foo` and `by_foo_and_bar` are usually redundant. You only need
// Bad: two indexes where one would do
defineTable({ team: v.id("teams"), user: v.id("users") })
.index("by_team", ["team"])
.index("by_team_and_user", ["team", "user"])
.index("by_team_and_user", ["team", "user"]);
```

```ts
// Good: single compound index serves both query patterns
defineTable({ team: v.id("teams"), user: v.id("users") })
.index("by_team_and_user", ["team", "user"])
defineTable({ team: v.id("teams"), user: v.id("users") }).index(
"by_team_and_user",
["team", "user"],
);
```

Exception: `.index("by_foo", ["foo"])` is really an index on `foo` + `_creationTime`, while `.index("by_foo_and_bar", ["foo", "bar"])` is on `foo` + `bar` + `_creationTime`. If you need results sorted by `foo` then `_creationTime`, you need the single-field index because the compound one would sort by `bar` first.
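The sort-order exception above can be sketched with plain arrays — a hypothetical illustration of the two index orderings, not Convex API:

```ts
// Rows share the same `foo`; only `bar` and `_creationTime` differ.
type Row = { foo: string; bar: string; creationTime: number };

const rows: Row[] = [
  { foo: "a", bar: "z", creationTime: 1 },
  { foo: "a", bar: "y", creationTime: 2 },
];

// Ordering of the compound index: foo, then bar, then _creationTime.
const byFooAndBar = [...rows].sort(
  (l, r) =>
    l.foo.localeCompare(r.foo) ||
    l.bar.localeCompare(r.bar) ||
    l.creationTime - r.creationTime,
);

// Ordering of the single-field index: foo, then _creationTime.
const byFoo = [...rows].sort(
  (l, r) => l.foo.localeCompare(r.foo) || l.creationTime - r.creationTime,
);

// byFooAndBar puts bar "y" first; byFoo puts creationTime 1 first.
```

The two orderings disagree whenever `bar` varies within a `foo` group, which is why the compound index cannot substitute for the single-field one when you need `foo` + `_creationTime` order.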
@@ -170,9 +172,7 @@ const ownerName = project.ownerName ?? "Unknown owner";
```ts
// Good: denormalized data is an optimization, not the only source of truth
const ownerName =
project.ownerName ??
(await ctx.db.get(project.ownerId))?.name ??
null;
project.ownerName ?? (await ctx.db.get(project.ownerId))?.name ?? null;
```

Bad lookup map pattern:
@@ -241,35 +241,33 @@ const projects = await ctx.db
.take(20);
```

## 4. Skip No-Op Writes

No-op writes still cost work in Convex:
## 4. Isolate Frequently-Updated Fields

- invalidation
- replication
- trigger execution
- downstream sync
Convex already no-ops unchanged writes. The invalidation problem here is real writes hitting documents that many queries subscribe to.

Before `patch` or `replace`, compare against the existing document and skip the write if nothing changed.
Move high-churn fields like `lastSeen`, counters, presence, or ephemeral status off widely-read documents when most readers do not need them.

Apply this across sibling writers too. One careful writer does not help much if three other mutations still patch unconditionally.
Apply this across sibling writers too. Splitting one write path does not help much if three other mutations still update the same widely-read document.

```ts
// Bad: patching unchanged values still triggers invalidation and downstream work
await ctx.db.patch(settings._id, {
theme: args.theme,
locale: args.locale,
// Bad: every presence heartbeat invalidates subscribers to the whole profile
await ctx.db.patch(user._id, {
name: args.name,
avatarUrl: args.avatarUrl,
lastSeen: Date.now(),
});
```

```ts
// Good: only write when something actually changed
if (settings.theme !== args.theme || settings.locale !== args.locale) {
await ctx.db.patch(settings._id, {
theme: args.theme,
locale: args.locale,
});
}
// Good: keep profile reads stable, move heartbeat updates to a separate document
await ctx.db.patch(user._id, {
name: args.name,
avatarUrl: args.avatarUrl,
});

await ctx.db.patch(presence._id, {
lastSeen: Date.now(),
});
```

## 5. Match Consistency To Read Patterns
40 changes: 14 additions & 26 deletions .agents/skills/convex-performance-audit/references/occ-conflicts.md
@@ -73,42 +73,30 @@ await ctx.db.patch(shardId, { count: shard!.count + 1 });

Aggregate the shards in a query or scheduled job when you need the total.
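The shard-and-aggregate idea can be sketched with plain objects — names here are illustrative, not the Convex API:

```ts
// Each shard document holds a partial count; writers pick one at random.
const NUM_SHARDS = 8;

type Shard = { shard: number; count: number };

// Concurrent increments land on different shards most of the time,
// so they rarely touch the same document and rarely conflict.
function pickShard(): number {
  return Math.floor(Math.random() * NUM_SHARDS);
}

// A query or scheduled job sums the shards to recover the total.
function totalCount(shards: Shard[]): number {
  return shards.reduce((sum, s) => sum + s.count, 0);
}
```

More shards means fewer conflicts but a slightly more expensive aggregation read; tune `NUM_SHARDS` to the write rate.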

### 3. Skip no-op writes
### 3. Move non-critical work to scheduled functions

Writes that do not change data still participate in conflict detection and trigger invalidation.
If a mutation does primary work plus secondary bookkeeping (analytics, non-critical notifications, cache warming), the bookkeeping extends the transaction's lifetime and read/write set.

```ts
// Bad: patches even when nothing changed
await ctx.db.patch(doc._id, { status: args.status });
```

```ts
// Good: only write when the value actually differs
if (doc.status !== args.status) {
await ctx.db.patch(doc._id, { status: args.status });
}
```

### 4. Move non-critical work to scheduled functions

If a mutation does primary work plus secondary bookkeeping (analytics, notifications, cache warming), the bookkeeping extends the transaction's lifetime and read/write set.

```ts
// Bad: analytics update in the same transaction as the user action
await ctx.db.patch(userId, { lastActiveAt: Date.now() });
await ctx.db.insert("analytics", { event: "action", userId, ts: Date.now() });
// Bad: canonical write and derived work happen in the same transaction
await ctx.db.patch(userId, { name: args.name });
await ctx.db.insert("userUpdateAnalytics", {
userId,
kind: "name_changed",
name: args.name,
});
```

```ts
// Good: schedule the bookkeeping so the primary transaction is smaller
await ctx.db.patch(userId, { lastActiveAt: Date.now() });
await ctx.scheduler.runAfter(0, internal.analytics.recordEvent, {
event: "action",
// Good: keep the primary write small, defer the analytics work
await ctx.db.patch(userId, { name: args.name });
await ctx.scheduler.runAfter(0, internal.users.recordNameChangeAnalytics, {
userId,
name: args.name,
});
```

### 5. Combine competing writes
### 4. Combine competing writes

If two mutations must update the same document atomically, consider whether they can be combined into a single mutation call from the client, reducing round trips and conflict windows.
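One way to sketch the combining step — a hypothetical helper, not a Convex API — is merging the two would-be patch payloads client-side before the single mutation call:

```ts
type Patch = Record<string, unknown>;

// Merge several patch payloads into one object for a single mutation call.
// Later patches win on key collisions, mirroring sequential writes.
function mergePatches(...patches: Patch[]): Patch {
  return Object.assign({}, ...patches);
}

// One mutation then applies the merged patch atomically, so there is a
// single transaction and no conflict window between the two updates.
const merged = mergePatches({ a: 1, b: 2 }, { a: 3 });
```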
