Problem
Running `@zilliz/claude-context-mcp` as a local MCP server in tools like Opencode can cause high aggregate CPU usage when multiple workspaces/sessions are open at once.
In my case, each workspace spawned its own `claude-context-mcp` process, and each process appeared to:
- start background sync automatically on server startup
- perform an initial sync shortly after boot
- periodically rescan all indexed codebases from the shared snapshot
- contend on the shared snapshot file/lock
This results in duplicated work across processes and sustained CPU usage.
What I observed
A local Opencode config used:
```json
"claude-context": {
  "type": "local",
  "command": [
    "nix", "shell", "github:nixos/nixpkgs/nixos-25.05#nodejs",
    "-c", "npx", "-y", "@zilliz/claude-context-mcp@latest"
  ]
}
```
With several workspaces open, I saw many `claude-context-mcp` processes active at the same time. They all shared the same snapshot state under `~/.context/mcp-codebase-snapshot.json`, so each process attempted to sync the same indexed repos.
The CPU issue seems to come from a combination of:
- `startBackgroundSync()` being enabled unconditionally on startup
- `handleSyncIndex()` iterating over all indexed codebases from the shared snapshot state
- `reindexByChange()` / `FileSynchronizer.checkForChanges()` doing recursive full-file hashing/rescans
- snapshot lock contention, especially because the lock retry path appears to busy-wait
Suggested improvements
1. Make background sync opt-in
Add a config/env flag to disable background sync entirely, for example:
- `CLAUDE_CONTEXT_BACKGROUND_SYNC=false`
- `CLAUDE_CONTEXT_SYNC_INTERVAL_MS=...`
Many users likely want explicit indexing/search only, not automatic periodic reindexing.
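As a rough sketch of how such a gate could work (the two env var names are the proposals from this issue, not options the package exposes today, and `startBackgroundSync()` is referenced from the behavior described above):

```typescript
// Gate background sync behind proposed env flags.
// CLAUDE_CONTEXT_BACKGROUND_SYNC / CLAUDE_CONTEXT_SYNC_INTERVAL_MS are
// suggested names from this issue, not existing configuration.
function backgroundSyncEnabled(): boolean {
  // Preserve today's default (enabled) unless explicitly disabled.
  return process.env.CLAUDE_CONTEXT_BACKGROUND_SYNC !== "false";
}

function syncIntervalMs(defaultMs = 5 * 60 * 1000): number {
  // Fall back to the default when the variable is unset or not a
  // positive number.
  const parsed = Number(process.env.CLAUDE_CONTEXT_SYNC_INTERVAL_MS);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : defaultMs;
}

// At server startup, something like:
// if (backgroundSyncEnabled()) startBackgroundSync(syncIntervalMs());
```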
2. Support a real shared/remote deployment model
Right now the easiest setup is local stdio MCP per client session, which encourages N identical server instances.
It would help to support/document:
- one long-lived shared MCP server
- remote transport for multiple clients
- guidance for hosting as a singleton service instead of per-workspace process
That would avoid duplicated indexing/sync work across sessions.
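For illustration, if the server offered a remote transport, each workspace's config could point at one shared instance instead of spawning its own process. This is a hypothetical sketch only; the `url` is a placeholder and remote transport support is the feature being requested, not something that exists today:

```json
"claude-context": {
  "type": "remote",
  "url": "http://localhost:3000/mcp"
}
```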
3. Remove busy-wait lock behavior
If the snapshot lock cannot be acquired, avoid CPU-spinning while waiting. A non-busy retry or proper file lock would help a lot when multiple MCP instances are active.
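A minimal sketch of what a non-busy retry could look like, using exclusive file creation plus exponential backoff (`acquireSnapshotLock` is a hypothetical helper for illustration, not the library's actual lock API):

```typescript
import { promises as fs } from "fs";

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// Try to create the lock file exclusively; sleep between attempts
// instead of spinning, and give up after maxWaitMs.
async function acquireSnapshotLock(
  lockPath: string,
  maxWaitMs = 5000
): Promise<boolean> {
  let delay = 50; // start small...
  const deadline = Date.now() + maxWaitMs;
  while (true) {
    try {
      // Flag 'wx' fails if the file already exists, making creation
      // an atomic lock attempt.
      await fs.writeFile(lockPath, String(process.pid), { flag: "wx" });
      return true;
    } catch {
      if (Date.now() + delay > deadline) return false;
      await sleep(delay); // yield the CPU instead of busy-waiting
      delay = Math.min(delay * 2, 1000); // ...and cap the backoff
    }
  }
}
```

Even this simple scheme keeps N waiting processes near-idle instead of all spinning on the same file.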
4. Consider lighter change detection
If possible, make periodic sync cheaper than recursive read-and-hash of every tracked file in every indexed repo for every process.
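One cheap option is to compare mtime+size against the previous scan and only re-hash/re-index files that actually changed. A sketch under that assumption (`changedFiles` is a hypothetical helper, not `FileSynchronizer`'s real implementation):

```typescript
import { promises as fs } from "fs";
import * as path from "path";

type Stamp = { mtimeMs: number; size: number };

// Recursively collect files whose mtime or size differs from the last
// scan; only these would need re-hashing/re-indexing.
async function changedFiles(
  root: string,
  previous: Map<string, Stamp>
): Promise<string[]> {
  const changed: string[] = [];
  for (const entry of await fs.readdir(root, { withFileTypes: true })) {
    const full = path.join(root, entry.name);
    if (entry.isDirectory()) {
      changed.push(...(await changedFiles(full, previous)));
    } else if (entry.isFile()) {
      const st = await fs.stat(full);
      const prev = previous.get(full);
      if (!prev || prev.mtimeMs !== st.mtimeMs || prev.size !== st.size) {
        changed.push(full); // candidate for re-indexing
        previous.set(full, { mtimeMs: st.mtimeMs, size: st.size });
      }
    }
  }
  return changed;
}
```

mtime granularity varies by filesystem, so content hashing could still be kept as a second pass over only the flagged files; the win is skipping the full recursive read-and-hash on every sync tick.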
5. Clarify which deployment model is recommended
The README could explicitly describe:
- local per-session stdio use
- shared single-host daemon use
- remote/service deployment
- tradeoffs for CPU/memory when many clients are open
Why this matters
The current behavior is manageable with one MCP instance, but scales poorly when editor/agent tools open multiple workspaces. Even if each process uses a modest amount of CPU, the aggregate load becomes very noticeable.
Happy to help test a flag or a shared-daemon-oriented setup if that would be useful.