Common issues and solutions for opencode-codebase-index.
If you're unsure where to start, run this sequence first:
1. /status (check whether the index exists and the provider/model look right)
2. index_health_check (clean stale/orphaned index data)
3. /index force (full rebuild when status/health still looks wrong)
Then jump to the relevant section below for provider, build, performance, or branch-specific issues.
- OpenCode Hangs in Home Directory
- No Embedding Provider Available
- Rate Limiting Errors
- Index Corruption / Stale Results
- Embedding Provider Changed
- Native Module Build Failures
- Slow Indexing Performance
- Search Returns No Results
- Branch-Related Issues
Symptoms:
- OpenCode becomes unresponsive when opened in the home directory (~)
- New session starts but nothing happens when typing
- High CPU or memory usage
Cause: The plugin's file watcher attempts to watch the entire home directory, which contains hundreds of thousands of files.
Solutions:
The plugin now requires a project marker (.git, package.json, Cargo.toml, etc.) by default. If no marker is found, file watching and auto-indexing are disabled. You'll see this warning:
[codebase-index] Skipping file watching and auto-indexing: no project marker found
Set requireProjectMarker to false in your config:

```json
{
  "indexing": {
    "requireProjectMarker": false
  }
}
```

Warning: Only do this for specific directories you intend to index. Never disable this for your home directory.
The plugin looks for any of these files/directories:
.git, package.json, Cargo.toml, go.mod, pyproject.toml, setup.py, requirements.txt, Gemfile, composer.json, pom.xml, build.gradle, CMakeLists.txt, Makefile, .opencode
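As a rough sketch of how this kind of marker detection typically works (the walk-up-to-root behavior and function names here are assumptions for illustration, not the plugin's actual code):

```typescript
import * as path from "node:path";

// Markers treated as evidence of a project root (the list above).
const PROJECT_MARKERS = [
  ".git", "package.json", "Cargo.toml", "go.mod", "pyproject.toml",
  "setup.py", "requirements.txt", "Gemfile", "composer.json",
  "pom.xml", "build.gradle", "CMakeLists.txt", "Makefile", ".opencode",
];

// Walk from `startDir` toward the filesystem root and return the first
// directory containing any marker, or null if none is found. The `exists`
// check is injected so the logic can be exercised without touching disk.
function findProjectRoot(
  startDir: string,
  exists: (p: string) => boolean,
): string | null {
  let dir = path.resolve(startDir);
  while (true) {
    if (PROJECT_MARKERS.some((m) => exists(path.join(dir, m)))) return dir;
    const parent = path.dirname(dir);
    if (parent === dir) return null; // reached filesystem root: no marker
    dir = parent;
  }
}
```

A null result here corresponds to the warning above: no project marker, so watching and auto-indexing stay off.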
Error message:
No embedding provider available. Configure GitHub, OpenAI, Google, or Ollama.
Cause: The plugin cannot find any configured embedding provider credentials.
Solutions:
GitHub Copilot: no additional configuration needed. The plugin automatically detects Copilot credentials.

OpenAI:

```sh
export OPENAI_API_KEY=sk-...
```

Or set it in your shell profile (~/.bashrc, ~/.zshrc).

Google:

```sh
export GOOGLE_API_KEY=...
```

Ollama:

```sh
# Install Ollama from https://ollama.ai
# Then pull the embedding model:
ollama pull nomic-embed-text
```

```json
// .opencode/codebase-index.json
{
  "embeddingProvider": "ollama"
}
```

Run /status in OpenCode to see which provider is detected.
Error messages:
429 Too Many Requests
Rate limit exceeded
Too many requests
Cause: The embedding provider is rejecting requests due to rate limits.
Solutions:
GitHub Copilot has strict rate limits (~15 requests/minute). The plugin automatically:
- Uses concurrency of 1
- Adds 4-second delays between requests
- Retries with exponential backoff
If still hitting limits:
- Wait 1-2 minutes and retry
- Switch to a different provider for large codebases:
{ "embeddingProvider": "ollama" }
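The pacing described above (serial requests, delays, exponential backoff on 429s) can be sketched as follows; the function names, error shape, and timing constants are illustrative assumptions, not the plugin's actual API:

```typescript
// Illustrative backoff schedule: exponential growth with a cap.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Retry an embedding call, backing off only on 429-style rate-limit errors.
// `embed` is a stand-in for whatever call the provider integration makes.
async function embedWithRetry(
  embed: (text: string) => Promise<number[]>,
  text: string,
  maxRetries = 5,
): Promise<number[]> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await embed(text);
    } catch (err: any) {
      if (err?.status !== 429 || attempt >= maxRetries) throw err;
      await sleep(backoffDelayMs(attempt)); // 1s, 2s, 4s, ... capped at 30s
    }
  }
}
```

The key property is that non-rate-limit errors fail fast, while 429s wait progressively longer before retrying.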
OpenAI has generous limits, but if you hit them:
- Check your OpenAI account tier (free tier has lower limits)
- Consider upgrading to a paid tier
- Use Ollama for initial indexing, then switch back
Similar to OpenAI. Check your quota at Google Cloud Console.
Use Ollama locally - no rate limits:
```sh
ollama pull nomic-embed-text
```

```json
{ "embeddingProvider": "ollama" }
```

Symptoms:
- Search returns deleted files
- Results don't match current code
- "Chunk not found" errors
Solutions:
Run /status, then ask the agent to run index_health_check to remove orphaned entries.
Ask the agent:
"Force reindex the codebase"
Or run /index force.
Delete the entire index directory:
```sh
rm -rf .opencode/index/
```

The next /index will rebuild from scratch.
Error message:
Index incompatible: <reason>. Run index with force=true to rebuild.
Cause: The index was built with a different embedding provider or model than what's currently configured. Embeddings from different providers have different dimensions and are not compatible.
Common scenarios:
- Switched from GitHub Copilot to OpenAI
- Changed Ollama embedding model
- Updated to a new version of the embedding model
Solutions:
Ask the agent:
"Force reindex the codebase"
Or run /index with the force option. This will:
- Delete all existing embeddings
- Re-index all files with the new provider
Different embedding providers produce vectors with different dimensions:
| Provider | Model | Dimensions |
|---|---|---|
| GitHub Copilot | text-embedding-3-small | 1536 |
| OpenAI | text-embedding-3-small | 1536 |
| Google | text-embedding-004 | 768 |
| Ollama | nomic-embed-text | 768 |
Mixing embeddings from different providers would produce garbage search results, so the plugin refuses to search until you rebuild the index.
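The underlying reason is that similarity scoring is only defined for vectors of equal length: an index built with 1536-dimension embeddings cannot be scored against a 768-dimension query vector. A minimal cosine-similarity sketch (illustrative, not the plugin's actual search code) makes the failure mode concrete:

```typescript
// Cosine similarity of two embedding vectors. Mismatched dimensions are
// rejected up front, mirroring the "Index incompatible" refusal above.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error(`Index incompatible: dimensions ${a.length} vs ${b.length}`);
  }
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Even if the dimensions happened to match across providers, the vector spaces would still be unrelated, so a rebuild is required either way.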
Run /status to see what provider/model the index was built with.
Error messages:
Error loading native module
NAPI_RS error
dyld: Library not loaded
Cause: The pre-built native binary for your platform is missing or incompatible.
Solutions:
Pre-built binaries are available for:
- macOS x64 (Intel)
- macOS arm64 (Apple Silicon)
- Linux x64 (glibc)
- Linux arm64 (glibc)
- Windows x64
Requires the Rust toolchain:

```sh
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Rebuild native module
cd native
cargo build --release
npx napi build --release --platform
```

If on Alpine Linux or other musl-based systems, you need to build from source:

```sh
# Install musl target
rustup target add x86_64-unknown-linux-musl

# Build
cd native
cargo build --release --target x86_64-unknown-linux-musl
```

Symptoms:
- Initial indexing takes very long
- Progress seems stuck
Causes and Solutions:
Cloud providers have network latency and rate limits.
Solution: Use Ollama locally:
```sh
ollama pull nomic-embed-text
```

```json
{ "embeddingProvider": "ollama" }
```

Files over 1MB are skipped by default, but many medium-sized files can still be slow.
Solution: Increase chunk limits or enable semantic-only mode:
```json
{
  "indexing": {
    "semanticOnly": true,
    "maxChunksPerFile": 50
  }
}
```

Copilot has 4-second delays between requests.
Solution: For initial indexing, use a faster provider, then switch back:
```json
{ "embeddingProvider": "openai" }
```

Run /status to see current index stats, and estimate remaining work by asking the agent:

"Estimate indexing cost"
Symptoms:
- Queries return empty results
- "No matches found" for queries that should match
Solutions:
Run /status and verify the index exists and has chunks. If there is no index yet, run /index to index the codebase.
Semantic search works best with descriptive queries:
| Bad Query | Better Query |
|---|---|
| "auth" | "authentication middleware that validates JWT tokens" |
| "error" | "error handling for failed API calls" |
| "user" | "function that creates new user accounts" |
Lower the minimum score:
```json
{
  "search": {
    "minScore": 0.05
  }
}
```

Check if your files are being excluded by .gitignore or size limits. Ask the agent:

"Run /index in verbose mode"
This shows which files were skipped and why.
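As a hedged sketch of the skip decisions described in this section (the 1MB default comes from the performance section above; the function name is hypothetical, and real .gitignore matching is far richer than the prefix check shown here):

```typescript
// Decide whether a file would be skipped, returning a human-readable
// reason or null when the file would be indexed.
const MAX_FILE_BYTES = 1024 * 1024; // 1MB default size limit

function skipReason(
  sizeBytes: number,
  relPath: string,
  ignorePrefixes: string[], // simplified stand-in for .gitignore rules
): string | null {
  if (sizeBytes > MAX_FILE_BYTES) return "file exceeds 1MB size limit";
  if (ignorePrefixes.some((p) => relPath.startsWith(p))) {
    return "matched ignore rule";
  }
  return null; // file will be indexed
}
```

Verbose mode surfaces exactly this kind of per-file reason, which is usually enough to explain an unexpectedly empty index.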
Cause: The branch catalog may not have updated.
Solution:
- Check current branch detection: /status
- Re-index to update the branch catalog: /index
Cause: Detached HEAD or unusual git state.
Solution: Check your git state:
```sh
git status
cat .git/HEAD
```

The plugin reads .git/HEAD directly. If you're in a detached HEAD state, it uses the commit SHA as the "branch" name.
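The .git/HEAD file holds either "ref: refs/heads/&lt;branch&gt;" on a branch or a bare commit SHA when detached, so the branch-label logic described above can be sketched as (a minimal illustration; the actual plugin code is not shown in this guide):

```typescript
// Derive a branch label from the contents of .git/HEAD:
// "ref: refs/heads/main"  -> "main"
// "<commit sha>"          -> the SHA itself (detached HEAD)
function branchFromHead(headContents: string): string {
  const head = headContents.trim();
  const match = head.match(/^ref: refs\/heads\/(.+)$/);
  return match ? match[1] : head;
}
```

This is why a detached HEAD shows a SHA where you would expect a branch name in /status output.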
Cause: File watcher may not be running.
Solution: Enable file watching:
```json
{
  "indexing": {
    "watchFiles": true
  }
}
```

Or manually trigger a re-index after switching branches:
/index
If none of these solutions work:
- Check logs: Look for error messages in the OpenCode output
- Verbose indexing: Run with verbose mode to see detailed progress
- GitHub Issues: Open an issue with:
- Error message
- OS and Node.js version
- Provider being used
- Steps to reproduce
| Problem | Quick Fix |
|---|---|
| Hangs in home dir | Ensure indexing.requireProjectMarker is true (default) |
| No provider | export OPENAI_API_KEY=... or use Ollama |
| Rate limited | Switch to Ollama for large codebases |
| Stale results | Run index_health_check, then /index force if needed |
| Provider changed | Run /index force to rebuild with current provider/model |
| Slow indexing | Use Ollama locally |
| No results | Run /index first, use descriptive queries |
| Native module error | Rebuild with Rust toolchain |