Commit e3d3f5b

Merge remote-tracking branch 'origin/Resolve-conflict'
2 parents: a98ed4f + c874abb

File tree

1 file changed: 24 additions, 0 deletions

README.md

Lines changed: 24 additions & 0 deletions
@@ -804,6 +804,10 @@ ollama pull nomic-embed-text
 }
 ```
 
+The built-in `ollama` provider uses Ollama's native `/api/embeddings` endpoint and is the simplest setup when you want to use `nomic-embed-text`.
+
+If you want to use a different Ollama embedding model through its OpenAI-compatible API, use the `custom` provider instead and set `customProvider.baseUrl` to `http://127.0.0.1:11434/v1` so the plugin calls `.../v1/embeddings`.
+
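The endpoint distinction in the paragraph above can be sketched in a few lines. This is an illustrative sketch, not the plugin's actual code; the function and constant names are made up for the example.

```python
# Sketch of how the two provider modes reach different Ollama endpoints.
# Names here are hypothetical, not the plugin's API.

def embeddings_url(base_url: str) -> str:
    """Custom (OpenAI-compatible) mode: '/embeddings' is appended to baseUrl."""
    return base_url.rstrip("/") + "/embeddings"

# Built-in `ollama` provider: fixed native endpoint.
NATIVE_ENDPOINT = "http://127.0.0.1:11434/api/embeddings"

# Custom provider: baseUrl must already include the /v1 prefix,
# otherwise the joined URL misses the OpenAI-compatible path.
print(embeddings_url("http://127.0.0.1:11434/v1"))
# -> http://127.0.0.1:11434/v1/embeddings
```

This is why the `/v1` suffix matters: without it, the joined URL would hit `http://127.0.0.1:11434/embeddings`, which Ollama does not serve.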
 ## 📈 Performance
 
 The plugin is built for speed with a Rust native module (`tree-sitter`, `usearch`, SQLite). In practice, indexing and retrieval remain fast enough for interactive use on medium/large repositories.
@@ -863,6 +867,26 @@ Works with any server that implements the OpenAI `/v1/embeddings` API format (ll
 ```
 Required fields: `baseUrl`, `model`, `dimensions` (positive integer). Optional: `apiKey`, `maxTokens`, `timeoutMs` (default: 30000), and `maxBatchSize` (or `max_batch_size`) to cap inputs per `/embeddings` request for servers like text-embeddings-inference. `{env:VAR_NAME}` placeholders are resolved before config validation for fields that are actually used; resolution throws if the referenced environment variable is missing or malformed.
 
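The `maxBatchSize` capping described above can be sketched as simple chunking: inputs are split so that no single `/embeddings` request carries more than the configured number of texts. A minimal sketch, assuming this behavior; the function name is illustrative, not the plugin's API.

```python
# Hypothetical sketch of maxBatchSize capping: split the input texts into
# chunks so each /embeddings request stays within the configured limit.

def batch_inputs(inputs: list[str], max_batch_size: int) -> list[list[str]]:
    if max_batch_size < 1:
        raise ValueError("maxBatchSize must be a positive integer")
    return [inputs[i:i + max_batch_size]
            for i in range(0, len(inputs), max_batch_size)]

# Eight texts with maxBatchSize = 3 -> three requests of 3, 3, and 2 inputs.
chunks = [f"chunk-{n}" for n in range(8)]
print([len(b) for b in batch_inputs(chunks, 3)])  # -> [3, 3, 2]
```

Servers like text-embeddings-inference reject oversized batches outright, which is why capping happens client-side before any request is sent.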
+**Custom Ollama models via OpenAI-compatible API**
+If you are running Ollama locally and want to use an embedding model other than the built-in `ollama` setup, point the custom provider at Ollama's OpenAI-compatible base URL with the `/v1` suffix:
+
+```json
+{
+  "embeddingProvider": "custom",
+  "customProvider": {
+    "baseUrl": "http://127.0.0.1:11434/v1",
+    "model": "qwen3-embedding:0.6b",
+    "dimensions": 1024,
+    "apiKey": "ollama"
+  }
+}
+```
+
+Notes:
+- The plugin appends `/embeddings`, so `baseUrl` should be `http://127.0.0.1:11434/v1`, not just `http://127.0.0.1:11434`.
+- Ollama ignores the API key, but some OpenAI-compatible clients expect one, so a placeholder like `"ollama"` is fine.
+- Make sure `dimensions` matches the actual output size of the model you pulled locally.
+
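The `{env:VAR_NAME}` placeholder resolution mentioned in this section can be sketched as a regex substitution over string config fields that raises when a referenced variable is unset. This is a hedged sketch of the described behavior, not the plugin's implementation; the function name and variable name are made up for the example.

```python
# Hypothetical sketch of {env:VAR_NAME} resolution: placeholders in used
# config fields are expanded before validation; a missing variable raises.
import os
import re

_PLACEHOLDER = re.compile(r"\{env:([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_placeholders(value: str) -> str:
    def _lookup(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise ValueError(f"environment variable {name} is not set")
        return os.environ[name]
    return _PLACEHOLDER.sub(_lookup, value)

os.environ["EMBEDDINGS_KEY"] = "ollama"  # stand-in value for the demo
config = {"apiKey": "{env:EMBEDDINGS_KEY}"}
print(resolve_placeholders(config["apiKey"]))  # -> ollama
```

Keeping secrets like `apiKey` in environment variables rather than in the JSON config is the intended use of this mechanism.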
 ## ⚠️ Tradeoffs
 
 Be aware of these characteristics:
