`docs/configuration.md`
API endpoints expect a Bearer JWT. Tokens are issued by `POST /auth/token` (subj
## LLM
The orchestrator (REACT loop), judge, whitelist agent, and daily-summary cron use an LLM. Set the provider, model, and the matching API key. For **local/custom** endpoints (e.g. [Ollama](https://ollama.ai)), set `provider: custom`, `model` (e.g. `ollama/llama2` or `llama2`), and **`custom_api_base`** (e.g. `http://localhost:11434`). For **[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers)** (serverless models from Groq, Together, etc.), set `provider: huggingface`, `model` (e.g. `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`), and **`huggingface_api_key`** or `HF_TOKEN`; the client uses the HF router URL as `api_base`.
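As a minimal sketch, each option can also be set through its `RADIOSHAQ_LLM__*` environment variable (listed in the table below). For example, a hypothetical local Ollama setup — the values here are illustrative, not defaults:

```shell
# Example only: point the app at a local Ollama endpoint.
export RADIOSHAQ_LLM__PROVIDER=custom
export RADIOSHAQ_LLM__MODEL=ollama/llama2
export RADIOSHAQ_LLM__CUSTOM_API_BASE=http://localhost:11434
```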
| Option | Env var | Default | Description |
|--------|---------|---------|-------------|
|`llm.provider`|`RADIOSHAQ_LLM__PROVIDER`|`mistral`| One of: `mistral`, `openai`, `anthropic`, `custom`, `huggingface`. |
|`llm.model`|`RADIOSHAQ_LLM__MODEL`|`mistral-large-latest`| Model name (e.g. `mistral-small-latest`, `gpt-4o`, `ollama/llama2`; for **huggingface**: `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`). |
|`llm.mistral_api_key`|`RADIOSHAQ_LLM__MISTRAL_API_KEY`|`null`| Mistral API key (or set `MISTRAL_API_KEY` if your code reads it). |
|`llm.openai_api_key`|`RADIOSHAQ_LLM__OPENAI_API_KEY`|`null`| OpenAI API key. |
|`llm.anthropic_api_key`|`RADIOSHAQ_LLM__ANTHROPIC_API_KEY`|`null`| Anthropic API key. |
|`llm.custom_api_base`|`RADIOSHAQ_LLM__CUSTOM_API_BASE`|`null`|**Custom provider base URL** (e.g. `http://localhost:11434` for Ollama). Passed to LiteLLM. |
|`llm.custom_api_key`|`RADIOSHAQ_LLM__CUSTOM_API_KEY`|`null`| Custom provider API key. |
|`llm.huggingface_api_key`|`RADIOSHAQ_LLM__HUGGINGFACE_API_KEY`|`null`|**Hugging Face** token for [Inference Providers](https://huggingface.co/docs/inference-providers) (or set `HF_TOKEN`). Token needs "Inference Providers" permission. |
|`llm.huggingface_api_base`|`RADIOSHAQ_LLM__HUGGINGFACE_API_BASE`|`null`| Optional; default `https://router.huggingface.co/v1` when provider is `huggingface`. |
|`llm.temperature`|`RADIOSHAQ_LLM__TEMPERATURE`|`0.1`| Sampling temperature (0–2). |
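Similarly, a hedged sketch of a Hugging Face Inference Providers setup using the variables above — the token value is a placeholder:

```shell
# Example only: route requests through Hugging Face Inference Providers.
export RADIOSHAQ_LLM__PROVIDER=huggingface
export RADIOSHAQ_LLM__MODEL="openai/gpt-oss-120b:groq"
export HF_TOKEN=hf_xxxxxxxx  # placeholder; token needs "Inference Providers" permission
export RADIOSHAQ_LLM__TEMPERATURE=0.1
```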
```shell
# Set in web-interface/.env or project root .env when developing the React UI.
# VITE_RADIOSHAQ_API=http://localhost:8000
# VITE_RADIOSHAQ_TOKEN=
# VITE_GOOGLE_MAPS_API_KEY= # Optional. Enables Map page, Radio field map, Transcripts "View on map". Restrict key by HTTP referrer in Google Cloud Console.
```