Commit fb6f8c7 (1 parent: 9ab7010)

Revert "adds submission to gemini hackathon"

This reverts commit 9ab7010.

74 files changed: 60 additions & 4169 deletions


docs/configuration.md
Lines changed: 3 additions & 4 deletions

@@ -121,18 +121,17 @@ API endpoints expect a Bearer JWT. Tokens are issued by `POST /auth/token` (subj
 
 ## LLM
 
-The orchestrator (REACT loop), judge, whitelist agent, and daily-summary cron use an LLM. Set the provider, model, and the matching API key. For **local/custom** endpoints (e.g. [Ollama](https://ollama.ai)), set `provider: custom`, `model` (e.g. `ollama/llama2` or `llama2`), and **`custom_api_base`** (e.g. `http://localhost:11434`). For **[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers)** (serverless models from Groq, Together, etc.), set `provider: huggingface`, `model` (e.g. `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`), and **`huggingface_api_key`** or `HF_TOKEN`; the client uses the HF router URL as `api_base`. For **Google Gemini** (Google AI Studio), set `provider: gemini`, `model` (e.g. `gemini-2.5-flash`, `gemini-2.5-pro`), and **`gemini_api_key`** or `GEMINI_API_KEY`.
+The orchestrator (REACT loop), judge, whitelist agent, and daily-summary cron use an LLM. Set the provider, model, and the matching API key. For **local/custom** endpoints (e.g. [Ollama](https://ollama.ai)), set `provider: custom`, `model` (e.g. `ollama/llama2` or `llama2`), and **`custom_api_base`** (e.g. `http://localhost:11434`). For **[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers)** (serverless models from Groq, Together, etc.), set `provider: huggingface`, `model` (e.g. `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`), and **`huggingface_api_key`** or `HF_TOKEN`; the client uses the HF router URL as `api_base`.
 
 | Option | Env var | Default | Description |
 |--------|---------|---------|-------------|
-| `llm.provider` | `RADIOSHAQ_LLM__PROVIDER` | `mistral` | One of: `mistral`, `openai`, `anthropic`, `custom`, `huggingface`, `gemini`. |
-| `llm.model` | `RADIOSHAQ_LLM__MODEL` | `mistral-large-latest` | Model name (e.g. `mistral-small-latest`, `gpt-4o`, `ollama/llama2`; for **huggingface**: `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`; for **gemini**: `gemini-2.5-flash`, `gemini-2.5-pro`). |
+| `llm.provider` | `RADIOSHAQ_LLM__PROVIDER` | `mistral` | One of: `mistral`, `openai`, `anthropic`, `custom`, `huggingface`. |
+| `llm.model` | `RADIOSHAQ_LLM__MODEL` | `mistral-large-latest` | Model name (e.g. `mistral-small-latest`, `gpt-4o`, `ollama/llama2`; for **huggingface**: `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`). |
 | `llm.mistral_api_key` | `RADIOSHAQ_LLM__MISTRAL_API_KEY` | `null` | Mistral API key (or set `MISTRAL_API_KEY` if your code reads it). |
 | `llm.openai_api_key` | `RADIOSHAQ_LLM__OPENAI_API_KEY` | `null` | OpenAI API key. |
 | `llm.anthropic_api_key` | `RADIOSHAQ_LLM__ANTHROPIC_API_KEY` | `null` | Anthropic API key. |
 | `llm.custom_api_base` | `RADIOSHAQ_LLM__CUSTOM_API_BASE` | `null` | **Custom provider base URL** (e.g. `http://localhost:11434` for Ollama). Passed to LiteLLM. |
 | `llm.custom_api_key` | `RADIOSHAQ_LLM__CUSTOM_API_KEY` | `null` | Custom provider API key. |
-| `llm.gemini_api_key` | `RADIOSHAQ_LLM__GEMINI_API_KEY` | `null` | **Gemini** API key (Google AI Studio; or set `GEMINI_API_KEY`). |
 | `llm.huggingface_api_key` | `RADIOSHAQ_LLM__HUGGINGFACE_API_KEY` | `null` | **Hugging Face** token for [Inference Providers](https://huggingface.co/docs/inference-providers) (or set `HF_TOKEN`). Token needs "Inference Providers" permission. |
 | `llm.huggingface_api_base` | `RADIOSHAQ_LLM__HUGGINGFACE_API_BASE` | `null` | Optional; default `https://router.huggingface.co/v1` when provider is `huggingface`. |
 | `llm.temperature` | `RADIOSHAQ_LLM__TEMPERATURE` | `0.1` | Sampling temperature (0–2). |
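The `RADIOSHAQ_LLM__*` names in the table follow a double-underscore nesting convention (section, then field). A minimal sketch of how such names could map onto a nested config dict — illustrative only, assuming a pydantic-settings-style `__` delimiter rather than the project's actual loader:

```python
import os


def parse_nested_env(prefix: str, delimiter: str = "__") -> dict:
    """Collect PREFIX_SECTION__FIELD env vars into a nested dict.

    Hypothetical helper mirroring the RADIOSHAQ_LLM__* naming above;
    not the project's real settings loader.
    """
    config: dict = {}
    full_prefix = prefix + "_"
    for name, value in os.environ.items():
        if not name.startswith(full_prefix):
            continue
        # e.g. "RADIOSHAQ_LLM__PROVIDER" -> ["llm", "provider"]
        path = name[len(full_prefix):].lower().split(delimiter)
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return config


os.environ["RADIOSHAQ_LLM__PROVIDER"] = "huggingface"
os.environ["RADIOSHAQ_LLM__MODEL"] = "Qwen/Qwen2.5-7B-Instruct-1M"
cfg = parse_nested_env("RADIOSHAQ")
# cfg["llm"] now holds {"provider": "huggingface", "model": "Qwen/Qwen2.5-7B-Instruct-1M"}
```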

docs/plan-gemini-api-support.md

Lines changed: 0 additions & 310 deletions
This file was deleted.

docs/reference/.env.example
Lines changed: 1 addition & 3 deletions

@@ -59,19 +59,17 @@ POSTGRES_PASSWORD=radioshaq
 # RADIOSHAQ_LLM__ANTHROPIC_API_KEY=
 # RADIOSHAQ_LLM__CUSTOM_API_BASE=
 # RADIOSHAQ_LLM__CUSTOM_API_KEY=
-# RADIOSHAQ_LLM__GEMINI_API_KEY= # For provider: gemini (Google AI Studio)
 # RADIOSHAQ_LLM__HUGGINGFACE_API_KEY= # For provider: huggingface (Inference Providers)
 # RADIOSHAQ_LLM__HUGGINGFACE_API_BASE= # Optional; default https://router.huggingface.co/v1
 # RADIOSHAQ_LLM__TEMPERATURE=0.1
 # RADIOSHAQ_LLM__MAX_TOKENS=4096
 # RADIOSHAQ_LLM__TIMEOUT_SECONDS=60.0
 # RADIOSHAQ_LLM__MAX_RETRIES=3
 # RADIOSHAQ_LLM__RETRY_DELAY_SECONDS=1.0
-# Alternative: some code also reads MISTRAL_API_KEY / OPENAI_API_KEY / HF_TOKEN / GEMINI_API_KEY directly
+# Alternative: some code also reads MISTRAL_API_KEY / OPENAI_API_KEY / HF_TOKEN directly
 # MISTRAL_API_KEY=
 # OPENAI_API_KEY=
 # HF_TOKEN= # Hugging Face token with "Inference Providers" permission (when provider is huggingface)
-# GEMINI_API_KEY=
 
 # -----------------------------------------------------------------------------
 # Memory (per-callsign memory, Hindsight, daily summaries)

docs/reference/config.example.yaml
Lines changed: 1 addition & 2 deletions

@@ -44,14 +44,13 @@ jwt:
 # LLM (set API key in env or here; prefer env for secrets)
 # -----------------------------------------------------------------------------
 llm:
-  provider: mistral # mistral | openai | anthropic | custom | huggingface | gemini
+  provider: mistral # mistral | openai | anthropic | custom
   model: mistral-large-latest
   mistral_api_key: null
   openai_api_key: null
   anthropic_api_key: null
   custom_api_base: null
   custom_api_key: null
-  gemini_api_key: null # For provider: gemini; or set GEMINI_API_KEY
   temperature: 0.1
   max_tokens: 4096
   timeout_seconds: 60.0

radioshaq/.env.example
Lines changed: 0 additions & 10 deletions

@@ -59,7 +59,6 @@ POSTGRES_PASSWORD=radioshaq
 # RADIOSHAQ_LLM__ANTHROPIC_API_KEY=
 # RADIOSHAQ_LLM__CUSTOM_API_BASE=
 # RADIOSHAQ_LLM__CUSTOM_API_KEY=
-# RADIOSHAQ_LLM__GEMINI_API_KEY=
 # RADIOSHAQ_LLM__TEMPERATURE=0.1
 # RADIOSHAQ_LLM__MAX_TOKENS=4096
 # RADIOSHAQ_LLM__TIMEOUT_SECONDS=60.0
@@ -72,7 +71,6 @@ POSTGRES_PASSWORD=radioshaq
 # MISTRAL_API_KEY=
 # OPENAI_API_KEY=
 # HF_TOKEN=
-# GEMINI_API_KEY=
 
 # -----------------------------------------------------------------------------
 # Memory (per-callsign memory, Hindsight, daily summaries)
@@ -267,11 +265,3 @@ POSTGRES_PASSWORD=radioshaq
 # RADIOSHAQ_TTS__KOKORO_SPEED=1.0
 # ElevenLabs API key (required when provider=elevenlabs)
 # ELEVENLABS_API_KEY=
-
-# -----------------------------------------------------------------------------
-# Web UI (Vite) – used when running npm run dev or serving built assets
-# -----------------------------------------------------------------------------
-# Set in web-interface/.env or project root .env when developing the React UI.
-# VITE_RADIOSHAQ_API=http://localhost:8000
-# VITE_RADIOSHAQ_TOKEN=
-# VITE_GOOGLE_MAPS_API_KEY= # Optional. Enables Map page, Radio field map, Transcripts "View on map". Restrict key by HTTP referrer in Google Cloud Console.

radioshaq/config.example.yaml
Lines changed: 1 addition & 2 deletions

@@ -44,14 +44,13 @@ jwt:
 # LLM (set API key in env or here; prefer env for secrets)
 # -----------------------------------------------------------------------------
 llm:
-  provider: mistral # mistral | openai | anthropic | custom | huggingface | gemini
+  provider: mistral # mistral | openai | anthropic | custom | huggingface
   model: mistral-large-latest
   mistral_api_key: null
   openai_api_key: null
   anthropic_api_key: null
   custom_api_base: null
   custom_api_key: null
-  gemini_api_key: null # For provider: gemini; or set GEMINI_API_KEY
   huggingface_api_key: null # For provider: huggingface; or set HF_TOKEN
   huggingface_api_base: null # Optional; default https://router.huggingface.co/v1
   temperature: 0.1

radioshaq/docs/demo-env-profiles.md

Lines changed: 0 additions & 42 deletions
This file was deleted.

radioshaq/infrastructure/local/docker-compose.yml
Lines changed: 1 addition & 1 deletion

@@ -152,7 +152,7 @@ services:
       - HINDSIGHT_API_LLM_PROVIDER=${RADIOSHAQ_LLM__PROVIDER:-${HINDSIGHT_API_LLM_PROVIDER:-openai}}
       - HINDSIGHT_API_LLM_MODEL=${RADIOSHAQ_LLM__MODEL:-${HINDSIGHT_API_LLM_MODEL:-gpt-4o-mini}}
       # API key: first non-empty of RadioShaq keys, then generic keys
-      - HINDSIGHT_API_LLM_API_KEY=${RADIOSHAQ_LLM__OPENAI_API_KEY:-${OPENAI_API_KEY:-${RADIOSHAQ_LLM__MISTRAL_API_KEY:-${MISTRAL_API_KEY:-${RADIOSHAQ_LLM__ANTHROPIC_API_KEY:-${ANTHROPIC_API_KEY:-${RADIOSHAQ_LLM__GEMINI_API_KEY:-${GEMINI_API_KEY:-${RADIOSHAQ_LLM__CUSTOM_API_KEY:-${HINDSIGHT_API_LLM_API_KEY:-}}}}}}}}}}
+      - HINDSIGHT_API_LLM_API_KEY=${RADIOSHAQ_LLM__OPENAI_API_KEY:-${OPENAI_API_KEY:-${RADIOSHAQ_LLM__MISTRAL_API_KEY:-${MISTRAL_API_KEY:-${RADIOSHAQ_LLM__ANTHROPIC_API_KEY:-${ANTHROPIC_API_KEY:-${RADIOSHAQ_LLM__CUSTOM_API_KEY:-${HINDSIGHT_API_LLM_API_KEY:-}}}}}}}}
       # Custom base URL (e.g. OpenAI-compatible or Mistral endpoint)
       - HINDSIGHT_API_LLM_BASE_URL=${RADIOSHAQ_LLM__CUSTOM_API_BASE:-${HINDSIGHT_API_LLM_BASE_URL:-}}
       # Same Postgres as RadioShaq (postgres service, db radioshaq; pgvector in postgres/init/02-pgvector.sql)
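The nested `${A:-${B:-…}}` expansion in the compose file picks the first variable that is set and non-empty. The same "first non-empty wins" rule can be sketched in Python (hypothetical helper, not project code):

```python
import os


def first_non_empty(*names: str, default: str = "") -> str:
    """Return the value of the first env var in `names` that is set and
    non-empty, mimicking nested ${A:-${B:-...}} expansion in compose."""
    for name in names:
        value = os.environ.get(name, "")
        if value:
            return value
    return default


# Simulate an environment where only the generic Mistral key is set.
os.environ.pop("RADIOSHAQ_LLM__OPENAI_API_KEY", None)
os.environ.pop("OPENAI_API_KEY", None)
os.environ.pop("RADIOSHAQ_LLM__MISTRAL_API_KEY", None)
os.environ["MISTRAL_API_KEY"] = "sk-demo"

key = first_non_empty(
    "RADIOSHAQ_LLM__OPENAI_API_KEY",
    "OPENAI_API_KEY",
    "RADIOSHAQ_LLM__MISTRAL_API_KEY",
    "MISTRAL_API_KEY",
)
# key == "sk-demo"
```

Note that `${VAR:-fallback}` falls through when `VAR` is unset *or* empty, which is why the helper treats `""` the same as missing.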

radioshaq/radioshaq/api/routes/config_routes.py
Lines changed: 0 additions & 1 deletion

@@ -24,7 +24,6 @@
     "anthropic_api_key",
     "custom_api_key",
     "huggingface_api_key",
-    "gemini_api_key",
 }
 
 
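The set being edited here apparently lists secret field names, presumably so the config route can redact them before returning settings to clients. A hypothetical sketch of that pattern (the `redact` helper is illustrative, not the route's actual code):

```python
# Field names treated as secrets, matching the set in config_routes.py
# after this revert (gemini_api_key removed).
SECRET_KEYS = {
    "mistral_api_key",
    "openai_api_key",
    "anthropic_api_key",
    "custom_api_key",
    "huggingface_api_key",
}


def redact(settings: dict) -> dict:
    """Replace non-empty secret values with a placeholder.

    Hypothetical helper; the route's real response shape may differ.
    """
    return {
        key: ("***" if key in SECRET_KEYS and value else value)
        for key, value in settings.items()
    }


safe = redact({
    "provider": "mistral",
    "mistral_api_key": "sk-123",
    "openai_api_key": None,  # unset keys pass through untouched
})
# safe == {"provider": "mistral", "mistral_api_key": "***", "openai_api_key": None}
```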

radioshaq/radioshaq/api/routes/gis.py
Lines changed: 2 additions & 7 deletions

@@ -158,15 +158,10 @@ async def get_operators_nearby(
         recent_only=recent_hours > 0,
         recent_hours=recent_hours,
     )
-    # Ensure each operator has last_seen_at for mapping clients (alias of timestamp)
-    operators_for_response = [
-        {**op, "last_seen_at": op.get("last_seen_at") or op.get("timestamp")}
-        for op in operators
-    ]
     return {
         "latitude": latitude,
         "longitude": longitude,
         "radius_meters": radius_meters,
-        "operators": operators_for_response,
-        "count": len(operators_for_response),
+        "operators": operators,
+        "count": len(operators),
     }
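The revert drops the `last_seen_at` aliasing for mapping clients. Standalone, the removed logic looked like this (sample operator data is illustrative):

```python
def with_last_seen_at(operators: list) -> list:
    """Alias `timestamp` as `last_seen_at` when the latter is missing,
    as the reverted list comprehension in gis.py did."""
    return [
        {**op, "last_seen_at": op.get("last_seen_at") or op.get("timestamp")}
        for op in operators
    ]


ops = with_last_seen_at([
    # No last_seen_at: timestamp is copied over.
    {"callsign": "W1AW", "timestamp": "2024-05-01T12:00:00Z"},
    # Existing last_seen_at wins over timestamp.
    {"callsign": "K2XYZ", "timestamp": "2024-05-01T11:00:00Z",
     "last_seen_at": "2024-05-01T11:30:00Z"},
])
```

After the revert, clients receive `operators` as-is, so consumers relying on `last_seen_at` must fall back to `timestamp` themselves.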
