Commit 9ab7010

committed: adds submission to gemini hackathon
1 parent 4f78956 commit 9ab7010

74 files changed: 4169 additions & 60 deletions


docs/configuration.md

Lines changed: 4 additions & 3 deletions
```diff
@@ -121,17 +121,18 @@ API endpoints expect a Bearer JWT. Tokens are issued by `POST /auth/token` (subj
 
 ## LLM
 
-The orchestrator (REACT loop), judge, whitelist agent, and daily-summary cron use an LLM. Set the provider, model, and the matching API key. For **local/custom** endpoints (e.g. [Ollama](https://ollama.ai)), set `provider: custom`, `model` (e.g. `ollama/llama2` or `llama2`), and **`custom_api_base`** (e.g. `http://localhost:11434`). For **[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers)** (serverless models from Groq, Together, etc.), set `provider: huggingface`, `model` (e.g. `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`), and **`huggingface_api_key`** or `HF_TOKEN`; the client uses the HF router URL as `api_base`.
+The orchestrator (REACT loop), judge, whitelist agent, and daily-summary cron use an LLM. Set the provider, model, and the matching API key. For **local/custom** endpoints (e.g. [Ollama](https://ollama.ai)), set `provider: custom`, `model` (e.g. `ollama/llama2` or `llama2`), and **`custom_api_base`** (e.g. `http://localhost:11434`). For **[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers)** (serverless models from Groq, Together, etc.), set `provider: huggingface`, `model` (e.g. `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`), and **`huggingface_api_key`** or `HF_TOKEN`; the client uses the HF router URL as `api_base`. For **Google Gemini** (Google AI Studio), set `provider: gemini`, `model` (e.g. `gemini-2.5-flash`, `gemini-2.5-pro`), and **`gemini_api_key`** or `GEMINI_API_KEY`.
 
 | Option | Env var | Default | Description |
 |--------|---------|---------|-------------|
-| `llm.provider` | `RADIOSHAQ_LLM__PROVIDER` | `mistral` | One of: `mistral`, `openai`, `anthropic`, `custom`, `huggingface`. |
-| `llm.model` | `RADIOSHAQ_LLM__MODEL` | `mistral-large-latest` | Model name (e.g. `mistral-small-latest`, `gpt-4o`, `ollama/llama2`; for **huggingface**: `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`). |
+| `llm.provider` | `RADIOSHAQ_LLM__PROVIDER` | `mistral` | One of: `mistral`, `openai`, `anthropic`, `custom`, `huggingface`, `gemini`. |
+| `llm.model` | `RADIOSHAQ_LLM__MODEL` | `mistral-large-latest` | Model name (e.g. `mistral-small-latest`, `gpt-4o`, `ollama/llama2`; for **huggingface**: `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`; for **gemini**: `gemini-2.5-flash`, `gemini-2.5-pro`). |
 | `llm.mistral_api_key` | `RADIOSHAQ_LLM__MISTRAL_API_KEY` | `null` | Mistral API key (or set `MISTRAL_API_KEY` if your code reads it). |
 | `llm.openai_api_key` | `RADIOSHAQ_LLM__OPENAI_API_KEY` | `null` | OpenAI API key. |
 | `llm.anthropic_api_key` | `RADIOSHAQ_LLM__ANTHROPIC_API_KEY` | `null` | Anthropic API key. |
 | `llm.custom_api_base` | `RADIOSHAQ_LLM__CUSTOM_API_BASE` | `null` | **Custom provider base URL** (e.g. `http://localhost:11434` for Ollama). Passed to LiteLLM. |
 | `llm.custom_api_key` | `RADIOSHAQ_LLM__CUSTOM_API_KEY` | `null` | Custom provider API key. |
+| `llm.gemini_api_key` | `RADIOSHAQ_LLM__GEMINI_API_KEY` | `null` | **Gemini** API key (Google AI Studio; or set `GEMINI_API_KEY`). |
 | `llm.huggingface_api_key` | `RADIOSHAQ_LLM__HUGGINGFACE_API_KEY` | `null` | **Hugging Face** token for [Inference Providers](https://huggingface.co/docs/inference-providers) (or set `HF_TOKEN`). Token needs "Inference Providers" permission. |
 | `llm.huggingface_api_base` | `RADIOSHAQ_LLM__HUGGINGFACE_API_BASE` | `null` | Optional; default `https://router.huggingface.co/v1` when provider is `huggingface`. |
 | `llm.temperature` | `RADIOSHAQ_LLM__TEMPERATURE` | `0.1` | Sampling temperature (0–2). |
```
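The `RADIOSHAQ_LLM__*` env vars in the table map onto the nested `llm.*` options via a double-underscore delimiter. A minimal sketch of that mapping, assuming a pydantic-settings-style scheme (prefix `RADIOSHAQ_`, nested delimiter `__`); the project's actual settings class is not shown in this diff and may differ:

```python
# Sketch: turn RADIOSHAQ_LLM__PROVIDER-style env vars into a nested config
# dict. Hypothetical helper for illustration, not the project's loader.
def env_to_config(env: dict[str, str], prefix: str = "RADIOSHAQ_") -> dict:
    config: dict = {}
    for key, value in env.items():
        if not key.startswith(prefix):
            continue
        path = key[len(prefix):].lower().split("__")  # e.g. ["llm", "provider"]
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return config

cfg = env_to_config({
    "RADIOSHAQ_LLM__PROVIDER": "gemini",
    "RADIOSHAQ_LLM__MODEL": "gemini-2.5-flash",
    "RADIOSHAQ_LLM__GEMINI_API_KEY": "example-key",
})
print(cfg["llm"]["provider"])  # gemini
```

Under this assumption, `RADIOSHAQ_LLM__GEMINI_API_KEY` lands at `llm.gemini_api_key`, matching the table above.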

docs/plan-gemini-api-support.md

Lines changed: 310 additions & 0 deletions
Large diffs are not rendered by default.

docs/reference/.env.example

Lines changed: 3 additions & 1 deletion
```diff
@@ -59,17 +59,19 @@ POSTGRES_PASSWORD=radioshaq
 # RADIOSHAQ_LLM__ANTHROPIC_API_KEY=
 # RADIOSHAQ_LLM__CUSTOM_API_BASE=
 # RADIOSHAQ_LLM__CUSTOM_API_KEY=
+# RADIOSHAQ_LLM__GEMINI_API_KEY= # For provider: gemini (Google AI Studio)
 # RADIOSHAQ_LLM__HUGGINGFACE_API_KEY= # For provider: huggingface (Inference Providers)
 # RADIOSHAQ_LLM__HUGGINGFACE_API_BASE= # Optional; default https://router.huggingface.co/v1
 # RADIOSHAQ_LLM__TEMPERATURE=0.1
 # RADIOSHAQ_LLM__MAX_TOKENS=4096
 # RADIOSHAQ_LLM__TIMEOUT_SECONDS=60.0
 # RADIOSHAQ_LLM__MAX_RETRIES=3
 # RADIOSHAQ_LLM__RETRY_DELAY_SECONDS=1.0
-# Alternative: some code also reads MISTRAL_API_KEY / OPENAI_API_KEY / HF_TOKEN directly
+# Alternative: some code also reads MISTRAL_API_KEY / OPENAI_API_KEY / HF_TOKEN / GEMINI_API_KEY directly
 # MISTRAL_API_KEY=
 # OPENAI_API_KEY=
 # HF_TOKEN= # Hugging Face token with "Inference Providers" permission (when provider is huggingface)
+# GEMINI_API_KEY=
 
 # -----------------------------------------------------------------------------
 # Memory (per-callsign memory, Hindsight, daily summaries)
```

docs/reference/config.example.yaml

Lines changed: 2 additions & 1 deletion
```diff
@@ -44,13 +44,14 @@ jwt:
 # LLM (set API key in env or here; prefer env for secrets)
 # -----------------------------------------------------------------------------
 llm:
-  provider: mistral # mistral | openai | anthropic | custom
+  provider: mistral # mistral | openai | anthropic | custom | huggingface | gemini
   model: mistral-large-latest
   mistral_api_key: null
   openai_api_key: null
   anthropic_api_key: null
   custom_api_base: null
   custom_api_key: null
+  gemini_api_key: null # For provider: gemini; or set GEMINI_API_KEY
   temperature: 0.1
   max_tokens: 4096
   timeout_seconds: 60.0
```

radioshaq/.env.example

Lines changed: 10 additions & 0 deletions
```diff
@@ -59,6 +59,7 @@ POSTGRES_PASSWORD=radioshaq
 # RADIOSHAQ_LLM__ANTHROPIC_API_KEY=
 # RADIOSHAQ_LLM__CUSTOM_API_BASE=
 # RADIOSHAQ_LLM__CUSTOM_API_KEY=
+# RADIOSHAQ_LLM__GEMINI_API_KEY=
 # RADIOSHAQ_LLM__TEMPERATURE=0.1
 # RADIOSHAQ_LLM__MAX_TOKENS=4096
 # RADIOSHAQ_LLM__TIMEOUT_SECONDS=60.0
@@ -71,6 +72,7 @@ POSTGRES_PASSWORD=radioshaq
 # MISTRAL_API_KEY=
 # OPENAI_API_KEY=
 # HF_TOKEN=
+# GEMINI_API_KEY=
 
 # -----------------------------------------------------------------------------
 # Memory (per-callsign memory, Hindsight, daily summaries)
@@ -265,3 +267,11 @@ POSTGRES_PASSWORD=radioshaq
 # RADIOSHAQ_TTS__KOKORO_SPEED=1.0
 # ElevenLabs API key (required when provider=elevenlabs)
 # ELEVENLABS_API_KEY=
+
+# -----------------------------------------------------------------------------
+# Web UI (Vite) – used when running npm run dev or serving built assets
+# -----------------------------------------------------------------------------
+# Set in web-interface/.env or project root .env when developing the React UI.
+# VITE_RADIOSHAQ_API=http://localhost:8000
+# VITE_RADIOSHAQ_TOKEN=
+# VITE_GOOGLE_MAPS_API_KEY= # Optional. Enables Map page, Radio field map, Transcripts "View on map". Restrict key by HTTP referrer in Google Cloud Console.
```

radioshaq/config.example.yaml

Lines changed: 2 additions & 1 deletion
```diff
@@ -44,13 +44,14 @@ jwt:
 # LLM (set API key in env or here; prefer env for secrets)
 # -----------------------------------------------------------------------------
 llm:
-  provider: mistral # mistral | openai | anthropic | custom | huggingface
+  provider: mistral # mistral | openai | anthropic | custom | huggingface | gemini
   model: mistral-large-latest
   mistral_api_key: null
   openai_api_key: null
   anthropic_api_key: null
   custom_api_base: null
   custom_api_key: null
+  gemini_api_key: null # For provider: gemini; or set GEMINI_API_KEY
   huggingface_api_key: null # For provider: huggingface; or set HF_TOKEN
   huggingface_api_base: null # Optional; default https://router.huggingface.co/v1
   temperature: 0.1
```
Lines changed: 42 additions & 0 deletions

```diff
@@ -0,0 +1,42 @@
+# Demo environment profiles
+
+Summary of **environment variables** for running the Live HackRF + LLM demo suite. For full WSL/HackRF setup and Option C env, see [scripts/demo/demo-hackrf-full.md](../scripts/demo/demo-hackrf-full.md).
+
+**Live demos use real hardware and real LLM:** HackRF RX/TX and LLM providers are not stubbed in the documented demo flows. Set the env below and attach a HackRF; use `--require-hardware` in the relevant demo scripts to fail fast if SDR TX is not configured.
+
+## Agent and API hooks (how demos drive the system)
+
+- **radio_tx (RadioTransmissionAgent):** Invoked via `POST /radio/send-audio` (multipart WAV) and `POST /radio/send-tts` (JSON body with `message`, optional `frequency_hz`, `mode`). Requires `radio.sdr_tx_enabled=true` and `radio.sdr_tx_backend=hackrf` for HackRF; when not set or no hardware, the TX agent may still run and return success/false (e.g. "Rig manager not configured"). Compliance checks run before TX.
+- **radio_rx_audio (RadioAudioReceptionAgent):** No one-off "start monitor" HTTP endpoint. The **voice listener** (server lifespan) starts the agent when `radio.audio_input_enabled` and `radio.voice_listener_enabled` (or `audio_monitoring_enabled`) are true. Demos that need voice RX either run HQ with that config and poll `GET /api/v1/audio/pending` and `GET /transcripts`, or use `POST /messages/from-audio` to simulate inbound voice.
+- **radio_rx (RadioReceptionAgent):** Used by the band listener (injection queue consumer) or by tasks submitted via the orchestrator. Demos inject via `POST /inject/message` or `POST /messages/inject-and-store`; the band listener (when enabled) or a process-driven task consumes from the queue.
+- **Orchestrator / Judge:** `POST /messages/process` (body: `message` or `text`, optional `channel`, `chat_id`, `sender_id`) runs the REACT loop and routes to agents. Used by run_orchestrator_judge_demo and run_scheduler_demo.
+- **WhitelistAgent:** Invoked via `POST /messages/whitelist-request` (JSON or multipart with audio). Orchestrator evaluates and may call the whitelist agent; result in response or completed_tasks.
+- **SchedulerAgent:** No direct HTTP endpoint; reached when the orchestrator selects it for a scheduling request (e.g. "Schedule a call for X with Y at Z"). Requires DB with coordination_events for persistence.
+
+## HQ process (`uv run radioshaq run-api`)
+
+- **Mode + JWT:** `RADIOSHAQ_MODE=hq`, `RADIOSHAQ_JWT__SECRET_KEY` (must match receiver `JWT_SECRET`).
+- **Receiver uploads:** `RADIOSHAQ_RADIO__RECEIVER_UPLOAD_STORE=true`, `RADIOSHAQ_RADIO__RECEIVER_UPLOAD_INJECT=true`.
+- **HackRF SDR TX:** `RADIOSHAQ_RADIO__SDR_TX_ENABLED=true`, `RADIOSHAQ_RADIO__SDR_TX_BACKEND=hackrf`.
+- **Message bus consumer:** `RADIOSHAQ_BUS_CONSUMER_ENABLED=1`.
+- **LLM:** e.g. `RADIOSHAQ_LLM__PROVIDER=mistral`, `MISTRAL_API_KEY` / `RADIOSHAQ_LLM__MISTRAL_API_KEY`.
+- **ASR/TTS:** e.g. `ELEVENLABS_API_KEY`, `RADIOSHAQ_TTS__PROVIDER=elevenlabs`.
+- **Voice listener (for voice_rx_audio demos):** `RADIOSHAQ_RADIO__AUDIO_INPUT_ENABLED=true`, `RADIOSHAQ_RADIO__VOICE_LISTENER_ENABLED=true`, `RADIOSHAQ_RADIO__DEFAULT_BAND=2m`.
+- **Twilio:** Omit or leave unset for no-Twilio demos; set for Option C with SMS/WhatsApp.
+
+## Remote receiver process (`uv run radioshaq run-receiver`)
+
+- **JWT:** `JWT_SECRET` = same as HQ `RADIOSHAQ_JWT__SECRET_KEY`.
+- **Identity:** `STATION_ID=HACKRF-DEMO`.
+- **HackRF:** `SDR_TYPE=hackrf`, `HACKRF_INDEX=0`.
+- **HQ upload:** `HQ_URL=http://localhost:8000`, `HQ_TOKEN=<from POST /auth/token>`.
+- **Demod:** `RECEIVER_MODE=nfm`, `RECEIVER_AUDIO_RATE=48000`.
+
+## Demo scripts
+
+- **Base URL:** Pass `--base-url http://localhost:8000` (or remote). Scripts obtain a JWT via `POST /auth/token` (subject/role/station_id).
+- **Extras:** `uv sync --extra hackrf` (receiver + stream), `uv sync --extra voice_tx` (HackRF TX from HQ), `uv sync --extra audio` (ASR) as needed.
+
+## Database
+
+- **Postgres:** `RADIOSHAQ_DATABASE__POSTGRES_URL` or default (e.g. Docker on 5434). Run `uv run radioshaq launch docker` then `cd radioshaq && uv run alembic upgrade head` before demos that use transcripts or registry.
```
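Pulling the HQ bullets above together, a minimal env profile for the no-Twilio live HackRF demo might look like the following sketch. Variable names are taken from the lists above; every value here is a placeholder, and your provider/key choices may differ:

```shell
# Hypothetical .env fragment for the HQ process (values are placeholders).
export RADIOSHAQ_MODE=hq
export RADIOSHAQ_JWT__SECRET_KEY=change-me          # must match receiver JWT_SECRET
export RADIOSHAQ_RADIO__RECEIVER_UPLOAD_STORE=true
export RADIOSHAQ_RADIO__RECEIVER_UPLOAD_INJECT=true
export RADIOSHAQ_RADIO__SDR_TX_ENABLED=true
export RADIOSHAQ_RADIO__SDR_TX_BACKEND=hackrf
export RADIOSHAQ_BUS_CONSUMER_ENABLED=1
export RADIOSHAQ_LLM__PROVIDER=mistral
export MISTRAL_API_KEY=change-me
# Voice listener, only for voice RX demos:
export RADIOSHAQ_RADIO__AUDIO_INPUT_ENABLED=true
export RADIOSHAQ_RADIO__VOICE_LISTENER_ENABLED=true
export RADIOSHAQ_RADIO__DEFAULT_BAND=2m
```

Twilio variables are deliberately omitted, per the no-Twilio note above.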

radioshaq/infrastructure/local/docker-compose.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -152,7 +152,7 @@ services:
       - HINDSIGHT_API_LLM_PROVIDER=${RADIOSHAQ_LLM__PROVIDER:-${HINDSIGHT_API_LLM_PROVIDER:-openai}}
       - HINDSIGHT_API_LLM_MODEL=${RADIOSHAQ_LLM__MODEL:-${HINDSIGHT_API_LLM_MODEL:-gpt-4o-mini}}
       # API key: first non-empty of RadioShaq keys, then generic keys
-      - HINDSIGHT_API_LLM_API_KEY=${RADIOSHAQ_LLM__OPENAI_API_KEY:-${OPENAI_API_KEY:-${RADIOSHAQ_LLM__MISTRAL_API_KEY:-${MISTRAL_API_KEY:-${RADIOSHAQ_LLM__ANTHROPIC_API_KEY:-${ANTHROPIC_API_KEY:-${RADIOSHAQ_LLM__CUSTOM_API_KEY:-${HINDSIGHT_API_LLM_API_KEY:-}}}}}}}}
+      - HINDSIGHT_API_LLM_API_KEY=${RADIOSHAQ_LLM__OPENAI_API_KEY:-${OPENAI_API_KEY:-${RADIOSHAQ_LLM__MISTRAL_API_KEY:-${MISTRAL_API_KEY:-${RADIOSHAQ_LLM__ANTHROPIC_API_KEY:-${ANTHROPIC_API_KEY:-${RADIOSHAQ_LLM__GEMINI_API_KEY:-${GEMINI_API_KEY:-${RADIOSHAQ_LLM__CUSTOM_API_KEY:-${HINDSIGHT_API_LLM_API_KEY:-}}}}}}}}}}
       # Custom base URL (e.g. OpenAI-compatible or Mistral endpoint)
       - HINDSIGHT_API_LLM_BASE_URL=${RADIOSHAQ_LLM__CUSTOM_API_BASE:-${HINDSIGHT_API_LLM_BASE_URL:-}}
       # Same Postgres as RadioShaq (postgres service, db radioshaq; pgvector in postgres/init/02-pgvector.sql)
```
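The nested `${A:-${B:-...}}` interpolation in the Compose line above selects the first set-and-non-empty variable in the chain, so the new Gemini keys slot in between the Anthropic and custom keys. The same selection logic, as a small Python sketch (the `DEMO_*` variable names are made up for the example):

```python
import os

def first_non_empty(*names: str, default: str = "") -> str:
    """Return the first env var in `names` that is set and non-empty,
    mirroring nested ${A:-${B:-...}} interpolation in docker-compose."""
    for name in names:
        value = os.environ.get(name, "")
        if value:
            return value
    return default

# Illustration with controlled variables:
os.environ.pop("DEMO_PRIMARY_KEY", None)
os.environ["DEMO_SECONDARY_KEY"] = "fallback-value"
print(first_non_empty("DEMO_PRIMARY_KEY", "DEMO_SECONDARY_KEY"))  # fallback-value
```

With the commit applied, `RADIOSHAQ_LLM__GEMINI_API_KEY` and `GEMINI_API_KEY` are consulted only after the OpenAI, Mistral, and Anthropic candidates are found empty.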

radioshaq/radioshaq/api/routes/config_routes.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -24,6 +24,7 @@
     "anthropic_api_key",
     "custom_api_key",
     "huggingface_api_key",
+    "gemini_api_key",
 }
 
 
```
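The diff above only shows `gemini_api_key` being added to a set of secret field names; how `config_routes.py` actually applies that set is not in this commit view. A plausible sketch of such a redaction pass, with the `redact` helper being a hypothetical stand-in:

```python
# Assumption: the route masks any config value whose key is in this set
# before returning it. The set matches the diff; redact() is illustrative.
SECRET_FIELDS = {
    "mistral_api_key", "openai_api_key", "anthropic_api_key",
    "custom_api_key", "huggingface_api_key", "gemini_api_key",
}

def redact(config: dict) -> dict:
    """Recursively mask non-null values stored under secret field names."""
    out = {}
    for key, value in config.items():
        if isinstance(value, dict):
            out[key] = redact(value)
        elif key in SECRET_FIELDS and value is not None:
            out[key] = "***"
        else:
            out[key] = value
    return out

print(redact({"llm": {"provider": "gemini", "gemini_api_key": "abc123"}}))
# {'llm': {'provider': 'gemini', 'gemini_api_key': '***'}}
```

Without this one-line addition, a config endpoint built this way would echo the Gemini key back to clients.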

radioshaq/radioshaq/api/routes/gis.py

Lines changed: 7 additions & 2 deletions
```diff
@@ -158,10 +158,15 @@ async def get_operators_nearby(
         recent_only=recent_hours > 0,
         recent_hours=recent_hours,
     )
+    # Ensure each operator has last_seen_at for mapping clients (alias of timestamp)
+    operators_for_response = [
+        {**op, "last_seen_at": op.get("last_seen_at") or op.get("timestamp")}
+        for op in operators
+    ]
     return {
         "latitude": latitude,
         "longitude": longitude,
         "radius_meters": radius_meters,
-        "operators": operators,
-        "count": len(operators),
+        "operators": operators_for_response,
+        "count": len(operators_for_response),
     }
```
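The aliasing in the gis.py change can be exercised in isolation. The comprehension below matches the diff; the wrapper function name and the sample operator dicts are made up for illustration:

```python
def alias_last_seen(operators: list[dict]) -> list[dict]:
    """Give every operator a last_seen_at key, falling back to timestamp,
    as the get_operators_nearby change does before building the response."""
    return [
        {**op, "last_seen_at": op.get("last_seen_at") or op.get("timestamp")}
        for op in operators
    ]

ops = [
    {"callsign": "N0CALL", "timestamp": "2024-05-01T12:00:00Z"},              # no alias yet
    {"callsign": "K1ABC", "last_seen_at": "2024-05-01T13:00:00Z"},            # already set
]
print([op["last_seen_at"] for op in alias_last_seen(ops)])
# ['2024-05-01T12:00:00Z', '2024-05-01T13:00:00Z']
```

Mapping clients can thus read `last_seen_at` uniformly, whether the store returned `last_seen_at` or only `timestamp`.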
