# Trackio Alerts

Alerts let you flag important training events directly from code. They are the primary mechanism for LLM agents to diagnose runs and iterate autonomously on ML experiments.

Alerts are printed to the terminal, stored in the database, displayed in the dashboard, and optionally sent to webhooks (Slack/Discord).

<img width="2972" height="1694" alt="image" src="https://github.com/user-attachments/assets/02d938f8-51a9-4706-85c4-d95b7645bcf4" />

## Core API

### trackio.alert()

```python
trackio.alert(
    title="Loss divergence",                     # Short title (required)
    text="Loss 5.2 still high after 200 steps",  # Detailed description (optional)
    level=trackio.AlertLevel.WARN,               # INFO, WARN, or ERROR (default: WARN)
    webhook_url="https://hooks.slack.com/...",   # Per-alert webhook override (optional)
)
```

### Alert Levels

| Level | Usage |
|-------|-------|
| `trackio.AlertLevel.INFO` | Informational milestones (checkpoints saved, eval completed) |
| `trackio.AlertLevel.WARN` | Potential issues (loss plateau, low accuracy, high gradient norm) |
| `trackio.AlertLevel.ERROR` | Critical failures (NaN loss, divergence, OOM) |
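As an illustrative sketch (the thresholds and heuristics here are assumptions, not part of trackio), a training script might pick a level from simple run-health checks:

```python
import math

def pick_level(loss: float, step: int) -> str:
    """Map run state to an alert level name (illustrative heuristics)."""
    if math.isnan(loss):
        return "error"  # critical failure: the run is broken
    if step > 200 and loss > 5.0:
        return "warn"   # potential issue: loss is not coming down
    return "info"       # routine: nothing alarming
```

The returned name corresponds to the matching `trackio.AlertLevel` member.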

### Webhook Support

Set a global webhook URL via `trackio.init()` or the `TRACKIO_WEBHOOK_URL` environment variable. Alerts are auto-formatted for Slack and Discord URLs.

```python
trackio.init(
    project="my-project",
    webhook_url="https://hooks.slack.com/services/...",
    webhook_min_level=trackio.AlertLevel.WARN,  # Only send WARN+ to the webhook
)
```

Per-alert override:

```python
trackio.alert(
    title="Critical failure",
    level=trackio.AlertLevel.ERROR,
    webhook_url="https://hooks.slack.com/services/...",  # Overrides the global URL
)
```

Environment variables:

- `TRACKIO_WEBHOOK_URL` — global webhook URL
- `TRACKIO_WEBHOOK_MIN_LEVEL` — minimum level for webhook delivery (`info`, `warn`, `error`)
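These can also be set from Python before the run starts; a minimal sketch (the webhook URL is a placeholder):

```python
import os

# Equivalent to exporting the variables in the shell before launching training
os.environ["TRACKIO_WEBHOOK_URL"] = "https://hooks.slack.com/services/..."  # placeholder
os.environ["TRACKIO_WEBHOOK_MIN_LEVEL"] = "error"  # only ERROR alerts go to the webhook
```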

## Retrieving Alerts (CLI)

```bash
# List all alerts for a project
trackio list alerts --project my-project --json

# Filter by run or level
trackio list alerts --project my-project --run my-run --level error --json

# Poll for new alerts since a timestamp (efficient for agents)
trackio list alerts --project my-project --json --since "2025-06-01T12:00:00"
```

### JSON Output Structure

```json
{
  "project": "my-project",
  "run": null,
  "level": null,
  "since": "2025-06-01T12:00:00",
  "alerts": [
    {
      "run": "run-name",
      "title": "Loss divergence",
      "text": "Loss 5.2 still high after 200 steps",
      "level": "warn",
      "step": 200,
      "timestamp": "2025-06-01T12:05:30"
    }
  ]
}
```
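An agent can consume this structure directly. A minimal sketch that filters the example payload above for alerts at WARN or higher (the numeric severity ordering is an assumption mirroring the three levels):

```python
import json

# Example output of `trackio list alerts --project my-project --json`
raw = """
{
  "project": "my-project", "run": null, "level": null,
  "since": "2025-06-01T12:00:00",
  "alerts": [
    {"run": "run-name", "title": "Loss divergence",
     "text": "Loss 5.2 still high after 200 steps",
     "level": "warn", "step": 200, "timestamp": "2025-06-01T12:05:30"}
  ]
}
"""
payload = json.loads(raw)

# Rank levels so alerts can be filtered by minimum severity
severity = {"info": 0, "warn": 1, "error": 2}
actionable = [a for a in payload["alerts"] if severity[a["level"]] >= severity["warn"]]
```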

## Autonomous Agent Workflow

The recommended pattern for an LLM agent running ML experiments:

### 1. Insert Alerts Into Training Code

Add diagnostic `trackio.alert()` calls for conditions the agent should react to:

```python
import math

import trackio

trackio.init(project="hyperparam-sweep", config={"lr": lr, "batch_size": bs})

prev_loss = None
for step in range(num_steps):
    loss = train_step()
    trackio.log({"loss": loss, "step": step})

    if math.isnan(loss):
        trackio.alert(
            title="NaN loss",
            text="Loss became NaN — training is broken",
            level=trackio.AlertLevel.ERROR,
        )
        break

    if step > 200 and loss > 5.0:
        trackio.alert(
            title="Loss divergence",
            text=f"Loss {loss:.4f} still above 5.0 after {step} steps — learning rate may be too high",
            level=trackio.AlertLevel.ERROR,
        )

    # loss_delta: change since the previous step (use a longer window in practice)
    loss_delta = abs(loss - prev_loss) if prev_loss is not None else float("inf")
    prev_loss = loss
    if step > 500 and loss_delta < 0.001:
        trackio.alert(
            title="Training stall",
            text=f"Loss barely changed (delta={loss_delta:.6f})",
            level=trackio.AlertLevel.WARN,
        )

trackio.finish()
```

### 2. Monitor Alerts

Alerts are automatically printed to the terminal when fired. If the agent is watching the training script's output (e.g. running in the foreground or tailing logs), it will see alerts immediately — no polling needed.

For background or detached runs, poll for alerts via the CLI:

```bash
# Poll for alerts (run periodically)
trackio list alerts --project hyperparam-sweep --json --since "2025-06-01T00:00:00"
```
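To avoid reprocessing old alerts between polls, the agent should advance the `--since` cursor to the newest timestamp it has seen. A small helper sketch (this relies on the ISO-8601 timestamps from the JSON output, which compare correctly as strings):

```python
def next_since(alerts: list[dict], current_since: str) -> str:
    """Return the newest alert timestamp to use as the next --since value."""
    timestamps = [a["timestamp"] for a in alerts]
    return max(timestamps + [current_since])

# After one poll returned two alerts:
alerts = [
    {"title": "Loss divergence", "timestamp": "2025-06-01T12:05:30"},
    {"title": "Training stall", "timestamp": "2025-06-01T12:09:10"},
]
cursor = next_since(alerts, "2025-06-01T00:00:00")
```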

### 3. Inspect Metrics Around the Alert

When an alert fires, use `trackio get snapshot` to see all metrics at that point:

```bash
# Alert fired at step 200 — get all metrics in a ±5 step window
trackio get snapshot --project hyperparam-sweep --run run-1 --around 200 --window 5 --json

# Or inspect a single metric around the alert's step
trackio get metric --project hyperparam-sweep --run run-1 --metric loss --around 200 --window 10 --json
```

### 4. React and Iterate

Based on alerts:

- **ERROR alerts** → stop the run, adjust hyperparameters, relaunch
- **WARN alerts** → inspect metrics with `trackio get snapshot ...`, decide whether to intervene
- **INFO alerts** → note progress, continue monitoring
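This policy can be sketched as a simple dispatch; the action names are illustrative, not part of trackio:

```python
def react(alert: dict) -> str:
    """Map an alert's level to an agent action (illustrative policy)."""
    level = alert["level"]
    if level == "error":
        return "stop_and_relaunch"   # adjust hyperparameters, start over
    if level == "warn":
        return "inspect_snapshot"    # e.g. run `trackio get snapshot ...`
    return "continue"                # INFO: keep monitoring

action = react({"level": "error", "title": "NaN loss"})
```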

### 5. Compare Across Runs

```bash
# Check metrics from previous runs
trackio get run --project hyperparam-sweep --run run-1 --json
trackio get metric --project hyperparam-sweep --run run-1 --metric loss --json

# Launch a new run with an adjusted config
python train.py --lr 5e-5
```

## Using Alerts with Transformers / TRL

When using `report_to="trackio"`, you don't control the training loop directly. Use a `TrainerCallback` to fire alerts:

```python
import trackio
from transformers import TrainerCallback
from trl import SFTConfig, SFTTrainer

class AlertCallback(TrainerCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        if "trackio" not in args.report_to:
            return
        if logs and "loss" in logs:
            if logs["loss"] > 5.0 and state.global_step > 100:
                trackio.alert(
                    title="High loss",
                    text=f"Loss {logs['loss']:.4f} at step {state.global_step}",
                    level=trackio.AlertLevel.ERROR,
                )

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="./out", report_to="trackio"),
    callbacks=[AlertCallback()],
    ...
)
```