Commit 18e9650

abidlabs, gradio-pr-bot, and cursoragent authored
Add alerts with webhooks, CLI, and documentation (#439)
Co-authored-by: gradio-pr-bot <gradio-pr-bot@users.noreply.github.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
1 parent 4a960f2 commit 18e9650

30 files changed

Lines changed: 3422 additions & 18 deletions

.agents/skills/trackio/SKILL.md

Lines changed: 115 additions & 0 deletions
---
name: hugging-face-trackio
description: Track and visualize ML training experiments with Trackio. Use when logging metrics during training (Python API), firing alerts for training diagnostics, or retrieving/analyzing logged metrics (CLI). Supports real-time dashboard visualization, alerts with webhooks, HF Space syncing, and JSON output for automation.
---

# Trackio - Experiment Tracking for ML Training

Trackio is an experiment tracking library for logging and visualizing ML training metrics. It syncs to Hugging Face Spaces for real-time monitoring dashboards.

## Three Interfaces

| Task | Interface | Reference |
|------|-----------|-----------|
| **Logging metrics** during training | Python API | [logging_metrics.md](logging_metrics.md) |
| **Firing alerts** for training diagnostics | Python API | [alerts.md](alerts.md) |
| **Retrieving metrics & alerts** after/during training | CLI | [retrieving_metrics.md](retrieving_metrics.md) |

## When to Use Each

### Python API → Logging

Use `import trackio` in your training scripts to log metrics:

- Initialize tracking with `trackio.init()`
- Log metrics with `trackio.log()` or use TRL's `report_to="trackio"`
- Finalize with `trackio.finish()`

**Key concept**: For remote/cloud training, pass `space_id` — metrics sync to a Space dashboard so they persist after the instance terminates.

→ See [logging_metrics.md](logging_metrics.md) for setup, TRL integration, and configuration options.

### Python API → Alerts

Insert `trackio.alert()` calls in training code to flag important events — like inserting print statements for debugging, but structured and queryable:

- `trackio.alert(title="...", level=trackio.AlertLevel.WARN)` — fire an alert
- Three severity levels: `INFO`, `WARN`, `ERROR`
- Alerts are printed to the terminal, stored in the database, shown in the dashboard, and optionally sent to webhooks (Slack/Discord)

**Key concept for LLM agents**: Alerts are the primary mechanism for autonomous experiment iteration. An agent should insert alerts into training code for diagnostic conditions (loss spikes, NaN gradients, low accuracy, training stalls). Since alerts are printed to the terminal, an agent that is watching the training script's output will see them automatically. For background or detached runs, the agent can poll via CLI instead.

→ See [alerts.md](alerts.md) for the full alerts API, webhook setup, and autonomous agent workflows.
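
The diagnostic conditions listed above can be factored into small pure predicates that an agent wires to `trackio.alert()`. The sketch below is illustrative only — the helper names are hypothetical, not part of Trackio:

```python
import math

# Hypothetical helper predicates an agent might pair with trackio.alert().
def loss_diverged(loss: float, step: int, threshold: float = 5.0, warmup: int = 100) -> bool:
    """Loss still above threshold well after warmup."""
    return step > warmup and loss > threshold

def loss_is_nan(loss: float) -> bool:
    """Training is broken: loss became NaN."""
    return math.isnan(loss)

def training_stalled(history: list[float], window: int = 100, eps: float = 1e-3) -> bool:
    """Loss barely moved over the last `window` steps."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < eps
```

An agent would call `trackio.alert(title=..., level=trackio.AlertLevel.ERROR)` whenever one of these predicates fires.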
43+
44+
### CLI → Retrieving
45+
46+
Use the `trackio` command to query logged metrics and alerts:
47+
48+
- `trackio list projects/runs/metrics` — discover what's available
49+
- `trackio get project/run/metric` — retrieve summaries and values
50+
- `trackio list alerts --project <name> --json` — retrieve alerts
51+
- `trackio show` — launch the dashboard
52+
- `trackio sync` — sync to HF Space
53+
54+
**Key concept**: Add `--json` for programmatic output suitable for automation and LLM agents.
55+
56+
→ See [retrieving_metrics.md](retrieving_metrics.md) for all commands, workflows, and JSON output formats.
57+
58+
## Minimal Logging Setup
59+
60+
```python
61+
import trackio
62+
63+
trackio.init(project="my-project", space_id="username/trackio")
64+
trackio.log({"loss": 0.1, "accuracy": 0.9})
65+
trackio.log({"loss": 0.09, "accuracy": 0.91})
66+
trackio.finish()
67+
```
68+
69+
### Minimal Retrieval
70+
71+
```bash
72+
trackio list projects --json
73+
trackio get metric --project my-project --run my-run --metric loss --json
74+
```
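
From an agent script, these CLI calls are typically assembled and run with `subprocess`. A minimal sketch — the `get_metric_cmd` helper is hypothetical, but the flags match the commands above:

```python
import subprocess  # used to actually run the command, see comment below

def get_metric_cmd(project: str, run: str, metric: str) -> list[str]:
    """Build the `trackio get metric` invocation shown above, with --json for parsing."""
    return [
        "trackio", "get", "metric",
        "--project", project,
        "--run", run,
        "--metric", metric,
        "--json",
    ]

cmd = get_metric_cmd("my-project", "my-run", "loss")
# An agent would then run it and parse stdout as JSON:
# out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
```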

## Autonomous ML Experiment Workflow

When running experiments autonomously as an LLM agent, the recommended workflow is:

1. **Set up training with alerts** — insert `trackio.alert()` calls for diagnostic conditions
2. **Launch training** — run the script in the background
3. **Poll for alerts** — use `trackio list alerts --project <name> --json --since <timestamp>` to check for new alerts
4. **Read metrics** — use `trackio get metric ...` to inspect specific values
5. **Iterate** — based on alerts and metrics, stop the run, adjust hyperparameters, and launch a new run

```python
import trackio

trackio.init(project="my-project", config={"lr": 1e-4})

for step in range(num_steps):
    loss = train_step()
    trackio.log({"loss": loss, "step": step})

    if step > 100 and loss > 5.0:
        trackio.alert(
            title="Loss divergence",
            text=f"Loss {loss:.4f} still high after {step} steps",
            level=trackio.AlertLevel.ERROR,
        )
    if step > 0 and abs(loss) < 1e-8:
        trackio.alert(
            title="Vanishing loss",
            text="Loss near zero — possible gradient collapse",
            level=trackio.AlertLevel.WARN,
        )

trackio.finish()
```

Then poll from a separate terminal/process:

```bash
trackio list alerts --project my-project --json --since "2025-01-01T00:00:00"
```

.agents/skills/trackio/alerts.md

Lines changed: 199 additions & 0 deletions
# Trackio Alerts

Alerts let you flag important training events directly from code. They are the primary mechanism for LLM agents to diagnose runs and iterate autonomously on ML experiments.

Alerts are printed to the terminal, stored in the database, displayed in the dashboard, and optionally sent to webhooks (Slack/Discord).

<img width="2972" height="1694" alt="image" src="https://github.com/user-attachments/assets/02d938f8-51a9-4706-85c4-d95b7645bcf4" />

## Core API

### trackio.alert()

```python
trackio.alert(
    title="Loss divergence",                     # Short title (required)
    text="Loss 5.2 still high after 200 steps",  # Detailed description (optional)
    level=trackio.AlertLevel.WARN,               # INFO, WARN, or ERROR (default: WARN)
    webhook_url="https://hooks.slack.com/...",   # Per-alert webhook override (optional)
)
```

### Alert Levels

| Level | Usage |
|-------|-------|
| `trackio.AlertLevel.INFO` | Informational milestones (checkpoints saved, eval completed) |
| `trackio.AlertLevel.WARN` | Potential issues (loss plateau, low accuracy, high gradient norm) |
| `trackio.AlertLevel.ERROR` | Critical failures (NaN loss, divergence, OOM) |

### Webhook Support

Set a global webhook URL via `trackio.init()` or the `TRACKIO_WEBHOOK_URL` environment variable. Alerts are auto-formatted for Slack and Discord URLs.

```python
trackio.init(
    project="my-project",
    webhook_url="https://hooks.slack.com/services/...",
    webhook_min_level=trackio.AlertLevel.WARN,  # Only send WARN+ to webhook
)
```

Per-alert override:

```python
trackio.alert(
    title="Critical failure",
    level=trackio.AlertLevel.ERROR,
    webhook_url="https://hooks.slack.com/services/...",  # Overrides global URL
)
```

Environment variables:
- `TRACKIO_WEBHOOK_URL` — global webhook URL
- `TRACKIO_WEBHOOK_MIN_LEVEL` — minimum level for webhook delivery (`info`, `warn`, `error`)
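
The same configuration can be supplied via the environment instead of `trackio.init()`, for example in a launch script (the webhook URL below is a placeholder, not a real endpoint):

```shell
# Placeholder Slack webhook URL — substitute your own.
export TRACKIO_WEBHOOK_URL="https://hooks.slack.com/services/T000/B000/XXXX"
# Only forward error-level alerts to the webhook.
export TRACKIO_WEBHOOK_MIN_LEVEL="error"
```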

## Retrieving Alerts (CLI)

```bash
# List all alerts for a project
trackio list alerts --project my-project --json

# Filter by run or level
trackio list alerts --project my-project --run my-run --level error --json

# Poll for new alerts since a timestamp (efficient for agents)
trackio list alerts --project my-project --json --since "2025-06-01T12:00:00"
```

### JSON Output Structure

```json
{
  "project": "my-project",
  "run": null,
  "level": null,
  "since": "2025-06-01T12:00:00",
  "alerts": [
    {
      "run": "run-name",
      "title": "Loss divergence",
      "text": "Loss 5.2 still high after 200 steps",
      "level": "warn",
      "step": 200,
      "timestamp": "2025-06-01T12:05:30"
    }
  ]
}
```
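
An agent consuming this output might parse it and surface the highest-severity alerts first. A sketch using the sample payload above, embedded as a literal:

```python
import json

# Sample payload matching the JSON structure documented above.
raw = """
{
  "project": "my-project",
  "run": null,
  "level": null,
  "since": "2025-06-01T12:00:00",
  "alerts": [
    {"run": "run-name", "title": "Loss divergence",
     "text": "Loss 5.2 still high after 200 steps",
     "level": "warn", "step": 200, "timestamp": "2025-06-01T12:05:30"}
  ]
}
"""

SEVERITY = {"info": 0, "warn": 1, "error": 2}

payload = json.loads(raw)
# Keep only WARN-or-worse alerts, most severe first.
actionable = sorted(
    (a for a in payload["alerts"] if SEVERITY[a["level"]] >= SEVERITY["warn"]),
    key=lambda a: SEVERITY[a["level"]],
    reverse=True,
)
for alert in actionable:
    print(f'[{alert["level"]}] step {alert["step"]}: {alert["title"]}')
# prints: [warn] step 200: Loss divergence
```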

## Autonomous Agent Workflow

The recommended pattern for an LLM agent running ML experiments:

### 1. Insert Alerts Into Training Code

Add diagnostic `trackio.alert()` calls for conditions the agent should react to:

```python
import math

import trackio

trackio.init(project="hyperparam-sweep", config={"lr": lr, "batch_size": bs})

for step in range(num_steps):
    loss = train_step()
    trackio.log({"loss": loss, "step": step})

    if step > 200 and loss > 5.0:
        trackio.alert(
            title="Loss divergence",
            text=f"Loss {loss:.4f} still above 5.0 after {step} steps — learning rate may be too high",
            level=trackio.AlertLevel.ERROR,
        )

    # loss_delta is assumed to be tracked by the training loop (not shown).
    if step > 500 and loss_delta < 0.001:
        trackio.alert(
            title="Training stall",
            text=f"Loss barely changed over last 100 steps (delta={loss_delta:.6f})",
            level=trackio.AlertLevel.WARN,
        )

    if math.isnan(loss):
        trackio.alert(
            title="NaN loss",
            text="Loss became NaN — training is broken",
            level=trackio.AlertLevel.ERROR,
        )
        break

trackio.finish()
```

### 2. Monitor Alerts

Alerts are automatically printed to the terminal when fired. If the agent is watching the training script's output (e.g. running in the foreground or tailing logs), it will see alerts immediately — no polling needed.

For background or detached runs, poll for alerts via CLI:

```bash
# Poll for alerts (run periodically)
trackio list alerts --project hyperparam-sweep --json --since "2025-06-01T00:00:00"
```
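
A polling loop around that command might look like the sketch below. The `fetch_alerts` callable is injected so the same loop works with a stub; in practice it would shell out to `trackio list alerts ... --json` and parse stdout. The function names are illustrative, not Trackio API:

```python
from typing import Callable

def poll_alerts(fetch_alerts: Callable[[str], dict], since: str) -> tuple[list[dict], str]:
    """Fetch alerts newer than `since`; return them plus the advanced cursor.

    In practice fetch_alerts(since) would run
    `trackio list alerts --project <name> --json --since <since>`
    and json.loads() its stdout.
    """
    payload = fetch_alerts(since)
    alerts = payload.get("alerts", [])
    if alerts:
        # Advance the cursor so the next poll only sees newer alerts.
        since = max(a["timestamp"] for a in alerts)
    return alerts, since

# Stub fetcher mimicking the documented JSON shape.
def fake_fetch(since: str) -> dict:
    return {"alerts": [{"title": "Training stall", "level": "warn",
                        "step": 600, "timestamp": "2025-06-01T13:00:00"}]}

alerts, cursor = poll_alerts(fake_fetch, "2025-06-01T00:00:00")
```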

### 3. Inspect Metrics Around the Alert

When an alert fires, use `trackio get snapshot` to see all metrics at that point:

```bash
# Alert fired at step 200 — get all metrics in a ±5 step window
trackio get snapshot --project hyperparam-sweep --run run-1 --around 200 --window 5 --json

# Or inspect a single metric around the alert's timestamp
trackio get metric --project hyperparam-sweep --run run-1 --metric loss --around 200 --window 10 --json
```

### 4. React and Iterate

Based on alerts:
- **ERROR alerts** → stop the run, adjust hyperparameters, relaunch
- **WARN alerts** → inspect metrics with `trackio get snapshot ...`, decide whether to intervene
- **INFO alerts** → note progress, continue monitoring
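
That decision table can be encoded as a tiny dispatcher; a hypothetical sketch, where the action names are placeholders for whatever the agent actually does:

```python
def react(level: str) -> str:
    """Map an alert level to the agent action described above."""
    actions = {
        "error": "stop-and-relaunch",  # stop the run, adjust hyperparameters, relaunch
        "warn": "inspect-snapshot",    # look at metrics around the alert, maybe intervene
        "info": "continue",            # note progress, keep monitoring
    }
    return actions.get(level, "continue")
```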

### 5. Compare Across Runs

```bash
# Check metrics from previous runs
trackio get run --project hyperparam-sweep --run run-1 --json
trackio get metric --project hyperparam-sweep --run run-1 --metric loss --json

# Launch new run with adjusted config
python train.py --lr 5e-5
```

## Using Alerts with Transformers / TRL

When using `report_to="trackio"`, you don't control the training loop directly. Use a `TrainerCallback` to fire alerts:

```python
import trackio
from transformers import TrainerCallback
from trl import SFTConfig, SFTTrainer

class AlertCallback(TrainerCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        if "trackio" not in args.report_to:
            return
        if logs and "loss" in logs:
            if logs["loss"] > 5.0 and state.global_step > 100:
                trackio.alert(
                    title="High loss",
                    text=f"Loss {logs['loss']:.4f} at step {state.global_step}",
                    level=trackio.AlertLevel.ERROR,
                )

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="./out", report_to="trackio"),
    callbacks=[AlertCallback()],
    ...
)
```
