02.2 Onboarding Wizard
The onboarding wizard provides an interactive setup experience for first-time ZeroClaw users, guiding them through provider selection, channel configuration, security setup, and workspace initialization. This document covers the wizard's architecture, execution flow, and configuration generation process.
For information about the configuration file structure generated by the wizard, see Configuration File Reference. For details on using channels after setup, see Channel Implementations.
ZeroClaw's onboarding system supports three distinct modes, each serving different use cases:
| Mode | Entry Point | Use Case | Prompts | Duration |
|---|---|---|---|---|
| Interactive | `zeroclaw onboard --interactive` | First-time setup with full customization | 9 steps | ~60 seconds |
| Quick Setup | `zeroclaw onboard` (no flags) | Automated setup with defaults | 0 prompts | ~2 seconds |
| Channels Repair | `zeroclaw onboard --channels-only` | Update channel tokens without full re-onboarding | 1 step | ~15 seconds |
The full wizard workflow is implemented in run_wizard() src/onboard/wizard.rs:61-196 and presents a sequential 9-step configuration flow:
```mermaid
flowchart TD
Start["run_wizard()"] --> Banner["Display ASCII Banner"]
Banner --> Step1["Step 1: Workspace Setup<br/>setup_workspace()"]
Step1 --> Step2["Step 2: Provider & API Key<br/>setup_provider()"]
Step2 --> Step3["Step 3: Channels<br/>setup_channels()"]
Step3 --> Step4["Step 4: Tunnel<br/>setup_tunnel()"]
Step4 --> Step5["Step 5: Tool Mode & Security<br/>setup_tool_mode()"]
Step5 --> Step6["Step 6: Hardware<br/>setup_hardware()"]
Step6 --> Step7["Step 7: Memory<br/>setup_memory()"]
Step7 --> Step8["Step 8: Project Context<br/>setup_project_context()"]
Step8 --> Step9["Step 9: Scaffold Workspace<br/>scaffold_workspace()"]
Step9 --> BuildConfig["Build Config struct"]
BuildConfig --> SaveConfig["config.save()"]
SaveConfig --> PersistMarker["persist_workspace_selection()"]
PersistMarker --> Summary["print_summary()"]
Summary --> AutoLaunch{"Launch channels<br/>immediately?"}
AutoLaunch -->|Yes| SetEnv["Set ZEROCLAW_AUTOSTART_CHANNELS=1"]
AutoLaunch -->|No| End["Return Config"]
SetEnv --> End
```
Sources: src/onboard/wizard.rs:61-196
Non-interactive configuration generation via run_quick_setup() src/onboard/wizard.rs:301-463:
```mermaid
flowchart LR
Start["run_quick_setup()"] --> ResolveArgs["Parse CLI args:<br/>--api-key<br/>--provider<br/>--memory"]
ResolveArgs --> Defaults["Apply defaults:<br/>provider=openrouter<br/>memory=sqlite<br/>model=auto"]
Defaults --> CreateDirs["Create ~/.zeroclaw/<br/>workspace/"]
CreateDirs --> BuildConfig["Build Config with<br/>AutonomyConfig::default()<br/>SecretsConfig::default()<br/>etc."]
BuildConfig --> SaveConfig["config.save()"]
SaveConfig --> ScaffoldMin["Scaffold minimal<br/>workspace files"]
ScaffoldMin --> PrintSummary["Print next steps"]
PrintSummary --> Return["Return Config"]
```
Sources: src/onboard/wizard.rs:301-463
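The defaulting step above can be sketched as a small pure function. This is a hypothetical illustration, not the repo's actual code: absent CLI flags fall back to the documented defaults (`provider=openrouter`, `memory=sqlite`, `model=auto`).

```rust
/// Hypothetical sketch of quick-setup defaulting: optional CLI
/// values fall back to the defaults named in the diagram above.
fn apply_quick_setup_defaults(
    provider: Option<&str>,
    memory: Option<&str>,
) -> (String, String, String) {
    let provider = provider.unwrap_or("openrouter").to_string();
    let memory = memory.unwrap_or("sqlite").to_string();
    // The model default is a placeholder resolved later per provider.
    let model = "auto".to_string();
    (provider, memory, model)
}
```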
Updates channel configuration without redoing full onboarding via run_channels_repair_wizard() src/onboard/wizard.rs:199-255:
```mermaid
flowchart TD
Start["run_channels_repair_wizard()"] --> Load["Config::load_or_init()"]
Load --> Prompt["setup_channels()"]
Prompt --> Update["Update config.channels_config"]
Update --> Save["config.save()"]
Save --> Persist["persist_workspace_selection()"]
Persist --> AutoLaunch{"Launch channels?"}
AutoLaunch -->|Yes| SetFlag["ZEROCLAW_AUTOSTART_CHANNELS=1"]
AutoLaunch -->|No| Done
SetFlag --> Done["Return Config"]
```
Sources: src/onboard/wizard.rs:199-255
Function: setup_workspace() src/onboard/wizard.rs:1446-1484
Creates the ZeroClaw workspace directory structure:
```
~/.zeroclaw/
├── config.toml            # Main configuration file
├── workspace/             # Working directory for agent
│   ├── AGENTS.md          # Agent definitions (scaffolded in Step 9)
│   ├── SOUL.md            # Identity & personality
│   ├── TOOLS.md           # Tool descriptions
│   ├── IDENTITY.md        # Agent identity
│   ├── USER.md            # User preferences
│   └── MEMORY.md          # Long-term memory
└── state/                 # Runtime state (created automatically)
    ├── brain.db           # SQLite memory backend
    └── models_cache.json  # Cached model lists
```
Default workspace location is `~/.zeroclaw/workspace`, but it can be customized. The wizard prompts:
- "Use default workspace location?" (Yes/No)
- If No: "Enter workspace path" (accepts `~` expansion via `shellexpand::tilde`)
Sources: src/onboard/wizard.rs:1446-1484
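The `~` expansion step can be sketched as follows. This is a minimal illustration of what `shellexpand::tilde` does, not its actual implementation: the real crate also handles `~user` forms and reads the home directory itself, while here `home` is passed in explicitly.

```rust
/// Minimal sketch of tilde expansion; `home` stands in for the
/// value of the HOME environment variable.
fn expand_tilde(path: &str, home: &str) -> String {
    match path.strip_prefix("~/") {
        Some(rest) => format!("{}/{}", home, rest),
        None if path == "~" => home.to_string(),
        None => path.to_string(),
    }
}
```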
Function: setup_provider() src/onboard/wizard.rs:1489-1987
This is the most complex wizard step, implementing multi-tiered provider selection with live model discovery.
```mermaid
flowchart TD
Start["setup_provider()"] --> TierSelect["Select provider tier:<br/>⭐ Recommended<br/>⚡ Fast inference<br/>🌐 Gateway/proxy<br/>🔬 Specialized<br/>🏠 Local<br/>🔧 Custom"]
TierSelect -->|Custom| CustomFlow["Manual entry:<br/>- Base URL<br/>- API key<br/>- Model name"]
TierSelect -->|Other| ProviderList["Select from<br/>curated provider list"]
ProviderList --> APIKeyPrompt{"Provider needs<br/>API key?"}
APIKeyPrompt -->|Ollama| OllamaFlow["Remote Ollama?<br/>- Yes: URL + key<br/>- No: localhost"]
APIKeyPrompt -->|Gemini| GeminiFlow["Check CLI credentials<br/>has_cli_credentials()?"]
GeminiFlow -->|Found| CliAuth["Use CLI OAuth tokens"]
GeminiFlow -->|Not found| ManualKey["Prompt for API key"]
APIKeyPrompt -->|Other| PromptKey["Prompt for API key<br/>or skip"]
OllamaFlow --> ModelSelect
CliAuth --> ModelSelect
ManualKey --> ModelSelect
PromptKey --> ModelSelect
ModelSelect["Model Selection"] --> LiveFetch{"supports_live_model_fetch()?"}
LiveFetch -->|Yes| CheckCache["load_cached_models_for_provider()"]
CheckCache -->|Found| AskRefresh["Refresh models now?"]
CheckCache -->|Not found| FetchNow["fetch_live_models_for_provider()"]
AskRefresh -->|Yes| FetchNow
AskRefresh -->|No| UseCached["Use cached models"]
FetchNow -->|Success| CacheModels["cache_live_models_for_provider()"]
FetchNow -->|Failure| Fallback["Use curated list"]
CacheModels --> ChooseSource["Source: Provider list<br/>or Curated list?"]
UseCached --> ChooseSource
LiveFetch -->|No| CuratedOnly["Use curated<br/>model list only"]
Fallback --> CuratedOnly
CuratedOnly --> SelectModel
ChooseSource --> SelectModel["Select model<br/>from dropdown"]
SelectModel --> CustomSentinel{"model ==<br/>CUSTOM_MODEL_SENTINEL?"}
CustomSentinel -->|Yes| ManualModel["Prompt:<br/>Enter custom model ID"]
CustomSentinel -->|No| Done
ManualModel --> Done["Return:<br/>(provider, api_key,<br/>model, api_url)"]
```
Sources: src/onboard/wizard.rs:1489-1987
The wizard organizes 28+ providers into 6 tiers src/onboard/wizard.rs:1491-1572:
| Tier | Providers | Description |
|---|---|---|
| Recommended | `openrouter`, `venice`, `anthropic`, `openai`, `openai-codex`, `deepseek`, `mistral`, `xai`, `perplexity`, `gemini` | General-purpose production providers with broad model catalogs |
| Fast Inference | `groq`, `fireworks`, `together-ai`, `nvidia` | Low-latency providers for speed-critical applications |
| Gateway/Proxy | `vercel`, `cloudflare`, `astrai`, `bedrock` | Multi-provider routing and compliance layers |
| Specialized | `kimi-code`, `moonshot`, `glm`, `minimax`, `qwen`, `qianfan`, `zai`, `synthetic`, `opencode`, `cohere` | Region-specific or domain-specific providers |
| Local | `ollama` | On-device inference (no API key required) |
| Custom | User-defined | Any OpenAI-compatible API endpoint |
The wizard attempts to fetch real-time model lists from provider APIs when available. The model fetching architecture uses a three-tier caching strategy:
```mermaid
flowchart TD
Request["fetch_live_models_for_provider()"] --> ProviderCheck{"Provider supports<br/>live fetch?"}
ProviderCheck -->|No| ReturnEmpty["return Vec::new()"]
ProviderCheck -->|Yes| EndpointResolve["models_endpoint_for_provider()"]
EndpointResolve --> ProviderType{Provider type}
ProviderType -->|openrouter| FetchOR["fetch_openrouter_models()"]
ProviderType -->|anthropic| FetchAnthropic["fetch_anthropic_models()"]
ProviderType -->|gemini| FetchGemini["fetch_gemini_models()"]
ProviderType -->|ollama| OllamaCheck{"API key present?"}
OllamaCheck -->|No| LocalOllama["fetch_ollama_models()<br/>localhost:11434/api/tags"]
OllamaCheck -->|Yes| CloudOllama["Return hardcoded<br/>Ollama Cloud list"]
ProviderType -->|other| FetchOAI["fetch_openai_compatible_models()"]
FetchOR --> ParseJSON["parse_openai_compatible_model_ids()"]
FetchAnthropic --> HandleOAuth{"Key starts with<br/>sk-ant-oat01-?"}
HandleOAuth -->|Yes| OAuthHeader["Authorization: Bearer<br/>+ anthropic-beta header"]
HandleOAuth -->|No| APIKeyHeader["x-api-key header"]
OAuthHeader --> ParseJSON
APIKeyHeader --> ParseJSON
FetchGemini --> ParseGemini["parse_gemini_model_ids()<br/>filter by supportedGenerationMethods"]
LocalOllama --> ParseOllama["parse_ollama_model_ids()"]
CloudOllama --> ReturnList
FetchOAI --> ParseJSON
ParseJSON --> ReturnList["Return Vec<String>"]
ParseGemini --> ReturnList
ParseOllama --> ReturnList
```
Sources: src/onboard/wizard.rs:1099-1156, src/onboard/wizard.rs:1016-1097
The wizard maintains a persistent cache at workspace/state/models_cache.json to avoid repeated API calls:
Cache Entry Structure:
```rust
#[derive(Serialize, Deserialize)]
struct ModelCacheEntry {
    provider: String,
    fetched_at_unix: u64,
    models: Vec<String>,
}

#[derive(Serialize, Deserialize)]
struct ModelCacheState {
    entries: Vec<ModelCacheEntry>,
}
```

Cache Operations:
- `load_cached_models_for_provider(workspace_dir, provider_name, ttl_secs)` - Returns cached models if TTL not exceeded
- `cache_live_models_for_provider(workspace_dir, provider_name, models)` - Persists fetched models to disk
- `load_any_cached_models_for_provider(workspace_dir, provider_name)` - Returns stale cache (no TTL check)

Cache TTL is `MODEL_CACHE_TTL_SECS = 12 * 60 * 60` (12 hours) src/onboard/wizard.rs:56.
Sources: src/onboard/wizard.rs:1158-1294
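The TTL check at the heart of `load_cached_models_for_provider()` can be sketched as below. This is an illustration of the described behavior, not the repo's code: the real function also reads and deserializes `models_cache.json`, which is omitted here.

```rust
struct ModelCacheEntry {
    provider: String,
    fetched_at_unix: u64,
    models: Vec<String>,
}

/// Return a provider's cached models only if the entry is younger
/// than `ttl_secs`; otherwise report a cache miss with None.
fn cached_models<'a>(
    entries: &'a [ModelCacheEntry],
    provider: &str,
    now_unix: u64,
    ttl_secs: u64,
) -> Option<&'a [String]> {
    entries
        .iter()
        .find(|e| e.provider == provider)
        .filter(|e| now_unix.saturating_sub(e.fetched_at_unix) <= ttl_secs)
        .map(|e| e.models.as_slice())
}
```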
Function: setup_channels() src/onboard/wizard.rs:2304-2772
Configures messaging platform integrations. The wizard prompts for each channel type and collects platform-specific credentials.
Supported Channel Types:
| Channel | Config Type | Required Fields | Optional Fields |
|---|---|---|---|
| Telegram | `TelegramConfig` | `bot_token`, `allowed_users` | `stream_mode`, `draft_update_interval_ms`, `mention_only` |
| Discord | `DiscordConfig` | `bot_token` | `guild_id`, `allowed_users`, `listen_to_bots`, `mention_only` |
| Slack | `SlackConfig` | `bot_token` | `app_token`, `channel_id`, `allowed_users` |
| Matrix | `MatrixConfig` | `homeserver`, `access_token`, `room_id`, `allowed_users` | `user_id`, `device_id` |
| WhatsApp | `WhatsAppConfig` | `access_token`, `phone_number_id`, `verify_token` | `app_secret`, `allowed_numbers` |
| Email | `EmailConfig` | `imap_server`, `smtp_server`, `username`, `password` | `imap_port`, `smtp_port`, etc. |
| IRC | `IrcConfig` | `server`, `nickname`, `channels` | `server_password`, `nickserv_password`, `sasl_password` |
| Lark/Feishu | `LarkConfig` | `app_id`, `app_secret` | `encrypt_key`, `verification_token`, `receive_mode` |
| DingTalk | `DingTalkConfig` | `client_id`, `client_secret` | `allowed_users` |
| QQ | `QQConfig` | `app_id`, `app_secret` | `allowed_users` |
The wizard offers three paths for each channel:
1. Enter credentials manually - Prompts for required fields
2. Import from environment variables - Reads from `{CHANNEL}_*` env vars
3. Skip this channel - Leaves config as `None`
Sources: src/onboard/wizard.rs:2304-2772, src/config/schema.rs:1965-2004
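The environment-import path can be sketched like this. The variable-naming helper is hypothetical (the source only states the `{CHANNEL}_*` pattern, not exact names), so treat the formatting rule as an assumption.

```rust
use std::env;

/// Hypothetical helper: derive an env var name from channel and
/// field, following the {CHANNEL}_* pattern described above.
fn env_var_name(channel: &str, field: &str) -> String {
    format!("{}_{}", channel.to_uppercase(), field.to_uppercase())
}

/// Look up a channel credential in the environment, treating
/// empty or whitespace-only values as absent.
fn channel_env(channel: &str, field: &str) -> Option<String> {
    env::var(env_var_name(channel, field))
        .ok()
        .filter(|v| !v.trim().is_empty())
}
```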
Function: setup_tunnel() src/onboard/wizard.rs:2774-2926
Configures secure tunneling for exposing the local gateway to the internet. Supports four tunnel providers:
```mermaid
flowchart TD
Start["setup_tunnel()"] --> Prompt["Select tunnel provider"]
Prompt --> Choice{Provider}
Choice -->|none| None["TunnelConfig::default()<br/>provider='none'"]
Choice -->|cloudflare| CFPrompt["Enter Cloudflare<br/>Tunnel token"]
Choice -->|tailscale| TSPrompt["Funnel (public)<br/>or Serve (tailnet)?"]
Choice -->|ngrok| NgrokPrompt["Enter ngrok<br/>auth token"]
Choice -->|custom| CustomPrompt["Enter custom<br/>start command"]
CFPrompt --> CFConfig["CloudflareTunnelConfig{<br/>token: String<br/>}"]
TSPrompt --> TSConfig["TailscaleTunnelConfig{<br/>funnel: bool<br/>hostname: Option<String><br/>}"]
NgrokPrompt --> NgrokConfig["NgrokTunnelConfig{<br/>auth_token: String<br/>domain: Option<String><br/>}"]
CustomPrompt --> CustomConfig["CustomTunnelConfig{<br/>start_command: String<br/>health_url: Option<String><br/>url_pattern: Option<String><br/>}"]
None --> Return["TunnelConfig"]
CFConfig --> Return
TSConfig --> Return
NgrokConfig --> Return
CustomConfig --> Return
```
Custom Tunnel Command Templating:
The `start_command` field supports placeholders:
- `{port}` - Replaced with the gateway port from `config.gateway.port`
- `{host}` - Replaced with the gateway host from `config.gateway.host`

Example: `"bore local {port} --to bore.pub"`
Sources: src/onboard/wizard.rs:2774-2926, src/config/schema.rs:1899-1961
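The placeholder substitution above amounts to simple string replacement. A minimal sketch (illustrative, not the repo's implementation):

```rust
/// Render a custom tunnel start command by substituting the
/// {port} and {host} placeholders with gateway config values.
fn render_start_command(template: &str, host: &str, port: u16) -> String {
    template
        .replace("{port}", &port.to_string())
        .replace("{host}", host)
}
```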
Function: setup_tool_mode() src/onboard/wizard.rs:2026-2112
Configures external tool integrations and secret encryption:
```mermaid
flowchart LR
Start["setup_tool_mode()"] --> ToolChoice["Sovereign or<br/>Composio?"]
ToolChoice -->|Sovereign| SovereignConfig["ComposioConfig{<br/>enabled: false<br/>}"]
ToolChoice -->|Composio| ComposioPrompt["Enter Composio<br/>API key"]
ComposioPrompt --> ComposioConfig["ComposioConfig{<br/>enabled: true<br/>api_key: Some(key)<br/>entity_id: 'default'<br/>}"]
SovereignConfig --> EncryptPrompt
ComposioConfig --> EncryptPrompt["Enable encrypted<br/>secret storage?"]
EncryptPrompt -->|Yes| EncryptOn["SecretsConfig{<br/>encrypt: true<br/>}"]
EncryptPrompt -->|No| EncryptOff["SecretsConfig{<br/>encrypt: false<br/>}"]
EncryptOn --> Return["Return (ComposioConfig,<br/>SecretsConfig)"]
EncryptOff --> Return
```
Composio vs Sovereign Mode:
| Mode | Key Management | OAuth Support | Privacy | Use Case |
|---|---|---|---|---|
| Sovereign | Manual (env vars / config) | No | Full local control | Self-hosted, security-critical |
| Composio | Managed OAuth | 1000+ apps | Credentials shared with Composio | Rapid integration, convenience |
When `secrets.encrypt = true`, the system uses `SecretStore` with ChaCha20-Poly1305 AEAD encryption. The encryption key is stored at `~/.zeroclaw/.secret_key`. See Secret Management for details.
Sources: src/onboard/wizard.rs:2026-2112, src/config/schema.rs:598-641
Function: setup_hardware() src/onboard/wizard.rs:2116-2302
Configures physical hardware connections for embedded systems and IoT devices:
```mermaid
flowchart TD
Start["setup_hardware()"] --> Discover["hardware::discover_hardware()"]
Discover --> Display["Display discovered<br/>devices with transport"]
Display --> ModePrompt["Select interaction mode:<br/>🚀 Native GPIO<br/>🔌 Tethered USB<br/>🔬 Debug Probe<br/>☁️ Software Only"]
ModePrompt -->|Native| NativeConfig["HardwareConfig{<br/>enabled: true<br/>transport: Native<br/>}"]
ModePrompt -->|Tethered| SerialPrompt["Select serial port"]
SerialPrompt --> BaudPrompt["Baud rate? (115200)"]
BaudPrompt --> SerialConfig["HardwareConfig{<br/>enabled: true<br/>transport: Serial<br/>serial_port: Some(path)<br/>baud_rate: 115200<br/>}"]
ModePrompt -->|Probe| ProbePrompt["Select probe target"]
ProbePrompt --> ProbeConfig["HardwareConfig{<br/>enabled: true<br/>transport: Probe<br/>probe_target: Some(chip)<br/>}"]
ModePrompt -->|Software Only| DisabledConfig["HardwareConfig{<br/>enabled: false<br/>transport: None<br/>}"]
NativeConfig --> DatasheetRAG
SerialConfig --> DatasheetRAG
ProbeConfig --> DatasheetRAG
DisabledConfig --> Return
DatasheetRAG["Enable workspace<br/>datasheet RAG?"] -->|Yes| RAGOn["workspace_datasheets: true"]
DatasheetRAG -->|No| RAGOff["workspace_datasheets: false"]
RAGOn --> Return["Return HardwareConfig"]
RAGOff --> Return
```
Device Discovery:
The wizard calls `hardware::discover_hardware()`, which scans for:
- GPIO-capable SBCs (Raspberry Pi, Orange Pi, etc.) via `/sys/class/gpio`
- Serial devices (Arduino, ESP32, STM32) via `/dev/ttyACM*`, `/dev/ttyUSB*`
- Debug probes (ST-Link, J-Link) via USB VID/PID matching
Each discovered device returns a DiscoveredDevice struct:
```rust
pub struct DiscoveredDevice {
    pub name: String,
    pub device_path: Option<String>,
    pub transport: HardwareTransport,
    pub detail: Option<String>,
}
```

Datasheet RAG:
When workspace_datasheets: true, ZeroClaw indexes PDF datasheets placed in workspace/datasheets/ and enables the agent to query pin mappings and electrical specifications during hardware interactions.
Sources: src/onboard/wizard.rs:2116-2302, src/config/schema.rs:196-241
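The serial-device portion of discovery boils down to pattern-matching `/dev` entry names. A minimal, hypothetical sketch (the real `discover_hardware()` also probes GPIO sysfs and matches USB VID/PIDs):

```rust
/// Filter /dev entry names down to serial-device candidates
/// matching the ttyACM*/ttyUSB* patterns listed above.
fn serial_device_paths(dev_entries: &[&str]) -> Vec<String> {
    dev_entries
        .iter()
        .filter(|n| n.starts_with("ttyACM") || n.starts_with("ttyUSB"))
        .map(|n| format!("/dev/{}", n))
        .collect()
}
```

In practice the entry names would come from `std::fs::read_dir("/dev")`; taking them as a slice keeps the sketch testable.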
Function: setup_memory() src/onboard/wizard.rs:2928-3021
Selects the persistent memory backend for conversation history and long-term knowledge:
```mermaid
flowchart TD
Start["setup_memory()"] --> Options["Display selectable<br/>backends from<br/>selectable_memory_backends()"]
Options --> Select["User selects:<br/>- SQLite (recommended)<br/>- Lucid (vector+keyword)<br/>- PostgreSQL (remote)<br/>- Markdown (human-readable)<br/>- None (ephemeral)"]
Select --> ProfileLookup["memory_backend_profile(backend)"]
ProfileLookup --> BuildConfig["MemoryConfig{<br/>backend: key<br/>auto_save: profile.auto_save_default<br/>hygiene_enabled: profile.uses_sqlite_hygiene<br/>embedding_provider: 'none'<br/>...defaults...<br/>}"]
BuildConfig --> SQLiteCheck{backend == 'sqlite'?}
SQLiteCheck -->|Yes| OpenTimeoutPrompt["Set open timeout?<br/>(prevent lockups)"]
OpenTimeoutPrompt -->|Yes| SetTimeout["sqlite_open_timeout_secs:<br/>Some(300)"]
OpenTimeoutPrompt -->|No| NoTimeout["sqlite_open_timeout_secs:<br/>None"]
SetTimeout --> Return
NoTimeout --> Return
SQLiteCheck -->|No| Return["Return MemoryConfig"]
```
Backend Profile Matrix:
| Backend | Profile Key | Auto-save Default | Hygiene | Vector Search | Keyword Search | Use Case |
|---|---|---|---|---|---|---|
| SQLite | `sqlite` | `true` | Yes | Optional (embeddings) | FTS5 | General-purpose, local, full-stack |
| Lucid | `lucid` | `true` | No | Required | Built-in | Advanced search, RAG-heavy |
| PostgreSQL | `postgres` | `true` | No | Via pgvector extension | Via tsvector | Multi-user, remote |
| Markdown | `markdown` | `false` | No | No | Grep-like | Human-readable, git-friendly |
| None | `none` | `false` | No | No | No | Stateless, ephemeral |
The wizard calls `memory_backend_profile(backend)` [src/memory/mod.rs] to retrieve defaults for each backend. These profiles define:
- `auto_save_default: bool` - Whether to auto-save conversations
- `uses_sqlite_hygiene: bool` - Whether to run hygiene passes (archiving old files)
- `description: &'static str` - Human-readable description for wizard display
Sources: src/onboard/wizard.rs:2928-3021, [src/memory/mod.rs] (referenced)
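A profile lookup matching the matrix above could look like the sketch below. The field names follow the doc's list, but the struct shape and descriptions are illustrative assumptions, not the repo's actual `memory_backend_profile()`.

```rust
/// Illustrative backend profile carrying the defaults from the
/// matrix above; the real struct may have additional fields.
struct BackendProfile {
    auto_save_default: bool,
    uses_sqlite_hygiene: bool,
    description: &'static str,
}

fn memory_backend_profile(backend: &str) -> BackendProfile {
    match backend {
        "sqlite" => BackendProfile {
            auto_save_default: true,
            uses_sqlite_hygiene: true,
            description: "General-purpose local backend",
        },
        "lucid" => BackendProfile {
            auto_save_default: true,
            uses_sqlite_hygiene: false,
            description: "Vector + keyword search",
        },
        "postgres" => BackendProfile {
            auto_save_default: true,
            uses_sqlite_hygiene: false,
            description: "Remote multi-user backend",
        },
        "markdown" => BackendProfile {
            auto_save_default: false,
            uses_sqlite_hygiene: false,
            description: "Human-readable markdown files",
        },
        _ => BackendProfile {
            auto_save_default: false,
            uses_sqlite_hygiene: false,
            description: "Ephemeral (no persistence)",
        },
    }
}
```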
Function: setup_project_context() src/onboard/wizard.rs:3023-3083
Collects personalization data to scaffold identity files in the workspace:
```rust
#[derive(Debug, Clone, Default)]
pub struct ProjectContext {
    pub user_name: String,           // User's name
    pub timezone: String,            // User's timezone (e.g. "America/Los_Angeles")
    pub agent_name: String,          // Agent's name (default: "ZeroClaw")
    pub communication_style: String, // Personality description
}
```

The wizard prompts for:
1. User name - Stored in `USER.md`
2. Timezone - Used for date/time context in system prompts
3. Agent name - Becomes the agent's identity in `IDENTITY.md`
4. Communication style - Free-form personality description
Default communication style:
"Be warm, natural, and clear. Use occasional relevant emojis (1-2 max) and avoid robotic phrasing."
Sources: src/onboard/wizard.rs:28-34, src/onboard/wizard.rs:3023-3083
Function: scaffold_workspace() src/onboard/wizard.rs:3085-3363
Creates identity and context files in the workspace directory using the ProjectContext from Step 8:
Generated Files:
| File | Purpose | Content Source |
|---|---|---|
| `AGENTS.md` | Agent definitions and specialization | Template populated with `agent_name` |
| `SOUL.md` | Core personality and values | Template populated with `communication_style` |
| `TOOLS.md` | Tool usage philosophy | Static template |
| `IDENTITY.md` | Agent identity and capabilities | Template with `agent_name` |
| `USER.md` | User preferences and context | Template with `user_name` and `timezone` |
| `MEMORY.md` | Long-term memory (initially empty) | Empty file |
| `BOOTSTRAP.md` | First-run instructions (optional) | Created only if it doesn't exist |
These files are loaded into the system prompt by build_system_prompt() src/channels/mod.rs:888-1058 during agent initialization. See System Prompt Construction for details.
Sources: src/onboard/wizard.rs:3085-3363, src/channels/mod.rs:888-1058
The wizard builds a Config struct by assembling subsystem configurations from each step:
```mermaid
flowchart TD
Start["Wizard Step Results"] --> Assemble["Assemble Config struct"]
Assemble --> CoreFields["Core fields:<br/>- workspace_dir (Step 1)<br/>- config_path (Step 1)<br/>- api_key (Step 2)<br/>- default_provider (Step 2)<br/>- default_model (Step 2)<br/>- default_temperature (0.7)"]
Assemble --> SubsystemConfigs["Subsystem configs:<br/>- channels_config (Step 3)<br/>- tunnel (Step 4)<br/>- composio (Step 5)<br/>- secrets (Step 5)<br/>- hardware (Step 6)<br/>- memory (Step 7)"]
Assemble --> DefaultConfigs["Default configs:<br/>- autonomy (Supervised)<br/>- runtime (Native)<br/>- reliability (2 retries)<br/>- scheduler (enabled)<br/>- gateway (port 3000)<br/>- observability (none)<br/>- browser (disabled)<br/>- http_request (disabled)"]
CoreFields --> MergeConfig["Config { ... }"]
SubsystemConfigs --> MergeConfig
DefaultConfigs --> MergeConfig
MergeConfig --> Serialize["Serialize to TOML"]
Serialize --> Encrypt{"secrets.encrypt?"}
Encrypt -->|Yes| EncryptKeys["SecretStore::encrypt()<br/>API keys → ciphertext"]
Encrypt -->|No| PlaintextKeys["API keys → plaintext"]
EncryptKeys --> Write
PlaintextKeys --> Write["Write to<br/>~/.zeroclaw/config.toml"]
Write --> Marker["Write active_workspace.toml<br/>in ~/.zeroclaw/"]
Marker --> Done["Config saved"]
```
Sources: src/onboard/wizard.rs:105-158, src/config/schema.rs:48-144
The `Config::save()` method [src/config/schema.rs] performs:
1. Secret encryption (if `secrets.encrypt = true`):
   - API keys wrapped in `SecretStore::encrypt()`
   - Ciphertext stored in config.toml
   - Encryption key stored in `~/.zeroclaw/.secret_key`
2. TOML serialization:
   - `config_path` and `workspace_dir` fields are `#[serde(skip)]`
   - Subsystem configs serialized as nested tables
   - Empty `Option<T>` fields omitted from output
3. Active workspace marker:
   - `persist_workspace_selection()` writes `active_workspace.toml`
   - Contains a single field: `config_dir = "~/.zeroclaw"`
   - Allows multi-workspace support via the `ZEROCLAW_WORKSPACE` env var
Sources: [src/config/schema.rs] (referenced for save() implementation)
The wizard supports 28+ LLM providers across 6 tiers. Provider-specific logic includes:
Function: provider_env_var() src/onboard/wizard.rs:1990-2022
Maps canonical provider names to conventional environment variables:
```rust
match canonical_provider_name(name) {
    "openrouter" => "OPENROUTER_API_KEY",
    "anthropic" => "ANTHROPIC_API_KEY",
    "openai" => "OPENAI_API_KEY",
    "ollama" => "OLLAMA_API_KEY",
    "venice" => "VENICE_API_KEY",
    "groq" => "GROQ_API_KEY",
    "mistral" => "MISTRAL_API_KEY",
    // ... 20+ more mappings
}
```

Sources: src/onboard/wizard.rs:1990-2022
Function: default_model_for_provider() src/onboard/wizard.rs:496-522
Provides sensible model defaults when user skips model selection:
| Provider | Default Model | Rationale |
|---|---|---|
| `anthropic` | `claude-sonnet-4-5-20250929` | Balanced capability/cost |
| `openrouter` | `anthropic/claude-sonnet-4.6` | Widely available flagship |
| `openai` | `gpt-5.2` | Latest flagship |
| `ollama` | `llama3.2` | Most popular local model |
| `groq` | `llama-3.3-70b-versatile` | Best speed/quality |
| `deepseek` | `deepseek-chat` | Maps to V3.2 |
| `gemini` | `gemini-2.5-pro` | Production-ready reasoning |
Sources: src/onboard/wizard.rs:496-522
Function: curated_models_for_provider() src/onboard/wizard.rs:524-855
Each provider has a hand-picked list of recommended models for the wizard dropdown. Example for anthropic:
```rust
vec![
    ("claude-sonnet-4-5-20250929", "Claude Sonnet 4.5 (balanced, recommended)"),
    ("claude-opus-4-6", "Claude Opus 4.6 (best quality)"),
    ("claude-haiku-4-5-20251001", "Claude Haiku 4.5 (fastest, cheapest)"),
]
```

These lists are shown as a fallback when live model fetching fails or is unsupported.
Sources: src/onboard/wizard.rs:524-855
Special authentication flows for specific providers:
Anthropic:
- Detects OAuth setup-tokens via the prefix `sk-ant-oat01-`
- Falls back to the `ANTHROPIC_OAUTH_TOKEN` env var
- Uses an `Authorization: Bearer` header instead of `x-api-key` for OAuth

Gemini:
- Calls `GeminiProvider::has_cli_credentials()` to detect CLI auth
- Reuses existing CLI tokens from the `~/.gemini/` directory
- Falls back to the `GEMINI_API_KEY` env var

MiniMax:
- Supports OAuth tokens via the `MINIMAX_OAUTH_TOKEN` env var
- Primary API key is `MINIMAX_API_KEY`
Sources: src/onboard/wizard.rs:1719-1755, [src/providers/gemini.rs] (referenced)
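The Anthropic header selection described above can be sketched as a small classifier. This is an illustration of the documented rule, not the repo's actual function:

```rust
/// Pick the auth header for an Anthropic key: OAuth setup-tokens
/// (prefix sk-ant-oat01-) use a Bearer Authorization header, while
/// ordinary API keys go through x-api-key.
fn anthropic_auth_header(key: &str) -> (&'static str, String) {
    if key.starts_with("sk-ant-oat01-") {
        ("Authorization", format!("Bearer {}", key))
    } else {
        ("x-api-key", key.to_string())
    }
}
```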
The wizard implements a sophisticated model discovery system with caching and fallback logic.
```mermaid
sequenceDiagram
participant Wizard as setup_provider()
participant Cache as ModelCacheState
participant Fetch as fetch_live_models_for_provider()
participant API as Provider API
Wizard->>Cache: load_cached_models_for_provider(provider, TTL=12h)
alt Cache hit (within TTL)
Cache-->>Wizard: CachedModels { models, age_secs }
Wizard->>Wizard: Display "cached, updated Xh ago"
Wizard->>Wizard: Prompt "Refresh now?"
alt User declines refresh
Wizard->>Wizard: Use cached models
end
end
alt Cache miss OR user requests refresh
Wizard->>Fetch: fetch_live_models_for_provider(provider, api_key)
Fetch->>Fetch: models_endpoint_for_provider(provider)
Fetch->>API: GET /v1/models (with Bearer auth)
API-->>Fetch: JSON response
Fetch->>Fetch: parse_openai_compatible_model_ids(payload)
Fetch-->>Wizard: Vec<String> (model IDs)
alt Fetch succeeded
Wizard->>Cache: cache_live_models_for_provider(workspace, provider, models)
Cache->>Cache: Update ModelCacheState.entries
Cache->>Cache: Write to workspace/state/models_cache.json
Wizard->>Wizard: Display fetched models
else Fetch failed
Wizard->>Cache: load_any_cached_models_for_provider(provider)
alt Stale cache available
Cache-->>Wizard: Stale models (ignore TTL)
Wizard->>Wizard: Display "using stale cache"
else No cache
Wizard->>Wizard: Fall back to curated list
end
end
end
```
Sources: src/onboard/wizard.rs:1826-1927
Function: models_endpoint_for_provider() src/onboard/wizard.rs:882-908
Maps provider names to their OpenAI-compatible /v1/models endpoints:
```rust
match canonical_provider_name(provider_name) {
    "openai" => Some("https://api.openai.com/v1/models"),
    "venice" => Some("https://api.venice.ai/api/v1/models"),
    "groq" => Some("https://api.groq.com/openai/v1/models"),
    "glm" => Some("https://api.z.ai/api/paas/v4/models"),
    "qwen" => Some("https://dashscope.aliyuncs.com/compatible-mode/v1/models"),
    // ... 15+ more endpoints
}
```

China Region Endpoints:

The wizard supports region-specific endpoints for compliance:
- `glm-cn` → `https://open.bigmodel.cn/api/paas/v4/models`
- `moonshot-cn` → `https://api.moonshot.cn/v1/models`
- `qwen-intl` → `https://dashscope-intl.aliyuncs.com/compatible-mode/v1/models`
Sources: src/onboard/wizard.rs:882-908
Different providers use varying response schemas. The wizard implements provider-specific parsers:
OpenAI-compatible (majority):
```rust
fn parse_openai_compatible_model_ids(payload: &Value) -> Vec<String> {
    // Handles both { "data": [...] } and direct array [...] formats
    // Extracts model["id"] from each element
}
```

Gemini:

```rust
fn parse_gemini_model_ids(payload: &Value) -> Vec<String> {
    // Filters models by supportedGenerationMethods containing "generateContent"
    // Strips "models/" prefix from model["name"]
}
```

Ollama:

```rust
fn parse_ollama_model_ids(payload: &Value) -> Vec<String> {
    // Extracts model["name"] from { "models": [...] }
    // No filtering needed (local models are all usable)
}
```

Sources: src/onboard/wizard.rs:929-990
The wizard stores cached model lists in JSON format at workspace/state/models_cache.json:
```json
{
  "entries": [
    {
      "provider": "openrouter",
      "fetched_at_unix": 1704067200,
      "models": [
        "anthropic/claude-sonnet-4.6",
        "openai/gpt-5.2",
        "google/gemini-3-pro-preview",
        "..."
      ]
    },
    {
      "provider": "groq",
      "fetched_at_unix": 1704070800,
      "models": [
        "llama-3.3-70b-versatile",
        "mixtral-8x7b-instruct",
        "..."
      ]
    }
  ]
}
```

Cache entries persist across wizard runs and are shared by the `zeroclaw models refresh` command.
Sources: src/onboard/wizard.rs:1158-1248
After Step 3, the wizard produces a ChannelsConfig struct src/config/schema.rs:1965-2004:
```rust
pub struct ChannelsConfig {
    pub cli: bool, // Always true
    pub telegram: Option<TelegramConfig>,
    pub discord: Option<DiscordConfig>,
    pub slack: Option<SlackConfig>,
    pub mattermost: Option<MattermostConfig>,
    pub webhook: Option<WebhookConfig>,
    pub imessage: Option<IMessageConfig>,
    pub matrix: Option<MatrixConfig>,
    pub signal: Option<SignalConfig>,
    pub whatsapp: Option<WhatsAppConfig>,
    pub email: Option<EmailConfig>,
    pub irc: Option<IrcConfig>,
    pub lark: Option<LarkConfig>,
    pub dingtalk: Option<DingTalkConfig>,
    pub qq: Option<QQConfig>,
}
```

Each `Option<T>` is `None` if the user skipped that channel, or `Some(config)` if configured.
Channels that support progressive message updates (e.g., Telegram) include streaming parameters:
```rust
pub struct TelegramConfig {
    pub bot_token: String,
    pub allowed_users: Vec<String>,
    pub stream_mode: StreamMode,       // Off | Partial
    pub draft_update_interval_ms: u64, // Default: 1000ms
    pub mention_only: bool,            // Default: false
}

#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum StreamMode {
    #[default]
    Off,     // Send complete response only
    Partial, // Update draft message during generation
}
```

When `stream_mode = Partial`, the channel layer sends incremental updates to the messaging platform, creating a "typing animation" effect as the LLM generates tokens.
Sources: src/config/schema.rs:2006-2035
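One way `draft_update_interval_ms` could gate those incremental updates is a simple elapsed-time check. This helper is hypothetical; the channel layer's actual throttling may differ:

```rust
/// Hypothetical throttle: only push a new draft to the platform
/// once at least `interval_ms` has elapsed since the last update.
fn should_update_draft(last_update_ms: u64, now_ms: u64, interval_ms: u64) -> bool {
    now_ms.saturating_sub(last_update_ms) >= interval_ms
}
```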
After successful wizard completion, the user is prompted to start channels immediately src/onboard/wizard.rs:164-193:
```mermaid
flowchart LR
WizardDone["Wizard completed"] --> HasChannels{"Has configured<br/>channels?"}
HasChannels -->|No| Return["Return Config"]
HasChannels -->|Yes| HasKey{"Has API key?"}
HasKey -->|No| Return
HasKey -->|Yes| Prompt["Prompt:<br/>'Launch channels now?'"]
Prompt -->|No| Return
Prompt -->|Yes| SetEnv["Set env var:<br/>ZEROCLAW_AUTOSTART_CHANNELS=1"]
SetEnv --> PrintMessage["Print 'Starting<br/>channel server...'"]
PrintMessage --> Return
```
When ZEROCLAW_AUTOSTART_CHANNELS=1 is detected, main.rs automatically calls start_channels(config) after the wizard returns, bypassing the need for a separate zeroclaw channel start command.
Sources: src/onboard/wizard.rs:164-193
The wizard is invoked via the zeroclaw onboard command with optional flags:
```bash
# Interactive wizard (9 steps)
zeroclaw onboard --interactive

# Quick setup (0 prompts)
zeroclaw onboard --api-key sk-... --provider openrouter --memory sqlite

# Channels repair (1 step)
zeroclaw onboard --channels-only

# Models refresh (updates cache)
zeroclaw models refresh --provider openrouter --force
```

Flag Precedence:
| Flag Combination | Mode | Entry Point |
|---|---|---|
| `--interactive` | Interactive wizard | `run_wizard()` |
| `--channels-only` | Channels repair | `run_channels_repair_wizard()` |
| `--api-key`, `--provider`, `--memory` | Quick setup | `run_quick_setup()` |
| No flags | Quick setup (default) | `run_quick_setup()` |
Sources: [src/main.rs] (referenced for CLI parsing)
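The precedence in the table above can be sketched as a dispatch function. This assumes `--interactive` wins when combined with `--channels-only` (the table lists it first); the actual CLI parsing in src/main.rs may resolve conflicts differently:

```rust
#[derive(Debug, PartialEq)]
enum OnboardMode {
    Interactive,
    ChannelsRepair,
    QuickSetup,
}

/// Hypothetical dispatch mirroring the flag-precedence table:
/// --interactive first, then --channels-only, then quick setup
/// (which is also the no-flag default).
fn select_mode(interactive: bool, channels_only: bool) -> OnboardMode {
    if interactive {
        OnboardMode::Interactive
    } else if channels_only {
        OnboardMode::ChannelsRepair
    } else {
        OnboardMode::QuickSetup
    }
}
```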
The onboarding wizard provides a comprehensive first-run experience that:
- Creates workspace structure with identity files and state directories
- Configures LLM provider with live model discovery and intelligent caching
- Sets up communication channels with platform-specific credential collection
- Establishes secure tunneling for remote access (optional)
- Configures tool integrations (Composio vs sovereign mode)
- Detects physical hardware and configures device drivers
- Selects memory backend with appropriate hygiene settings
- Personalizes agent identity via `ProjectContext` scaffolding
- Generates complete configuration with encrypted secrets and subsystem defaults
The wizard's architecture balances ease-of-use (interactive mode) with automation (quick setup) while providing sophisticated features like model caching, OAuth detection, and hardware auto-discovery that make ZeroClaw production-ready in under 60 seconds.
Primary Sources:
- src/onboard/wizard.rs:1-3900 - Complete wizard implementation
- src/config/schema.rs:48-2200 - Configuration struct definitions
- src/channels/mod.rs:888-1103 - Workspace file scaffolding and system prompt construction