01 Overview
This document introduces ZeroClaw: its purpose, design philosophy, and high-level architecture. It explains what ZeroClaw is, why it exists, and how its major components fit together. For installation instructions and initial setup, see Getting Started. For detailed subsystem documentation, see Core Architecture and subsequent sections.
ZeroClaw is a lightweight, trait-driven AI agent runtime written in pure Rust. It provides a pluggable infrastructure for building autonomous AI assistants that can:
- Execute tools (shell commands, file operations, browser automation, hardware control)
- Communicate across multiple channels (Telegram, Discord, Slack, Email, etc.)
- Interact with 28+ LLM providers through a unified interface
- Maintain conversation memory with hybrid vector+keyword search
- Run on resource-constrained hardware (as low as 5MB RAM)
The runtime is designed for zero lock-in: every major subsystem is a trait, allowing implementations to be swapped via configuration changes without code modifications.
Key Characteristics:
| Attribute | Description |
|---|---|
| Language | 100% Rust (zero JavaScript/Python runtime overhead) |
| Binary Size | 3.4 MB - 8.8 MB (release build, stripped) |
| Memory Footprint | < 5 MB for CLI operations |
| Cold Start | < 10 ms for `--help`, < 20 ms for `status` |
| Hardware Support | ARM, x86, RISC-V; runs on $10 SBCs |
| Deployment | Single binary or Docker container |
Sources: README.md:1-98, Cargo.toml:1-189
ZeroClaw's architecture follows three core principles: trait-driven modularity, defense-in-depth security, and performance-first design.
First, every major subsystem implements a Rust trait, enabling configuration-driven behavior changes:
```rust
// Core traits (conceptual - actual definitions in codebase)
pub trait Provider { async fn chat(...) -> Result<String>; }
pub trait Channel { async fn listen(...) -> Result<()>; }
pub trait Tool { async fn execute(...) -> Result<ToolResult>; }
pub trait Memory { async fn store(...) -> Result<()>; }
pub trait RuntimeAdapter { async fn execute(...) -> Result<Output>; }
```

This allows swapping between:
- Providers: OpenAI ↔ Anthropic ↔ Ollama (28+ options)
- Channels: Telegram ↔ Discord ↔ CLI (13+ options)
- Memory: SQLite ↔ PostgreSQL ↔ Markdown (5+ options)
- Runtime: Native ↔ Docker sandboxed execution
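To make the swap mechanism concrete, here is a minimal, synchronous sketch of config-driven dispatch through a trait object. The real ZeroClaw traits are async; `EchoProvider` and `select_provider` are illustrative names, not actual APIs from the codebase.

```rust
// Simplified, synchronous sketch of trait-driven provider selection.
// The real Provider trait is async; these names are illustrative only.
trait Provider {
    fn chat(&self, prompt: &str) -> Result<String, String>;
}

struct EchoProvider;

impl Provider for EchoProvider {
    fn chat(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

// Config-driven dispatch: swapping backends is a config-string change,
// not a code change.
fn select_provider(name: &str) -> Result<Box<dyn Provider>, String> {
    match name {
        "echo" => Ok(Box::new(EchoProvider)),
        other => Err(format!("unknown provider: {other}")),
    }
}

fn main() {
    let provider = select_provider("echo").expect("provider should exist");
    println!("{}", provider.chat("hi").unwrap());
}
```

Because callers only hold a `Box<dyn Provider>`, a config change is enough to route requests to a different backend.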
Sources: README.md:301-321, src/channels/mod.rs:38-41
ZeroClaw enforces security at five layers:
| Layer | Implementation | Config Location |
|---|---|---|
| Network | Binds `127.0.0.1` by default, refuses `0.0.0.0` without tunnel | `gateway.host`, `gateway.allow_public_bind` |
| Authentication | 6-digit pairing code, bearer tokens (SHA-256 hashed) | `gateway.require_pairing`, `gateway.paired_tokens` |
| Authorization | `AutonomyLevel` (ReadOnly/Supervised/Full), allowlists | `autonomy.level`, `channels_config.*.allowed_users` |
| Isolation | Workspace scoping, 14 forbidden system dirs, Docker sandboxing | `autonomy.workspace_only`, `runtime.kind` |
| Data Protection | ChaCha20-Poly1305 secret encryption | `secrets.encrypt` |
Default Configuration (Secure):
- Gateway: localhost-only, pairing required
- Autonomy: `supervised` (requires approval for destructive actions)
- Filesystem: workspace-scoped, system directories blocked
- Channels: empty allowlist = deny all
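As a hedged sketch, the secure defaults above would look roughly like this in `config.toml`; the keys follow the config locations in the table, but the exact default values are assumptions for illustration:

```toml
# Illustrative sketch of the secure defaults described above.
[gateway]
host = "127.0.0.1"
require_pairing = true
allow_public_bind = false

[autonomy]
level = "supervised"
workspace_only = true
```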
Sources: README.md:380-403, src/config/schema.rs:502-597
Optimized for resource-constrained environments:
- No runtime dependencies: No Node.js, Python, or JVM required
- Fast cold starts: Command dispatch in < 20ms
- Low memory usage: CLI operations use < 5MB RAM
- Efficient compilation: `codegen-units=1` profile for 1 GB RAM devices
The release profile uses aggressive size optimization:
```toml
[profile.release]
opt-level = "z"     # Optimize for size
lto = "thin"        # Thin link-time optimization
codegen-units = 1   # Serial codegen (low-memory devices)
strip = true        # Remove debug symbols
panic = "abort"     # Reduce binary size
```

Sources: Cargo.toml:161-181, README.md:63-98
```mermaid
graph TB
    User["User"]
    subgraph "Entry Points"
        CLI["CLI Command<br/>(zeroclaw agent/gateway/daemon)"]
        Gateway["HTTP Gateway<br/>(127.0.0.1:3000)"]
        Channels["Channel Listeners<br/>(Telegram, Discord, etc.)"]
    end
    subgraph "Core Runtime"
        Config["Config::load_or_init()<br/>~/.zeroclaw/config.toml"]
        Agent["Agent Core<br/>run_tool_call_loop()"]
        Security["SecurityPolicy<br/>AutonomyLevel"]
    end
    subgraph "Subsystems"
        Provider["Box<dyn Provider><br/>create_resilient_provider()"]
        Memory["Arc<dyn Memory><br/>SQLite/Postgres/Lucid"]
        Tools["Vec<Box<dyn Tool>><br/>shell/file/git/browser/etc"]
        Runtime["Box<dyn RuntimeAdapter><br/>Native/Docker"]
    end
    User --> CLI
    User --> Gateway
    User --> Channels
    CLI --> Config
    Gateway --> Config
    Channels --> Config
    Config --> Agent
    Config --> Security
    Config --> Provider
    Config --> Memory
    Config --> Tools
    Config --> Runtime
    Agent --> Provider
    Agent --> Tools
    Agent --> Memory
    Security -.enforces.-> Agent
    Tools --> Runtime
```
Key Code Entities:
| Component | Primary Location | Key Type/Function |
|---|---|---|
| CLI Entry | src/main.rs | `main()`, `Commands` enum |
| Config Loading | src/config/schema.rs:48-144 | `Config` struct, `load_or_init()` |
| Agent Loop | src/agent/loop_.rs | `run_tool_call_loop()` |
| Provider Factory | src/providers/mod.rs | `create_resilient_provider()` |
| Channel Dispatcher | src/channels/mod.rs:816-844 | `run_message_dispatch_loop()` |
| Gateway Server | src/gateway/mod.rs | `axum::Router`, `/pair`, `/webhook` |
Sources: src/channels/mod.rs:33-51, src/config/schema.rs:48-144, README.md:301-321
The Config struct orchestrates all subsystems:
```mermaid
classDiagram
    class Config {
        +workspace_dir: PathBuf
        +config_path: PathBuf
        +api_key: Option~String~
        +default_provider: Option~String~
        +default_model: Option~String~
        +channels_config: ChannelsConfig
        +memory: MemoryConfig
        +gateway: GatewayConfig
        +autonomy: AutonomyConfig
        +runtime: RuntimeConfig
        +secrets: SecretsConfig
        +browser: BrowserConfig
        +composio: ComposioConfig
        +load_or_init() Result~Config~
        +save() Result
    }
    class ChannelsConfig {
        +telegram: Option~TelegramConfig~
        +discord: Option~DiscordConfig~
        +slack: Option~SlackConfig~
        +email: Option~EmailChannel~
    }
    class MemoryConfig {
        +backend: String
        +auto_save: bool
        +embedding_provider: String
        +vector_weight: f64
        +keyword_weight: f64
    }
    class GatewayConfig {
        +port: u16
        +host: String
        +require_pairing: bool
        +allow_public_bind: bool
        +paired_tokens: Vec~String~
    }
    class AutonomyConfig {
        +level: AutonomyLevel
        +workspace_only: bool
        +allowed_commands: Vec~String~
        +forbidden_paths: Vec~String~
    }
    class RuntimeConfig {
        +kind: String
        +docker: DockerConfig
    }
    Config --> ChannelsConfig
    Config --> MemoryConfig
    Config --> GatewayConfig
    Config --> AutonomyConfig
    Config --> RuntimeConfig
```
Configuration File Location: ~/.zeroclaw/config.toml
Priority Order: Environment variables > config.toml > Built-in defaults
Sources: src/config/schema.rs:48-144, README.md:492-599
This diagram shows how a message from a channel reaches the agent core and executes tools:
```mermaid
sequenceDiagram
    participant Channel as "TelegramChannel<br/>(or Discord/Slack)"
    participant Dispatcher as "run_message_dispatch_loop()<br/>channels/mod.rs:816"
    participant Agent as "run_tool_call_loop()<br/>agent/loop_.rs"
    participant Provider as "ReliableProvider<br/>providers/reliable.rs"
    participant Tools as "Tool::execute()<br/>tools/*"
    participant Memory as "Memory::store()<br/>memory/*"
    Channel->>Dispatcher: "ChannelMessage<br/>{sender, content, channel}"
    Dispatcher->>Dispatcher: "Acquire semaphore<br/>(max_in_flight_messages)"
    Dispatcher->>Agent: "process_channel_message()"
    Agent->>Memory: "recall(user_msg, 5, None)"
    Memory-->>Agent: "Vec<MemoryEntry>"
    Agent->>Agent: "Build ChatMessage history"
    loop Tool Call Loop (max_tool_iterations)
        Agent->>Provider: "chat(messages, tools)"
        Provider-->>Agent: "Response + tool_calls"
        alt Has tool_calls
            loop For each tool_call
                Agent->>Tools: "execute(name, args)"
                Tools-->>Agent: "ToolResult"
            end
            Agent->>Agent: "Append results to history"
        else Text only
            Agent->>Agent: "Break loop"
        end
    end
    Agent->>Memory: "store(key, content, Conversation)"
    Agent->>Channel: "send(SendMessage)"
```
Key Functions:
- `process_channel_message()` (src/channels/mod.rs:556-814)
- `run_tool_call_loop()` (src/agent/loop_.rs)
- `build_memory_context()` (src/channels/mod.rs:443-469)
- Tool execution with security checks (src/channels/mod.rs:33-51)
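The loop structure above can be sketched as follows. This is a heavily simplified, synchronous stand-in for the real async `run_tool_call_loop()`; the `Reply` enum and closure shapes are illustrative, not actual types from the codebase.

```rust
// Simplified, synchronous sketch of the tool-call loop; the real loop
// is async and maintains full chat history with roles. Names are
// illustrative.
enum Reply {
    Text(String),
    ToolCall(String),
}

fn tool_call_loop(
    mut chat: impl FnMut(&[String]) -> Reply,
    mut execute: impl FnMut(&str) -> String,
    max_iterations: usize,
) -> Option<String> {
    let mut history: Vec<String> = Vec::new();
    for _ in 0..max_iterations {
        match chat(&history) {
            // A plain text reply ends the loop.
            Reply::Text(answer) => return Some(answer),
            // Tool results are appended to the history and the loop continues.
            Reply::ToolCall(name) => history.push(execute(&name)),
        }
    }
    None // iteration cap (max_tool_iterations) reached
}
```

The iteration cap mirrors `max_tool_iterations` in the diagram: a model that keeps requesting tools cannot loop forever.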
Sources: src/channels/mod.rs:556-814, README.md:301-321
```mermaid
graph TB
    Factory["create_resilient_provider()<br/>providers/mod.rs"]
    Router["RouterProvider<br/>Model-specific routing"]
    Reliable["ReliableProvider<br/>Retries + Fallbacks<br/>providers/reliable.rs"]
    subgraph "Provider Implementations"
        OpenAI["OpenAiProvider"]
        Anthropic["AnthropicProvider"]
        OpenRouter["OpenRouterProvider"]
        Ollama["OllamaProvider"]
        Compatible["OpenAiCompatibleProvider<br/>(20+ APIs)"]
    end
    Factory --> Router
    Router --> Reliable
    Reliable --> OpenAI
    Reliable --> Anthropic
    Reliable --> OpenRouter
    Reliable --> Ollama
    Reliable --> Compatible
    Reliable -.retries.-> Reliable
    Reliable -.key rotation.-> Reliable
    Reliable -.model fallback.-> Reliable
```
Provider Factory Flow:
1. `create_resilient_provider()` reads `default_provider` from config
2. Wraps the base provider with `ReliableProvider` (retries, exponential backoff)
3. Optionally adds `RouterProvider` for hint-based routing (e.g., `hint:code-heavy` → Codex)
4. Resolves credentials: explicit param → provider env var → generic env var
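The credential-resolution order in the last step can be sketched like this. The function and the generic variable name `ZEROCLAW_API_KEY` are assumptions for illustration; only the precedence order comes from the docs above.

```rust
use std::env;

// Hedged sketch of credential resolution: explicit parameter first,
// then a provider-specific env var, then a generic fallback.
// "ZEROCLAW_API_KEY" is an assumed name, not a confirmed variable.
fn resolve_api_key(explicit: Option<&str>, provider_var: &str) -> Option<String> {
    explicit
        .map(str::to_owned)
        .or_else(|| env::var(provider_var).ok())
        .or_else(|| env::var("ZEROCLAW_API_KEY").ok())
}
```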
Built-in Providers (28+):
- Direct API: `anthropic`, `openai`, `gemini`, `mistral`, `deepseek`, `xai`, `groq`
- Gateways: `openrouter`, `venice`, `astrai`, `nvidia`
- Local: `ollama`
- China providers: `moonshot`, `glm`, `zai`, `qwen`, `minimax`
- Custom: `custom:https://your-api.com`, `anthropic-custom:https://your-api.com`
Sources: src/providers/mod.rs, src/providers/reliable.rs, README.md:308-310
Tools are discovered at runtime and passed to the agent loop:
| Category | Examples | Tool Trait Implementation |
|---|---|---|
| Core | `shell`, `file_read`, `file_write`, `memory_store`, `memory_recall` | src/tools/shell.rs, src/tools/file.rs, src/tools/memory.rs |
| Scheduling | `cron_add`, `cron_list`, `cron_remove` | src/tools/cron.rs |
| Version Control | `git_status`, `git_commit`, `git_push` | src/tools/git.rs |
| Browser | `browser_open`, `browser_click`, `browser_screenshot` | src/tools/browser.rs |
| HTTP | `http_request`, `web_search` | src/tools/http_request.rs, src/tools/web_search.rs |
| Integration | `composio_execute` (200+ apps), `delegate` (sub-agents) | src/tools/composio.rs, src/tools/delegate.rs |
| Hardware | `gpio_read`, `gpio_write`, `arduino_upload`, `hardware_memory_read` | src/tools/gpio.rs, src/tools/arduino.rs |
Tool Execution Flow:
1. Agent calls `Tool::execute(name, args)` (src/tools/mod.rs)
2. Security policy checks `can_act()` (src/security/mod.rs)
3. RuntimeAdapter executes (native subprocess or Docker container) (src/runtime/mod.rs)
4. Result is formatted and appended to the conversation history
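Steps 1 and 2 can be sketched as a policy gate in front of tool dispatch. This is a simplified, synchronous sketch: the real `Tool` trait is async, `SecurityPolicy` lives in src/security/, and the `destructive` flag and method shapes here are assumptions.

```rust
// Simplified sketch of the policy gate before tool execution.
// Types and method shapes are illustrative, not the real API.
#[derive(Clone, Copy, PartialEq)]
enum AutonomyLevel {
    ReadOnly,
    Supervised,
    Full,
}

struct SecurityPolicy {
    level: AutonomyLevel,
}

impl SecurityPolicy {
    // Mirrors the can_act() check: a read-only agent may not run
    // destructive tools.
    fn can_act(&self, destructive: bool) -> bool {
        !(self.level == AutonomyLevel::ReadOnly && destructive)
    }
}

fn run_tool(policy: &SecurityPolicy, name: &str, destructive: bool) -> Result<String, String> {
    if !policy.can_act(destructive) {
        return Err(format!("policy denied tool '{name}'"));
    }
    // A real implementation would dispatch to a RuntimeAdapter here.
    Ok(format!("executed {name}"))
}
```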
Sources: src/tools/mod.rs, README.md:308-321
All channels implement the Channel trait:
```rust
// Trait definition (conceptual)
pub trait Channel: Send + Sync {
    fn name(&self) -> &str;
    async fn listen(&self, tx: mpsc::Sender<ChannelMessage>) -> Result<()>;
    async fn send(&self, msg: &SendMessage) -> Result<()>;
}
```

Available Channels (13+):
| Channel | Config Section | Allowlist Field | Transport |
|---|---|---|---|
| CLI | (always available) | N/A | stdin/stdout |
| Telegram | `[channels_config.telegram]` | `allowed_users` | Bot API polling |
| Discord | `[channels_config.discord]` | `allowed_users` | WebSocket Gateway |
| Slack | `[channels_config.slack]` | `allowed_users` | HTTP polling |
| Matrix | `[channels_config.matrix]` | `allowed_rooms` | E2EE sync |
| Email | `[channels_config.email]` | `allowed_senders` | IMAP IDLE + SMTP |
| WhatsApp | `[channels_config.whatsapp]` | `allowed_numbers` | Cloud API webhook |
| Mattermost | `[channels_config.mattermost]` | `allowed_users` | REST API polling |
| Lark/Feishu | `[channels_config.lark]` | `allowed_users` | WebSocket + ProtoBuf |
Channel Supervision:
- Each channel runs in a separate task via `spawn_supervised_listener()` (src/channels/mod.rs:471-509)
- Automatic restart with exponential backoff on failure
- Health status tracked in the `crate::health` module
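The restart backoff can be sketched as a simple doubling delay with a cap. The base delay (1 s) and cap (60 s) here are assumed values for illustration; the real supervisor in src/channels/mod.rs may use different constants.

```rust
use std::time::Duration;

// Hedged sketch of exponential restart backoff: delay doubles per
// consecutive failure, capped at an assumed maximum of 60 seconds.
fn restart_delay(attempt: u32) -> Duration {
    let secs = 2u64.saturating_pow(attempt).min(60);
    Duration::from_secs(secs)
}
```

A successful run would typically reset `attempt` to zero so a healthy channel restarts quickly after a one-off failure.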
Sources: src/channels/mod.rs:1-31, src/channels/traits.rs, README.md:311
The Memory trait supports multiple persistence strategies:
| Backend | Storage | Search | Use Case |
|---|---|---|---|
| `sqlite` | Local SQLite DB | Hybrid (FTS5 + vector) | Default, full-featured |
| `postgres` | PostgreSQL | SQL-based | Multi-instance deployments |
| `lucid` | External `lucid` CLI | Lucid-managed | Integration with existing Lucid setups |
| `markdown` | Workspace `.md` files | File-based recall | Human-readable, Git-friendly |
| `none` | No-op | N/A | Stateless agents |
SQLite Hybrid Search Architecture:
- FTS5 virtual table for keyword search (BM25 scoring)
- BLOB embeddings for vector similarity (cosine distance)
- Custom weighted merge function combines both scores
- Embedding cache table with LRU eviction
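The weighted merge in step 3 can be sketched as a linear combination of the two scores. This is a hedged sketch: the actual merge in src/memory/sqlite.rs may normalize BM25 and cosine scores before combining them.

```rust
// Hedged sketch of the weighted score merge: combine vector similarity
// and keyword (BM25) score using the configured weights.
fn hybrid_score(vector_sim: f64, keyword_score: f64, vector_weight: f64, keyword_weight: f64) -> f64 {
    vector_weight * vector_sim + keyword_weight * keyword_score
}
```

With the example config below (`vector_weight = 0.7`, `keyword_weight = 0.3`), semantic matches dominate but exact keyword hits still contribute.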
Configuration Example:
```toml
[memory]
backend = "sqlite"
auto_save = true
embedding_provider = "openai"  # or "none"
vector_weight = 0.7
keyword_weight = 0.3
```

Sources: src/memory/mod.rs, src/memory/sqlite.rs, README.md:330-377
| Layer | Technology | Justification |
|---|---|---|
| Language | Rust 2021 edition | Memory safety, zero-cost abstractions, small binaries |
| Async Runtime | Tokio (feature-optimized) | Mature, production-tested, minimal footprint |
| HTTP Client | `reqwest` (rustls) | TLS without OpenSSL system dependency |
| HTTP Server | `axum` | Type-safe routing, minimal overhead |
| Serialization | `serde` + `serde_json` | De facto standard, battle-tested |
| Database | `rusqlite` (bundled) | Zero external dependencies, cross-platform |
| Encryption | ChaCha20-Poly1305 (`chacha20poly1305` crate) | AEAD cipher for secret storage |
| WebSocket | `tokio-tungstenite` (rustls) | Discord Gateway, Lark long-connection |
| Observability | `tracing` + `prometheus` | Structured logging, metrics export |
Key Dependencies:
- `clap` for CLI argument parsing (Cargo.toml:19)
- `dialoguer` for interactive prompts (Cargo.toml:94)
- `matrix-sdk` for Matrix E2EE support (Cargo.toml:29)
- `fantoccini` (optional) for Rust-native browser automation (Cargo.toml:54)
Sources: Cargo.toml:1-189, README.md:161-165
```mermaid
flowchart TD
    Start([Program Start]) --> ParseArgs["Parse CLI Args<br/>(clap::Parser)"]
    ParseArgs --> CheckCmd{Command Type}
    CheckCmd -->|onboard| Onboard["run_wizard() or<br/>run_quick_setup()<br/>onboard/wizard.rs"]
    CheckCmd -->|agent/gateway/daemon| LoadConfig["Config::load_or_init()<br/>config/schema.rs"]
    Onboard --> SaveConfig["config.save()<br/>~/.zeroclaw/config.toml"]
    LoadConfig --> CheckFile{config.toml<br/>exists?}
    CheckFile -->|No| InitDefaults["Create with<br/>default values"]
    CheckFile -->|Yes| ParseTOML["Parse TOML<br/>toml::from_str()"]
    ParseTOML --> CheckEncrypt{secrets.encrypt?}
    CheckEncrypt -->|Yes| Decrypt["SecretStore::decrypt()<br/>ChaCha20Poly1305<br/>security/secret_store.rs"]
    CheckEncrypt -->|No| ApplyEnv
    Decrypt --> ApplyEnv["Apply Env Overrides<br/>OPENROUTER_API_KEY, etc."]
    InitDefaults --> ApplyEnv
    ApplyEnv --> InitSystems["Initialize Subsystems"]
    InitSystems --> CreateProvider["create_resilient_provider()<br/>providers/mod.rs"]
    InitSystems --> CreateMemory["Memory backend<br/>memory/mod.rs"]
    InitSystems --> CreateTools["Build tool registry<br/>tools/mod.rs"]
    InitSystems --> CreateSecurity["SecurityPolicy::new()<br/>security/mod.rs"]
    SaveConfig --> RunAgent["Execute Command"]
    CreateProvider --> RunAgent
    CreateMemory --> RunAgent
    CreateTools --> RunAgent
    CreateSecurity --> RunAgent
```
Configuration Priority (Highest to Lowest):
1. Environment variables (e.g., `OPENROUTER_API_KEY`, `OLLAMA_API_KEY`)
2. `~/.zeroclaw/config.toml` (user-edited values)
3. Built-in defaults (defined in `impl Default for Config`)
Workspace Selection:
- Default: `~/.zeroclaw/workspace`
- Override: `ZEROCLAW_WORKSPACE=/path/to/workspace`
- Active marker: `~/.zeroclaw/active_workspace.toml` (multi-profile support)
Sources: src/config/schema.rs:48-144, src/onboard/wizard.rs:61-196, src/security/secret_store.rs
This overview has introduced ZeroClaw's purpose, design philosophy, and high-level architecture. For hands-on setup:
- Installation & Setup: See Getting Started for installation instructions and the onboarding wizard
- Configuration Details: See Configuration for complete config.toml reference
- Architecture Deep-Dive: See Core Architecture for subsystem internals
- Security Model: See Security Model for the five-layer defense architecture
To start using ZeroClaw immediately:
```shell
# Quick setup (generates default config)
zeroclaw onboard --api-key sk-... --provider openrouter

# Interactive wizard
zeroclaw onboard --interactive

# First agent interaction
zeroclaw agent -m "Hello, ZeroClaw!"
```

Sources: README.md:169-252, src/onboard/wizard.rs:61-196