Your AI conversations compile themselves into a searchable knowledge base.
Adapted from Karpathy's LLM Knowledge Base architecture, but instead of clipping web articles, the raw data is your own conversations with Claude Code. When a session ends (or auto-compacts mid-session), Claude Code hooks capture the conversation transcript and spawn a background process that uses the Claude Agent SDK to extract the important stuff - decisions, lessons learned, patterns, gotchas - and appends it to a daily log. You then compile those daily logs into structured, cross-referenced knowledge articles organized by concept. Retrieval uses a simple index file instead of RAG - no vector database, no embeddings, just markdown.
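At its core, the capture step is just "append whatever the SDK extracted to today's log." A minimal sketch, assuming hypothetical names and file layout (this is not the repo's actual `flush.py`):

```python
# Hedged sketch of the capture step: append extracted knowledge to a daily log.
# The function name, section heading, and file layout are assumptions;
# see AGENTS.md in the repo for the real format.
from datetime import date
from pathlib import Path

def append_to_daily_log(extracted: str, root: Path = Path("daily")) -> Path:
    """Append one session's extracted notes to daily/YYYY-MM-DD.md."""
    root.mkdir(parents=True, exist_ok=True)
    log = root / f"{date.today():%Y-%m-%d}.md"
    # Write a top-level header only when the day's log is first created
    header = "" if log.exists() else f"# Daily log {date.today():%Y-%m-%d}\n"
    with log.open("a", encoding="utf-8") as f:
        f.write(header + "\n## Session flush\n" + extracted.rstrip() + "\n")
    return log
```

Because the log is append-only markdown, multiple flushes per day (session ends plus mid-session compactions) simply stack up as sections for the compiler to read later.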
Anthropic has clarified that personal use of the Claude Agent SDK is covered under your existing Claude subscription (Max, Team, or Enterprise) - no separate API credits needed. Unlike OpenClaw, which requires API billing for its memory flush, this runs on your subscription.
Tell your AI coding agent:
"Clone https://github.com/coleam00/claude-memory-compiler into this project. Set up the Claude Code hooks so my conversations automatically get captured into daily logs, compiled into a knowledge base, and injected back into future sessions. Read the AGENTS.md for the full technical reference on how everything works."
The agent will:

- Clone the repo and run `uv sync` to install dependencies
- Copy `.claude/settings.json` into your project (or merge the hooks into your existing settings)
- The hooks activate automatically next time you open Claude Code
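For orientation, the hooks section being merged into `.claude/settings.json` looks roughly like this. The event names and `{"type": "command"}` shape follow Claude Code's hooks configuration format; the exact commands are assumptions (`flush.py` comes from the pipeline described here, `inject.py` is a placeholder for whatever the SessionStart hook actually runs), so treat the repo's own settings file as authoritative:

```json
{
  "hooks": {
    "SessionEnd": [
      { "hooks": [{ "type": "command", "command": "uv run python scripts/flush.py" }] }
    ],
    "PreCompact": [
      { "hooks": [{ "type": "command", "command": "uv run python scripts/flush.py" }] }
    ],
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "uv run python scripts/inject.py" }] }
    ]
  }
}
```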
From there, your conversations start accumulating. After 6 PM local time, the next session flush automatically triggers compilation of that day's logs into knowledge articles. You can also run `uv run python scripts/compile.py` manually at any time.
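The end-of-day trigger amounts to a local-time check at flush time. A sketch, where the function name is an assumption (the 6 PM threshold is from the description above):

```python
# Sketch of the end-of-day compile trigger (name is hypothetical).
from datetime import datetime

def should_compile(now=None):
    """True when a flush should also kick off compilation (after 6 PM local)."""
    now = now or datetime.now()
    return now.hour >= 18
```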
```
Conversation -> SessionEnd/PreCompact hooks -> flush.py extracts knowledge
  -> daily/YYYY-MM-DD.md -> compile.py -> knowledge/concepts/, connections/, qa/
  -> SessionStart hook injects index into next session -> cycle repeats
```
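The last step, index injection, works through the JSON a SessionStart hook prints to stdout. A hedged sketch: the `hookSpecificOutput`/`additionalContext` payload shape follows Claude Code's documented hook output, but the wrapper function and index path are assumptions:

```python
# Hypothetical sketch of a SessionStart hook that injects the knowledge index.
import json
from pathlib import Path

def session_start_payload(index_path="knowledge/index.md"):
    """Build the JSON a SessionStart hook prints to add context to the new session."""
    index = Path(index_path)
    if not index.exists():
        return None  # nothing compiled yet; inject nothing
    return {
        "hookSpecificOutput": {
            "hookEventName": "SessionStart",
            "additionalContext": "Knowledge base index:\n"
            + index.read_text(encoding="utf-8"),
        }
    }

if __name__ == "__main__":
    payload = session_start_payload()
    if payload is not None:
        print(json.dumps(payload))
```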
- Hooks capture conversations automatically (session end + pre-compaction safety net)
- `flush.py` calls the Claude Agent SDK to decide what's worth saving, and after 6 PM triggers end-of-day compilation automatically
- `compile.py` turns daily logs into organized concept articles with cross-references (triggered automatically or run manually)
- `query.py` answers questions using index-guided retrieval (no RAG needed at personal scale)
- `lint.py` runs 7 health checks (broken links, orphans, contradictions, staleness)
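A broken-link check, one of the structural lints, needs nothing beyond the filesystem. A sketch of how such a check could work (this is illustrative, not the repo's actual `lint.py`):

```python
# Hypothetical sketch of one structural lint: markdown links whose target is missing.
import re
from pathlib import Path

# Matches [text](target) and [text](target#anchor), capturing the file part
LINK = re.compile(r"\[[^\]]*\]\(([^)#]+)(?:#[^)]*)?\)")

def broken_links(root: Path) -> list[tuple[Path, str]]:
    """Return (article, target) pairs for relative links that resolve to nothing."""
    problems = []
    for article in root.rglob("*.md"):
        for target in LINK.findall(article.read_text(encoding="utf-8")):
            if target.startswith(("http://", "https://")):
                continue  # only check local cross-references
            if not (article.parent / target).exists():
                problems.append((article, target))
    return problems
```

Checks like this are "free" in the sense that they run without any LLM calls, which is presumably why the repo offers a structural-only lint mode.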
```
uv run python scripts/compile.py                        # compile new daily logs
uv run python scripts/query.py "question"               # ask the knowledge base
uv run python scripts/query.py "question" --file-back   # ask + save answer back
uv run python scripts/lint.py                           # run health checks
uv run python scripts/lint.py --structural-only         # free structural checks only
```

Karpathy's insight: at personal scale (50-500 articles), an LLM reading a structured index.md outperforms vector similarity. The LLM understands what you're really asking; cosine similarity just finds similar words. RAG becomes necessary at roughly 2,000+ articles, when the index exceeds the context window.
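Index-guided retrieval then reduces to prompt construction: paste the whole index and let the model choose which articles to read. A minimal sketch, where the function name and prompt wording are assumptions:

```python
# Hypothetical sketch of index-guided retrieval: no embeddings, just the index
# in context. The prompt wording and paths are assumptions.
from pathlib import Path

def retrieval_prompt(question: str, index_path: Path = Path("knowledge/index.md")) -> str:
    """Build a single prompt that lets the model pick articles itself."""
    index = index_path.read_text(encoding="utf-8")
    return (
        "Here is the index of a personal knowledge base:\n\n"
        f"{index}\n\n"
        f"Question: {question}\n"
        "List the article paths most relevant to the question, then answer "
        "using only those articles."
    )
```

At a few hundred articles the entire index fits comfortably in context, which is exactly the regime where this beats cosine similarity; past the point where it no longer fits, you'd need RAG.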
See AGENTS.md for the complete technical reference: article formats, hook architecture, script internals, cross-platform details, costs, and customization options. AGENTS.md is designed to give an AI agent everything it needs to understand, modify, or rebuild the system.