Vision: An autonomous agent that transforms natural language into live automations.
"Tell it what to automate. It builds the workflow."
yagr ← THIS PROJECT (agent + product layer)
│ "Tell it what to automate."
│
├── V1 backend: n8n ← n8n instance + n8n-as-code packages as bridge
│ Requires: n8n instance + @yagr/skills + @yagr/transformer + @yagr/cli
│ Ships first. Proven. 537 nodes.
│
└── V2 backend: yagr-engine ← REPLACES n8n (not a complement)
Same integrations (Slack, Sheets, Twilio...) but:
code-first (@node/@links/>>), AI-native (LibCST), self-contained.
yagr-engine = n8n + n8n-as-code fused into one thing.
yagr is the product — what users interact with.
yagr-engine replaces n8n — same job, AI-native architecture.
The two are connected through an Engine interface so migrating from n8n → yagr-engine is a config change, not a rewrite.
Implementing the agent itself as a monolithic n8n workflow is a trap:
- Static by nature: The agent loop is a fixed sequence of nodes. Adding a reasoning step means rewiring JSON by hand.
- No real control flow: n8n's switch/if nodes don't give you graph cycles, backtracking, or conditional tool retry with state.
- No type safety: Everything is JS strings in Code Nodes. No compile-time guarantees.
- Debugging is blind: n8n's execution log shows node outputs, not agent reasoning traces.
- Vendor lock-in: The agent logic IS n8n. You can't run it headless, test it in CI, or swap the runtime.
Our agent runs as a TypeScript program that uses an execution engine for automations — not as its own brain. Today that engine is n8n. Tomorrow it's yagr-engine.
A yagr is something that is simultaneously a whole in itself and a part of a larger system.
Each node is a yagr: self-contained (it does one thing well), but composable (it connects to others to form workflows). A workflow is itself a yagr: a complete automation, but also a building block in a larger system.
Our agent doesn't reinvent tools. It doesn't write custom HTTP calls for Slack or build ad-hoc integrations. It composes existing yagrs (nodes) into new wholes (workflows). The agent's differentiator is that its tool palette is the entire node ecosystem — grounded in validated schemas.
- V1 (n8n): 537 n8n nodes, typed via the ontology in `@yagr/skills`. Requires n8n + a bridge layer (the n8n-as-code packages).
- V2 (yagr-engine): Same integrations (Slack, Sheets, Twilio...) reimplemented as `@node(type="slack.message")` library nodes. Code-first, AI-native, self-contained. No n8n instance needed.
The user sees no difference between V1 and V2 — same automations, same capabilities. The difference is under the hood: yagr-engine is n8n + n8n-as-code fused into a single runtime designed for AI from day one.
This is profoundly different from an approach where the agent builds everything from generic HTTP calls, ignoring that purpose-built nodes already exist.
Yagr should stay focused on one job: turn intent into automation.
That means we do not expand V1 into a generic assistant with reminders, notes, or chat-memory features as primary product surfaces. Those are distractions unless they directly serve workflow creation, inspection, evolution, or operation.
The key insight is that generated workflows are already durable memory:
- A workflow is a persisted interpretation of the user's intent
- Its topology is memory of how the problem was solved
- Its configuration is memory of what matters operationally
- Its execution history is memory of what happened over time
So Yagr does not need a separate "memory product" to be useful. The workflows themselves are executable memories.
This creates the recursive loop:
User intent
→ Yagr generates workflow
→ workflow persists intent as executable structure
→ Yagr can inspect, explain, modify, and extend that workflow later
→ the generated artifact becomes part of Yagr's future context
In other words: Yagr creates automations, and those automations become the long-term memory Yagr can talk to, reason about, and evolve.
┌──────────────────────────────────────────────────────┐
│ Gateway Layer │
│ (Telegram, Web UI, CLI, API) │
│ Simple I/O. Stateless message routing. │
├──────────────────────────────────────────────────────┤
│ Agent Layer │
│ (Reasoning, planning, tool selection) │
│ Stateful graph. Multi-step. Interruptible. │
├──────────────────────────────────────────────────────┤
│ Engine Interface │
│ Abstract contract: listNodes, generateWorkflow, │
│ validate, deploy, listWorkflows, manageWorkflow │
├────────────────────────┬─────────────────────────────┤
│ N8nEngine (V1) │ YagrEngine (V2) │
│ Skills + Transformer │ yagr-engine Python core │
│ + CLI sync │ + native runner │
│ Bridge to n8n │ Self-contained (IS the │
│ instance │ runtime — no n8n needed) │
├────────────────────────┴─────────────────────────────┤
│ Execution Runtime │
│ V1: n8n instance │ V2: yagr-engine runner │
│ (external process) │ (same integrations, │
│ │ AI-native architecture) │
└──────────────────────────────────────────────────────┘
Each layer has a single responsibility. The agent never talks to the runtime directly — it goes through the Engine interface.
| Framework | Stars | TypeScript | Agent loops | State machine | MCP support | Maturity |
|---|---|---|---|---|---|---|
| Vercel AI SDK | 22.6K | Native | ToolLoopAgent | No (linear) | Partial | Production |
| LangGraph JS | 2.6K | Native | Full graph cycles | Yes (StateGraph) | Via tools | Production |
| Mastra | 22K | Native | Yes + workflows | `.then()`/`.branch()` | Native server | Growing fast |
Why Vercel AI SDK over LangGraph or Mastra:
- Lightweight and composable — It's a toolkit, not a framework. It doesn't impose an architecture, agent lifecycle, or deployment model. We compose what we need.
- Provider-agnostic from day 1 — `model: 'anthropic/claude-sonnet-4'` or `model: 'openai/gpt-5'`. One interface, swap providers. Users aren't locked into one LLM.
- Structured outputs are first-class — `Output.object({ schema: z.object({...}) })` is exactly what we need for generating validated workflow specifications.
- Streaming is production-grade — Token-by-token streaming, tool call streaming, partial results. Essential for a good UX when the agent is reasoning.
- No baggage — LangGraph brings the LangChain ecosystem (heavy, opinionated). Mastra brings its own server, storage, deployers, and workflow engine (we already have ours). Vercel AI SDK brings... function calls. That's it.
- 22.6K stars, 92K dependents, 704 contributors — Battle-tested. Not going anywhere.
What LangGraph would give us that we DON'T need:
- StateGraph with cycles → Our agent loop is simple: reason → plan → generate → validate → deploy. A `while` loop with tool calls handles this.
- Checkpointing/persistence → We build this ourselves (simpler, in our DB schema).
- LangGraph Platform → We don't want their cloud, we're self-hosted.
What Mastra would give us that we DON'T need:
- Its own workflow engine → We already have n8n for that.
- Its own agent server with API routes → We have our own gateway.
- Its own RAG and memory system → We build this to our needs.
- Enterprise licensing complexity → We want clean Apache 2.0.
| From | What we adopt | How |
|---|---|---|
| Vercel AI SDK | `generateText`, `ToolLoopAgent`, structured outputs, streaming, provider abstraction | Direct dependency (`ai`, `@ai-sdk/anthropic`, `@ai-sdk/openai`) |
| LangGraph | The concept of agent-as-graph with state transitions | Inspiration. We implement our own lightweight state machine for the planning/execution loop |
| Mastra | The concept of MCP server authoring | We already expose `npx n8nac skills mcp`. We keep our own implementation |
| Existing plugin work | The pattern of gateway → plugin → tool → context injection | This repo already implements that pattern. The agent layer sits between gateway and tools |
packages/yagr/
├── src/
│ ├── index.ts # Public API exports
│ ├── agent.ts # Core YagrAgent class
│ ├── engine/
│ │ ├── engine.ts # Engine interface (abstract contract)
│ │ ├── n8n-engine.ts # V1: n8n adapter (Skills + Transformer + CLI)
│ │ └── yagr-engine.ts # V2: yagr-engine adapter (stub, future)
│ ├── tools/ # Vercel AI SDK tool definitions
│ │ ├── search-nodes.ts # engine.searchNodes()
│ │ ├── node-info.ts # engine.nodeInfo()
│ │ ├── search-templates.ts # engine.searchTemplates()
│ │ ├── generate-workflow.ts # engine.generateWorkflow()
│ │ ├── validate.ts # engine.validate()
│ │ ├── deploy.ts # engine.deploy()
│ │ ├── list-workflows.ts # engine.listWorkflows()
│ │ └── manage-workflow.ts # engine.activate/deactivate/delete
│ ├── memory/
│ │ ├── conversation.ts # Chat history (simple, in-process)
│ │ └── workflow-registry.ts # Tracks what the agent has deployed
│ └── gateway/
│ ├── telegram.ts # Telegram Bot API adapter
│ ├── web.ts # Simple HTTP/WebSocket gateway
│ └── types.ts # Gateway interface (input/output contract)
├── package.json
├── tsconfig.json
└── BLUEPRINT.md # This file
The central abstraction that makes the backend swappable:
interface Engine {
// Knowledge
searchNodes(query: string): Promise<NodeSummary[]>;
nodeInfo(type: string): Promise<NodeSchema>;
searchTemplates(query: string): Promise<Template[]>;
// Generation
generateWorkflow(spec: WorkflowSpec): Promise<GeneratedWorkflow>;
validate(workflow: GeneratedWorkflow): Promise<ValidationResult>;
// Deployment
deploy(workflow: GeneratedWorkflow): Promise<DeployedWorkflow>;
listWorkflows(): Promise<DeployedWorkflow[]>;
activateWorkflow(id: string): Promise<void>;
deactivateWorkflow(id: string): Promise<void>;
deleteWorkflow(id: string): Promise<void>;
}

V1 should not invent a new configuration model. It should inherit the current n8n-as-code operating model:
- local workspace config in `n8nac-config.json`
- global per-host API key store
- one active backend instance per workspace at a time
interface N8nEngineConfig {
host: string;
apiKey: string;
syncFolder: string;
projectId: string;
projectName: string;
instanceIdentifier?: string;
}

Resolution order for V1 should match the current implementation philosophy:
- workspace-local config (`n8nac-config.json`)
- stored API key for the configured host
- editor settings / environment fallback when relevant
This avoids a second setup story and keeps Yagr aligned with the current n8n-as-code UX.
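As a sketch, the resolution order above could look like this (function and parameter names are hypothetical; the real logic lives in the existing n8n-as-code packages):

```typescript
// Hypothetical sketch of the V1 config resolution order — not the shipped code.
type PartialConfig = { host?: string; apiKey?: string };
type Env = Record<string, string | undefined>;

function resolveConfig(
  workspace: PartialConfig,         // 1. n8nac-config.json
  keyStore: Record<string, string>, // 2. global per-host API key store
  env: Env,                         // 3. editor settings / environment fallback
): { host: string; apiKey: string } {
  const host = workspace.host ?? env.N8N_HOST;
  if (!host) throw new Error('No n8n host configured');
  const apiKey = workspace.apiKey ?? keyStore[host] ?? env.N8N_API_KEY;
  if (!apiKey) throw new Error(`No API key stored for ${host}`);
  return { host, apiKey };
}
```

The point is that each source only fills what the previous one left undefined, so a workspace with a committed `n8nac-config.json` and a stored key needs zero extra setup.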
V1 — N8nEngine implements Engine:
- `searchNodes` → `@yagr/skills` KnowledgeSearch
- `nodeInfo` → `@yagr/skills` NodeSchemaProvider
- `searchTemplates` → `@yagr/skills` template index
- `generateWorkflow` → `@yagr/transformer` (AST → TypeScript → JSON)
- `validate` → `@yagr/skills` WorkflowValidator
- `deploy` → `@yagr/cli` sync engine (POST to n8n API)
V2 — YagrNativeEngine implements Engine:
- `searchNodes` → yagr-engine's node registry (same integrations: Slack, Sheets, Twilio...)
- `generateWorkflow` → produce `*.yagr.py` files with the `@node`/`@links` DSL
- `deploy` → yagr-engine's native runner (no n8n instance needed)
- Same interface, same integrations, AI-native architecture. Agent code doesn't change.
import { generateText, tool } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { N8nEngine } from './engine/n8n-engine';
// Engine is injected — swap N8nEngine for YagrNativeEngine in V2
const engine = new N8nEngine({ skills, transformer, cli });
const result = await generateText({
model: anthropic('claude-sonnet-4'),
system: buildSystemPrompt(engine), // Engine provides available nodes context
tools: buildTools(engine), // All tools delegate to engine interface
maxSteps: 15,
messages: conversation.getHistory(chatId),
});

The agent's reasoning loop:
- Understand: Parse user intent from natural language
- Search: Find relevant n8n nodes via the ontology (`searchNodes`, `nodeInfo`)
- Plan: Decide which nodes to compose and how to connect them
- Generate: Produce a TypeScript workflow using the Transformer
- Validate: Check the workflow against n8n schemas
- Deploy: Push to n8n instance and activate
- Confirm: Report back to the user with what was created
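The validate-before-deploy gate in this loop can be sketched as a retry wrapper (synchronous and hypothetical for brevity; in reality the loop is driven by the LLM through tool calls, with validation errors fed back into the next generation step):

```typescript
// Hypothetical sketch: regenerate until validation passes, then hand off to deploy.
type ValidationResult = { valid: boolean; errors: string[] };

function generateUntilValid(
  generate: () => string,                     // produce a workflow candidate
  validate: (wf: string) => ValidationResult, // check against engine schemas
  maxAttempts = 3,
): string {
  let lastErrors: string[] = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const workflow = generate();
    const result = validate(workflow);
    if (result.valid) return workflow; // only validated workflows reach deploy
    lastErrors = result.errors;        // would be fed back into the next attempt
  }
  throw new Error(`No valid workflow after ${maxAttempts} attempts: ${lastErrors.join('; ')}`);
}
```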
Each tool delegates to the Engine interface:
// Example: searchNodes tool — engine-agnostic
const searchNodes = (engine: Engine) => tool({
description: 'Search for nodes that match a capability. ' +
'Use this to find the right node for an automation task.',
parameters: z.object({
query: z.string().describe('What the node should do, e.g. "send slack message"'),
}),
execute: async ({ query }) => {
const results = await engine.searchNodes(query);
return results.map(r => ({
name: r.name,
type: r.type,
description: r.description,
category: r.category,
}));
},
});
// Example: generateWorkflow tool — engine-agnostic
const generateWorkflow = (engine: Engine) => tool({
description: 'Generate a validated workflow from a specification. ' +
'Call this after you have identified the right nodes and their configuration.',
parameters: z.object({
name: z.string(),
nodes: z.array(z.object({
name: z.string(),
type: z.string(),
parameters: z.record(z.unknown()),
})),
connections: z.array(z.object({
from: z.string(),
to: z.string(),
})),
}),
execute: async (spec) => {
const workflow = await engine.generateWorkflow(spec);
const validation = await engine.validate(workflow);
return { workflow, validation };
// V1: generates n8n JSON via Transformer
// V2: generates *.yagr.py via yagr-engine DSL
},
});

User intent
│
▼
┌─────────────────────┐
│ YagrAgent │ "I need a Slack trigger, an IF node, and a Twilio node"
│ (AI SDK + Tools) │
└─────────┬───────────┘
│ WorkflowSpec (Zod schema)
▼
┌─────────────────────┐
│ Engine interface │ engine.generateWorkflow(spec)
│ │ engine.validate(workflow)
│ │ engine.deploy(workflow)
├─────────┬───────────┤
│ V1: N8nEngine │ V2: YagrNativeEngine
│ │
│ Skills → search │ Registry → search
│ Transformer → gen │ DSL codegen → *.yagr.py
│ Validator → check │ LibCST → validate
│ CLI sync → deploy │ Runner → deploy
└─────────────────────┘
The agent doesn't know which engine runs underneath. Same tools, same reasoning, different backend.
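A minimal sketch of that swap, with the Engine surface reduced to a single method (class names follow this blueprint; the stub bodies are illustrative):

```typescript
// Engine reduced to one method for illustration.
interface Engine {
  searchNodes(query: string): Promise<{ name: string; type: string }[]>;
}

class N8nEngine implements Engine {
  async searchNodes(_query: string) {
    // V1: would delegate to @yagr/skills KnowledgeSearch (stubbed here)
    return [{ name: 'Slack', type: 'n8n-nodes-base.slack' }];
  }
}

class YagrNativeEngine implements Engine {
  async searchNodes(_query: string) {
    // V2: would delegate to yagr-engine's node registry (stubbed here)
    return [{ name: 'Slack', type: 'slack.message' }];
  }
}

// Migrating n8n → yagr-engine is a config change, not a rewrite:
function createEngine(backend: 'n8n' | 'yagr-engine'): Engine {
  return backend === 'n8n' ? new N8nEngine() : new YagrNativeEngine();
}
```

Everything above the `createEngine` call site — tools, prompts, the reasoning loop — depends only on `Engine`.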
The gateway is a thin adapter that converts external messages to a standard format and routes them to the agent. It does NOT contain business logic.
// Gateway contract — every adapter implements this
interface Gateway {
/** Start listening for messages */
start(): Promise<void>;
/** Stop listening */
stop(): Promise<void>;
/** Send a message back to the user */
reply(chatId: string, message: string): Promise<void>;
}
// Message format — all gateways normalize to this
interface InboundMessage {
chatId: string;
userId: string;
text: string;
source: 'telegram' | 'web' | 'cli' | 'api';
metadata?: Record<string, unknown>;
}

The existing plugin architecture in this repository follows this shape:
User → Chat UI → Gateway → Plugin System → Agent (LLM)
│
├── before_prompt_build hook (context injection)
├── registerTool (tool registration)
├── registerCli (CLI commands)
└── registerService (background services)
What's good:
- Plugin-based context injection (`before_prompt_build`): The right knowledge is injected at the right time
- Tool abstraction: Tools are self-describing (schema + execute function)
- Clean separation between gateway (message transport) and plugins (capabilities)
What we take:
- The pattern of context injection per conversation (we load the relevant ontology subset)
- The tool → CLI passthrough pattern (our tools call `n8nac` commands under the hood)
What we DON'T take:
- Any dependency on a third-party plugin SDK — we use Vercel AI SDK tools natively
- Their gateway implementation — we build our own thin adapters
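As a sketch, the normalization such a thin adapter performs — a raw update mapped to the InboundMessage contract defined earlier (the Telegram payload here is a simplified subset of the Bot API shape):

```typescript
// Hypothetical adapter step: raw Telegram update → InboundMessage.
interface InboundMessage {
  chatId: string;
  userId: string;
  text: string;
  source: 'telegram' | 'web' | 'cli' | 'api';
}

// Simplified subset of Telegram's Update object — illustrative only.
interface TelegramUpdate {
  message: { chat: { id: number }; from: { id: number }; text?: string };
}

function fromTelegram(update: TelegramUpdate): InboundMessage {
  return {
    chatId: String(update.message.chat.id),
    userId: String(update.message.from.id),
    text: update.message.text ?? '',
    source: 'telegram',
  };
}
```

Nothing downstream of this function knows or cares which transport the message arrived on.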
| Gateway | Priority | Notes |
|---|---|---|
| CLI | P0 | Interactive terminal. Essential for testing and power users |
| HTTP API | P0 | REST + SSE/WebSocket. Foundation for all web UIs |
| Telegram | P1 | Widest reach for consumer users |
| Web UI | P2 | Hosted chat widget. Separate frontend package later |
| Dimension | Generic workflow-native agent | Yagr |
|---|---|---|
| Brain | n8n workflow (static, monolithic) | TypeScript program (dynamic, composable) |
| Knowledge | None. Claude improvises | Full node ontology (537 n8n nodes today, yagr-engine nodes tomorrow) |
| Tool creation | Code Node + `helpers.httpRequest()` | Real typed nodes (Slack, Google Sheets, Twilio...) |
| Validation | Test after deploy, retry if fails | Validate before deploy, correct at generation time |
| Scope | Chat assistant + task manager + memory | Focused: text → automation. Does one thing extremely well |
| Runtime | Depends on an orchestration stack around n8n | V1: Node.js + n8n. V2: yagr-engine (self-contained, replaces n8n entirely) |
| Portability | VPS-only, docker-compose | npm package. Runs anywhere Node.js runs |
| Engine lock-in | Hardcoded to n8n forever | Engine interface — swap backends without rewriting the agent |
The agent tracks what it has deployed:
interface ManagedWorkflow {
id: string; // Internal ID
engine: 'n8n' | 'yagr-engine';
runtimeWorkflowId: string; // ID in the active backend runtime
name: string; // Human-readable name
description: string; // What this automation does
createdFromPrompt: string; // The original user request
createdAt: Date;
updatedAt: Date;
active: boolean;
nodes: string[]; // Node types used (for searchability)
workflowFile?: string; // Local source path (.workflow.ts in V1, *.yagr.py in V2)
summary?: string; // Short natural-language explanation
lastRuntimeState?: 'idle' | 'running' | 'paused' | 'error';
}

This registry is not just inventory. It is Yagr's durable working memory about what it has created for a user.
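A sketch of what "searchable memory" means in practice — the keyword matching below is deliberately naive and illustrative, not the shipped lookup logic:

```typescript
// ManagedWorkflow reduced to the fields used for lookup.
interface ManagedWorkflow {
  id: string;
  name: string;
  createdFromPrompt: string;
  nodes: string[];
}

// Match a natural-language reference against name, original prompt, and node types.
function findWorkflows(registry: ManagedWorkflow[], query: string): ManagedWorkflow[] {
  const terms = query.toLowerCase().split(/\s+/);
  return registry.filter(w =>
    terms.some(t =>
      w.name.toLowerCase().includes(t) ||
      w.createdFromPrompt.toLowerCase().includes(t) ||
      w.nodes.some(n => n.toLowerCase().includes(t)),
    ),
  );
}
```

This is the kind of lookup that lets a request like "disable the Slack alert you built" resolve to a concrete runtime workflow ID.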
Yagr should be able to re-open a workflow as if re-opening a conversation:
interface WorkflowMemory {
workflowId: string;
createdFromPrompt: string;
lastUserIntent?: string;
lastAgentSummary?: string;
relatedEvents?: string[];
}

Examples:
- "Update the Slack alert you built last week to also send SMS"
- "Why does my daily report workflow fail on Mondays?"
- "Disable the onboarding automation you created for me"
The memory object is lightweight because the workflow artifact itself is the real memory. Yagr only stores enough metadata to find it, explain it, and continue the dialogue.
Simple and in-process for V1. No need for PostgreSQL or vector search at launch.
interface ConversationStore {
getHistory(chatId: string, limit?: number): Message[];
addMessage(chatId: string, message: Message): void;
clear(chatId: string): void;
}

Backed by a JSON file or SQLite for persistence. No premature optimization.
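A minimal in-process sketch of the store (Message is reduced to two fields; a persistent variant would flush the map to a JSON file or SQLite on each write):

```typescript
// Illustrative in-memory implementation of the ConversationStore contract.
interface Message { role: 'user' | 'assistant'; content: string }

class InMemoryConversationStore {
  private histories = new Map<string, Message[]>();

  getHistory(chatId: string, limit = 50): Message[] {
    // Return the most recent messages, oldest first.
    return (this.histories.get(chatId) ?? []).slice(-limit);
  }

  addMessage(chatId: string, message: Message): void {
    const history = this.histories.get(chatId) ?? [];
    history.push(message);
    this.histories.set(chatId, history);
  }

  clear(chatId: string): void {
    this.histories.delete(chatId);
  }
}
```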
Credentials are the biggest practical friction point in V1.
Yagr should therefore track credential requirements explicitly for every generated workflow:
interface CredentialRequirement {
nodeName: string;
credentialType: string;
displayName: string;
required: boolean;
status: 'missing' | 'linked' | 'unknown';
helpUrl?: string;
}

When Yagr creates a workflow on n8n in V1, it should return:
- the direct link to the created workflow
- the list of missing credential requirements
- the next action the user must take in n8n UI
This keeps the product focused while making the friction explicit and actionable.
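The post-deploy report could be assembled like this (the DeployReport shape and `buildDeployReport` are hypothetical; CredentialRequirement follows the interface above, and `/workflow/<id>` is n8n's workflow URL path):

```typescript
// CredentialRequirement reduced to the fields used here.
interface CredentialRequirement {
  nodeName: string;
  credentialType: string;
  displayName: string;
  required: boolean;
  status: 'missing' | 'linked' | 'unknown';
}

interface DeployReport {
  workflowUrl: string;                      // direct link to the created workflow
  missingCredentials: CredentialRequirement[];
  nextAction: string;                       // what the user must do in the n8n UI
}

function buildDeployReport(
  host: string,
  workflowId: string,
  creds: CredentialRequirement[],
): DeployReport {
  const missing = creds.filter(c => c.required && c.status === 'missing');
  return {
    workflowUrl: `${host}/workflow/${workflowId}`,
    missingCredentials: missing,
    nextAction: missing.length
      ? `Open the workflow and connect: ${missing.map(c => c.displayName).join(', ')}`
      : 'Workflow is ready to activate',
  };
}
```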
Credential handling should improve in staged levels instead of jumping straight to full automation.
Level 0 — MVP
- Deploy workflow
- Return workflow URL
- Return missing credential requirements
- User completes credential setup in n8n UI
Level 1 — Assisted API setup
- Yagr lists existing credentials via n8n API
- Yagr detects reusable credentials that already exist
- Yagr suggests or auto-links matching credentials when safe
Level 2 — Simple credential creation via API
- API key
- bearer token
- username/password
- service-account style secrets
These are good candidates because they are deterministic and don't require browser consent flows.
Level 3 — OAuth-aware setup
- Deferred until UX and security model are explicit
- Likely still handed off to n8n UI in many cases
The rule is simple: Yagr should automate credential flows only when doing so is safer and simpler than redirecting the user.
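The Level 1 "auto-link when safe" rule can be sketched as: match by credential type, and treat any ambiguity as unsafe (illustrative logic only):

```typescript
// Hypothetical Level 1 check: suggest an existing credential for a node's
// required credential type, but only when the match is unambiguous.
interface ExistingCredential { id: string; type: string; name: string }

function suggestCredential(
  requiredType: string,
  existing: ExistingCredential[],
): ExistingCredential | null {
  const matches = existing.filter(c => c.type === requiredType);
  // Auto-link only when exactly one candidate exists — otherwise ask the user.
  return matches.length === 1 ? matches[0] : null;
}
```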
{
"dependencies": {
"ai": "^4.x", // Vercel AI SDK core
"@ai-sdk/anthropic": "^2.x", // Anthropic provider
"@ai-sdk/openai": "^2.x", // OpenAI provider (optional)
"@yagr/skills": "workspace:*", // Ontology (V1: n8n nodes)
"@yagr/transformer": "workspace:*", // JSON ↔ TypeScript (V1)
"@yagr/cli": "workspace:*", // Sync / deploy (V1)
"zod": "^3.x", // Schema definitions
"telegraf": "^4.x" // Telegram gateway (P1)
}
}

Note: `@yagr/*` are the new package names. During transition, the actual workspace references may still point to `@n8n-as-code/*` and `n8nac` — the rename is a separate migration step.
- LangChain / LangGraph — too heavy, not needed
- Mastra — competing framework, we'd depend on their opinions
- PostgreSQL / Supabase — premature for V1
- Express / Fastify — stdlib `http` or a lightweight framework is sufficient
- Any vector DB — not needed for V1; add when memory becomes a feature
- yagr-engine as a dependency — it's a separate project, connected via Engine interface when ready
- `packages/yagr/` package scaffold
- `Engine` interface + `N8nEngine` implementation
- `YagrAgent` class with first Vercel AI SDK run loop
- Tools: searchNodes, nodeInfo, searchTemplates, generateWorkflow, validate, deploy, list/manage workflow (engine-backed scaffold)
- CLI gateway (interactive terminal scaffold)
- Single-instance configuration model: one user configures one active backend instance at a time (same model as n8n-as-code today)
- After deploy, return workflow URL + explicit missing-credentials checklist
- End-to-end: "Create a workflow that sends a Slack message every morning" → deployed on n8n
- HTTP API gateway (REST + SSE)
- Telegram gateway
- Managed workflow registry (list, update, delete deployed automations)
- Conversation history persistence
- Assisted credential linking using existing n8n API credential endpoints
- Docker one-liner: `docker run -e N8N_HOST=... -e ANTHROPIC_API_KEY=... yagr`
- Rename GitHub repo `n8n-as-code` → `yagr`
- Republish npm packages under `@yagr/*` (keep `@n8n-as-code/*` as deprecated aliases)
- New README with hero GIF and consumer pitch
- `npx create-yagr` — scaffold in 30 seconds
- `yagr.dev` website
- `YagrNativeEngine` implementing the `Engine` interface
- RPC bridge to yagr-engine Python core
- Agent generates `*.yagr.py` DSL files instead of n8n JSON
- yagr-engine runner as execution backend (n8n no longer required)
- Library nodes covering the same integrations as n8n (Slack, Sheets, Twilio...)
- Unified visual editor (React Flow from yagr-engine + VS Code host)
- Branding — Decided. Product = Yagr (agent layer). Engine = yagr-engine (replaces n8n — same integrations, AI-native architecture). V1 uses n8n as backend; V2 uses yagr-engine as a self-contained replacement.
- Scope control — Decided. Stay laser-focused on "text → automation". No generic assistant surface for now. Workflows themselves are Yagr's durable memory.
- Multi-instance — Decided. Single instance for the MVP. One user configures one active backend instance at a time, matching the current n8n-as-code operating model. Multi-instance can come later.
- Credential management — Decided. V1 uses a hybrid approach. Yagr deploys the workflow, returns a direct link to it, and shows a missing-credentials checklist. n8n's public API does support credential operations (GET/POST/PATCH/DELETE `/credentials` and schema lookup), so this friction should be reduced quickly after MVP, but not fully automated on day one because OAuth and secret handling add security and UX complexity.
- Engine bridge protocol (V1) — Resolved. Not applicable in V1 beyond the existing n8n API and the current TypeScript package layer. The true engine bridge problem starts in V2, where the recommendation remains stdio JSONL RPC to talk to yagr-engine.
- Pricing/model — Decided. Self-hosted open source first. Cloud offering later.