The goal of this repository is to determine whether coding agents exhibit convergent runtime behavior patterns that can be formalized independently of any single product, model vendor, or orchestration framework.
A runtime pattern is a repeated behavior or control structure that shapes how an agent executes work. The pattern must affect execution semantics rather than only presentation or branding.
Examples:
- how tool calls are initiated and reintegrated,
- how completion is attempted and validated,
- how failures re-enter the loop,
- how state transitions are represented,
- how context compaction is triggered,
- how subagents are bounded and observed.
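The runtime concerns above can be sketched in one generic loop. This is a minimal illustrative sketch, not the implementation of any particular system; every name (`Turn`, `State`, the `plan`/`call_tool`/`validate` hooks) is a hypothetical stand-in:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class State(Enum):
    PLANNING = auto()
    CALLING_TOOL = auto()
    VALIDATING = auto()
    DONE = auto()

@dataclass
class Turn:
    state: State = State.PLANNING
    failures: int = 0
    history: list = field(default_factory=list)

def run(turn, plan, call_tool, validate, max_failures=3):
    """Generic execution loop: tool calls are initiated and their results
    reintegrated into history, failures re-enter the loop, and completion
    is validated rather than assumed. State transitions are explicit."""
    while turn.state is not State.DONE:
        if turn.state is State.PLANNING:
            action = plan(turn.history)  # decide next tool call, or None to finish
            turn.pending = action
            turn.state = State.VALIDATING if action is None else State.CALLING_TOOL
        elif turn.state is State.CALLING_TOOL:
            try:
                result = call_tool(turn.pending)   # initiate the tool call
                turn.history.append(result)        # reintegrate the result
            except Exception as err:
                turn.failures += 1                 # failure re-enters the loop
                turn.history.append(f"error: {err}")
                if turn.failures >= max_failures:
                    raise
            turn.state = State.PLANNING
        elif turn.state is State.VALIDATING:
            # completion is attempted and checked; failure returns to planning
            turn.state = State.DONE if validate(turn.history) else State.PLANNING
    return turn.history
```

Context compaction and subagent bounding would hang off the same skeleton (e.g. compacting `history` before planning, or running a bounded child `run` as one tool call), which is why they are listed as runtime patterns rather than presentation choices.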
Counterexamples:
- visual styling,
- marketing taxonomy,
- vendor-specific naming,
- domain-specific repository rules.
Preferred sources, in order:
- source code,
- official documentation,
- design discussions and issue threads,
- demos and observed behavior,
- secondary commentary.
Whenever evidence comes from observation rather than code or docs, label it clearly.
For each system, collect evidence against the same analytical axes:
- execution loop,
- completion semantics,
- tool failure model,
- state machine,
- context management,
- subagents and delegation,
- policy and hooks,
- UI and telemetry coupling,
- permission model,
- extensibility surface.
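One way to keep evidence comparable across systems, and to enforce the labeling rule for observation-only evidence, is a uniform record per axis. A hypothetical sketch, with all field and enum names illustrative rather than drawn from the repository:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Axis(Enum):
    EXECUTION_LOOP = auto()
    COMPLETION_SEMANTICS = auto()
    TOOL_FAILURE_MODEL = auto()
    STATE_MACHINE = auto()
    CONTEXT_MANAGEMENT = auto()
    SUBAGENTS_AND_DELEGATION = auto()
    POLICY_AND_HOOKS = auto()
    UI_TELEMETRY_COUPLING = auto()
    PERMISSION_MODEL = auto()
    EXTENSIBILITY_SURFACE = auto()

class SourceKind(Enum):
    # values encode the stated preference order: lower is stronger
    SOURCE_CODE = 1
    OFFICIAL_DOCS = 2
    DESIGN_DISCUSSION = 3
    OBSERVED_BEHAVIOR = 4
    SECONDARY_COMMENTARY = 5

@dataclass
class Evidence:
    system: str
    axis: Axis
    kind: SourceKind
    note: str

    @property
    def observational(self) -> bool:
        # evidence from observation rather than code or docs must be labeled clearly
        return self.kind is SourceKind.OBSERVED_BEHAVIOR
```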
Use these tests:
- Does the behavior solve a generic runtime problem?
- Does it appear in more than one system or seem structurally reusable?
- Can it be expressed without referencing one product shell?
- Would removing the behavior materially weaken robustness?
If a behavior fails these tests, it likely belongs in product-specific analysis rather than in the shared runtime contract.
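One strict reading of the tests, sketched as a hypothetical filter (the key names are illustrative; whether a single failed test should exclude a behavior is an assumption, not something the tests themselves settle):

```python
def belongs_in_shared_contract(behavior: dict) -> bool:
    """Strict reading: a behavior enters the shared runtime contract
    only if it passes all four inclusion tests."""
    return all((
        behavior["solves_generic_runtime_problem"],
        behavior["multi_system_or_reusable"],
        behavior["expressible_without_product_shell"],
        behavior["removal_weakens_robustness"],
    ))
```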
Label each pattern with one of four confidence tiers:
- high-confidence convergence: repeated across multiple systems with strong evidence.
- medium-confidence convergence: visible in several systems but with partial evidence or inconsistent implementation.
- emerging but unstable: promising pattern with limited or shifting evidence.
- speculative proposal: design hypothesis not yet strongly supported by observed implementations.
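Because claims should be narrowed rather than defended when evidence weakens, the tiers form an ordered scale. A hypothetical encoding (names and the one-step `downgrade` helper are illustrative assumptions):

```python
from enum import Enum

class Convergence(Enum):
    # definition order encodes strength, strongest first
    HIGH = "repeated across multiple systems with strong evidence"
    MEDIUM = "visible in several systems; partial or inconsistent evidence"
    EMERGING = "promising pattern with limited or shifting evidence"
    SPECULATIVE = "design hypothesis not yet supported by implementations"

def downgrade(level: Convergence) -> Convergence:
    """Narrow a claim by one tier when a new system weakens it;
    the weakest tier stays where it is."""
    order = list(Convergence)
    return order[min(order.index(level) + 1, len(order) - 1)]
```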
- Prefer narrowing a claim over defending an overstated one.
- Update pattern pages when a new system weakens or complicates an earlier conclusion.
- Preserve open questions rather than forcing false certainty.
- Keep the synthesis honest about where convergence stops.