
Commit ae72299

chore: fix lint formatting after rebase
https://claude.ai/code/session_0191H4s9PX5VxKfmKnrq3aYF
1 parent dad027f commit ae72299

17 files changed

Lines changed: 81 additions & 29 deletions

docs/research/rest-owl/addyosmani-agentic-engineering-deep-dive.md

Lines changed: 7 additions & 0 deletions
@@ -6,32 +6,38 @@
 ## Key Takeaways

 ### Planning as Foundation
+
 - "You start with a plan. Before prompting anything, you write a design doc or spec."
 - Planning breaks complex projects into well-defined tasks and establishes architecture before AI involvement
 - This is what distinguishes agentic engineering from vibe coding

 ### Task Scoping & Review
+
 - "Give the AI agent a well-scoped task from your plan. It generates code. You review with the same rigor you'd apply to a human teammate's PR."
 - Human remains architect and quality gatekeeper

 ### Testing as the Critical Differentiator
+
 - "With a solid test suite, an AI agent can iterate in a loop until tests pass, giving you high confidence"
 - Without comprehensive tests, systems become fragile and unreliable
 - Testing is what separates professional agentic engineering from amateur vibe coding

 ### Success Patterns
+
 - Specification quality directly improves AI output
 - Comprehensive test suites enable confident delegation
 - Clean architecture reduces hallucinations
 - **AI rewards good engineering more than traditional coding does**

 ### Critical Failure Modes
+
 - Skipping design thinking
 - Not reviewing diffs or understanding generated code
 - Absence of meaningful test coverage
 - Treating AI as magic rather than a tool requiring discipline

 ### Skill Gap Warning
+
 - Agentic engineering **disproportionately benefits senior engineers**
 - Junior developers risk building code they cannot debug — "dangerous skill atrophy"

@@ -46,6 +52,7 @@ This validates our entire architecture and highlights one gap:
 ## Design Implication

 Add a Phase 7 or post-build "handoff" step that:
+
 - Walks the user through the architecture decisions
 - Explains key code patterns used
 - Identifies areas that will need human attention as the project grows
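The test-loop claim under "Testing as the Critical Differentiator" can be sketched as a small control loop. This is an illustrative sketch, not code from the post: `agent_test_loop` and the toy stubs are hypothetical names, with a trivial `eval`-based stand-in for a real test suite.

```python
from typing import Callable, Optional, Tuple

def agent_test_loop(
    propose_patch: Callable[[str], str],           # agent call: feedback -> candidate code
    run_tests: Callable[[str], Tuple[bool, str]],  # candidate -> (passed, failure report)
    max_iterations: int = 5,
) -> Optional[str]:
    """Loop the agent until the test suite passes or the budget runs out."""
    feedback = "initial task description"
    for _ in range(max_iterations):
        candidate = propose_patch(feedback)
        passed, report = run_tests(candidate)
        if passed:
            return candidate   # passing tests are the confidence signal
        feedback = report      # feed the failures back into the next attempt
    return None                # budget exhausted: escalate to a human reviewer

# Toy demo: the "agent" fixes an off-by-one only after seeing the failure report.
def toy_agent(feedback: str) -> str:
    return "n + 1" if "expected 4" in feedback else "n"

def toy_tests(body: str) -> Tuple[bool, str]:
    add_one = eval("lambda n: " + body)  # stand-in for running a real test suite
    return (True, "") if add_one(3) == 4 else (False, "add_one(3) expected 4")

print(agent_test_loop(toy_agent, toy_tests))  # -> n + 1
```

The loop also makes the failure mode concrete: without a meaningful `run_tests`, the agent has no convergence signal and the budget is spent blind.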

docs/research/rest-owl/biggo-missing-middle-tutorials.md

Lines changed: 7 additions & 1 deletion
@@ -1,6 +1,7 @@
 # The "Missing Middle" in Tutorials and Instructions

 ## Sources
+
 - **BigGo Finance - Developer Tutorials Leave Beginners Lost**: https://finance.biggo.com/news/202509220713_Developer_Tutorials_Too_Complex_for_Beginners
 - **CG Cookie - Reliance on Tutorials**: https://cgcookie.com/community/6394-does-anyone-else-get-the-feeling-that-they-re-reliant-on-tutorials-for-everything-or-just-not-retaining-knowledge-in-general
 - **DEV Community - Identifying Knowledge Gaps**: https://dev.to/bgord/how-do-i-identify-my-knowledge-gaps-and-learn-4mlc

@@ -13,6 +14,7 @@ Content exists for complete beginners ("Hello World") and for experts (advanced
 ## The Curse of Knowledge

 Tutorial authors suffer from the **curse of knowledge**: the inability to remember what it was like not to know something. This causes them to:
+
 - Skip steps that seem "obvious" to them
 - Use jargon without explanation
 - Assume prerequisite knowledge without stating it

@@ -21,25 +23,29 @@ Tutorial authors suffer from the **curse of knowledge**: the inability to rememb
 ## Key Observations

 ### Tutorials as Peer Communication
+
 Many tutorials function like academic papers -- sharing discoveries among professionals who already understand the ecosystem. They're not actually teaching; they're showing off techniques to peers.

 ### The "Crumble" Effect
+
 From CG Cookie: "That feeling when you're a beginner, a few months/years in and you've watched tutorials and learnt concepts, but when it comes to making something from scratch you just crumble."

 This is the exact "rest of the owl" moment -- you can follow along but can't reproduce independently.

 ### Invisible Prerequisites
+
 Theoretical knowledge gaps are particularly hard to identify. "It's usually hidden in a talk or a well written article or post." You discover what you didn't know by accident, not by systematic study.

 ### Two Principles for Bridging the Gap
+
 1. **Repetition** of important information
 2. **Explicit explanation** of assumed knowledge

 ## Relevance to Plugin Development

 1. **AI doesn't have the curse of knowledge** -- it can be prompted to explain every intermediate step, state every assumption, and define every term. This is a fundamental advantage.
 2. **A spec-generation plugin should detect and fill assumption gaps** -- when generating a plan, it should make implicit knowledge explicit.
-3. **The "crumble" effect maps to the scaffolding problem** -- users can follow AI-generated code but can't modify or extend it. Better specs would help users understand *why* the code is structured as it is.
+3. **The "crumble" effect maps to the scaffolding problem** -- users can follow AI-generated code but can't modify or extend it. Better specs would help users understand _why_ the code is structured as it is.
 4. **Systematic gap identification is a product feature** -- instead of discovering gaps by accident, a plugin could analyze what the user knows (from their prompt/context) and proactively explain what they'll need to know.

 ## Criticism

docs/research/rest-owl/github-spec-driven-development.md

Lines changed: 3 additions & 1 deletion
@@ -1,6 +1,7 @@
 # Spec-Driven Development: From Rough Idea to Detailed Specification

 ## Sources
+
 - **GitHub Blog - Spec Kit**: https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/
 - **Martin Fowler - SDD Tools**: https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html
 - **Augment Code Guide**: https://www.augmentcode.com/guides/what-is-spec-driven-development

@@ -15,7 +16,7 @@ A development paradigm where well-crafted software requirement specifications se
 ## The Core Workflow

-1. **Specify**: Share your system idea with an AI agent (the *what* and *why*). Agent generates a detailed specification.
+1. **Specify**: Share your system idea with an AI agent (the _what_ and _why_). Agent generates a detailed specification.
 2. **Plan**: Define the technical approach -- frameworks, tools, languages.
 3. **Tasks**: Break everything into small, structured work packages. Instead of "build authentication," you get "create a user registration endpoint that validates email format."
 4. **Implement**: Agent implements each work package.

@@ -33,6 +34,7 @@ A development paradigm where well-crafted software requirement specifications se
 ## SDD vs Vibe Coding

 SDD is explicitly positioned as the antidote to vibe coding's weaknesses:
+
 - Vibe coding: "describe goal, get code back, often looks right but doesn't quite work"
 - SDD: "write complete requirements and technical specs before passing to AI agent"
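The "Tasks" step above turns "build authentication" into a unit like "create a user registration endpoint that validates email format". A minimal sketch of what such a scoped task might yield; the function name, the deliberately simple regex, and the password rule are illustrative assumptions, not from the sources.

```python
import re

# Deliberately simple format check; a real spec would pin down the exact rule.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def register_user(email: str, password: str) -> dict:
    """Validate registration inputs and return a result (storage omitted)."""
    if not EMAIL_RE.match(email):
        return {"ok": False, "error": "invalid email format"}
    if len(password) < 8:
        return {"ok": False, "error": "password too short"}
    return {"ok": True, "email": email}

print(register_user("ada@example.com", "long-enough-pass"))
# -> {'ok': True, 'email': 'ada@example.com'}
```

The point of the workflow is that a task this small has an unambiguous pass/fail test, which is what makes agent implementation reviewable.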

docs/research/rest-owl/github-spec-kit-architecture.md

Lines changed: 7 additions & 6 deletions
@@ -1,6 +1,7 @@
 # GitHub Spec Kit: Open Source Spec-Driven Development Toolkit

 ## Sources
+
 - **GitHub Repo**: https://github.com/github/spec-kit
 - **GitHub Blog Announcement**: https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/
 - **Visual Studio Magazine**: https://visualstudiomagazine.com/articles/2025/09/03/github-open-sources-kit-for-spec-driven-ai-development.aspx

@@ -19,7 +20,7 @@ Open source toolkit (MIT license, released Sept 2025) that structures how AI cod
 ## Workflow (Slash Commands)

 1. **`/speckit.constitution`** -- Non-negotiable project rules. Other commands refer back to this.
-2. **`/speckit.specify`** -- Describe features, pages, user flow (the *what*).
+2. **`/speckit.specify`** -- Describe features, pages, user flow (the _what_).
 3. **`/speckit.plan`** -- Generate technical architecture and implementation plan.
 4. **`/speckit.tasks`** -- Break plan into small, testable implementation tasks.
 5. **Implement** -- AI agent executes tasks to generate code.

@@ -40,11 +41,11 @@ Optional: `/speckit.clarify` (resolve ambiguity), `/speckit.analyze` (consistenc

 ## Use Cases

-| Scenario | How Spec Kit Helps |
-|----------|-------------------|
-| Greenfield (zero-to-one) | Upfront spec ensures AI builds what you intend |
-| Legacy modernization | Capture business logic in spec, rebuild without tech debt |
-| Brownfield extensions | Fit into existing codebases without prior specs |
+| Scenario                 | How Spec Kit Helps                                        |
+| ------------------------ | --------------------------------------------------------- |
+| Greenfield (zero-to-one) | Upfront spec ensures AI builds what you intend            |
+| Legacy modernization     | Capture business logic in spec, rebuild without tech debt |
+| Brownfield extensions    | Fit into existing codebases without prior specs           |

 ## Key Benefits

docs/research/rest-owl/karpathy-agentic-engineering.md

Lines changed: 2 additions & 1 deletion
@@ -1,11 +1,12 @@
 # Agentic Engineering — Andrej Karpathy (Feb 2026)

 **Sources**:
+
 - https://addyosmani.com/blog/agentic-engineering/ (Addy Osmani's comprehensive summary)
 - https://www.ibm.com/think/topics/agentic-engineering (IBM)
 - https://www.glideapps.com/blog/what-is-agentic-engineering (Glide)
 - https://www.nxcode.io/resources/news/agentic-engineering-complete-guide-vibe-coding-ai-agents-2026 (NxCode)
-**Date read**: 2026-03-20
+**Date read**: 2026-03-20

 ## Key Takeaways

docs/research/rest-owl/kiro-aws-spec-driven-ide.md

Lines changed: 3 additions & 0 deletions
@@ -1,6 +1,7 @@
 # Kiro (AWS): Spec-Driven Agentic IDE

 ## Sources
+
 - **InfoQ - Beyond Vibe Coding**: https://www.infoq.com/news/2025/08/aws-kiro-spec-driven-agent/
 - **The New Stack - Testing Kiro**: https://thenewstack.io/aws-kiro-testing-an-ai-ide-with-a-spec-driven-approach/
 - **DEV Community - What I Learned Using SDD with Kiro**: https://dev.to/aws-builders/what-i-learned-using-specification-driven-development-with-kiro-pdj

@@ -27,13 +28,15 @@ This prevents "spaghetti code generation" from free-wheeling chat agents.
 ## User Experience Reports

 ### Positive
+
 - "Spent more time upfront articulating what I wanted to build, but then could step back and let it execute"
 - "The difference between being a hands-on manager versus setting clear expectations and trusting the process"
 - "Kiro did not invent good engineering practices. It made them unavoidable."
 - "It taught users how to think before writing code. That turns out to be the hardest and most valuable part of engineering."
 - Teams report "reducing time to customer value from weeks to days"

 ### Negative
+
 - 50 interactions/month on free tier runs out fast
 - Spec overhead is friction for simple tasks
 - Learning curve for EARS notation and formal spec writing
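For readers unfamiliar with the EARS notation mentioned in the last bullet: EARS (Easy Approach to Requirements Syntax) constrains requirements to a few sentence templates, such as the event-driven pattern "When [trigger], the [system] shall [response]". A tiny illustrative sketch, with a hypothetical helper and example text that are not from the sources:

```python
def ears_event_driven(trigger: str, system: str, response: str) -> str:
    """Render an event-driven EARS requirement from its three slots."""
    return f"When {trigger}, the {system} shall {response}."

# Hypothetical example requirement in the EARS event-driven template.
req = ears_event_driven(
    "the user submits the registration form",
    "auth service",
    "validate the email format before creating the account",
)
print(req)
```

The learning curve is less about the templates themselves than about deciding which pattern (event-driven, state-driven, unwanted behavior, etc.) each requirement belongs to.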

docs/research/rest-owl/martinfowler-sdd-tools-critique.md

Lines changed: 10 additions & 0 deletions
@@ -1,6 +1,7 @@
 # Martin Fowler (Birgitta Boeckeler): Critical Analysis of SDD Tools

 ## Source
+
 - **Martin Fowler / Birgitta Boeckeler - "Understanding Spec-Driven Development: Kiro, spec-kit, and Tessl"**: https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html

 ## Overview

@@ -10,42 +11,51 @@ This is a deeply critical, hands-on evaluation of three SDD tools by Birgitta Bo
 ## Three Levels of SDD

 The author distinguishes three implementation levels:
+
 1. **Spec-first**: Specs guide initial development, then are discarded
 2. **Spec-anchored**: Specs persist for ongoing feature evolution
 3. **Spec-as-source**: Specs become the primary maintained artifact; code is generated from them

 ## Tool-by-Tool Assessment

 ### Kiro (AWS)
+
 - Lightweight, intuitive three-step workflow: Requirements -> Design -> Tasks
 - But: fixed workflows unsuitable for varying problem sizes
 - A small bug generated "4 user stories with 16 acceptance criteria" -- massive overkill

 ### GitHub Spec Kit
+
 - Customizable, uses a "constitution" to enforce architectural principles
 - But: generated excessive markdown files that were "repetitive," "verbose and tedious to review"
 - The author states she would **"rather review code than all these markdown files"**

 ### Tessl
+
 - Only tool pursuing spec-anchored and spec-as-source approaches
 - One-to-one spec-to-file mapping reduces LLM interpretation errors
 - Most ambitious but also most unproven

 ## Critical Weaknesses (Across All Tools)

 ### Workflow Mismatch
+
 Fixed workflows don't fit all problem sizes. Using heavyweight spec processes for small fixes creates absurd overhead.

 ### Review Burden
+
 SDD doesn't eliminate review -- it shifts it from code review to spec review, and the specs can be MORE tedious to review than code.

 ### Instruction Non-Compliance
+
 Despite comprehensive specifications, agents frequently ignored instructions or misinterpreted existing code as new specifications, creating duplicates. The spec doesn't guarantee the AI will follow it.

 ### Unclear Target Users
+
 Documentation doesn't clarify whether SDD suits small fixes, large features, or requires cross-functional teams.

 ### Semantic Diffusion
+
 The term "spec-driven development" is already poorly defined and experiencing semantic diffusion -- "spec" is becoming synonymous with "detailed prompt."

 ## The MDD Parallel (Critical Warning)

docs/research/rest-owl/medium-ai-scaffolding-platforms.md

Lines changed: 6 additions & 0 deletions
@@ -1,6 +1,7 @@
 # AI-Assisted Scaffolding: Tools That Generate Complete Projects from Descriptions

 ## Sources
+
 - **Medium - AI Coding Platform Wars 2026**: https://medium.com/@aftab001x/the-2026-ai-coding-platform-wars-replit-vs-windsurf-vs-bolt-new-f908b9f76325
 - **Anna Arteeva - AI Prototyping Stack Comparison**: https://annaarteeva.medium.com/choosing-your-ai-prototyping-stack-lovable-v0-bolt-replit-cursor-magic-patterns-compared-9a5194f163e9
 - **Mocha - Best AI App Builder 2026**: https://getmocha.com/blog/best-ai-app-builder-2026/

@@ -14,24 +15,28 @@ Vibe coding went from a meme to a $50B+ market. These are scaffolding generators
 ## Key Platforms

 ### Bolt.new (StackBlitz)
+
 - "Prompt to full stack app" in the browser
 - Scaffolds a project in ~8-10 minutes
 - Most framework flexibility
 - Bolt v2 (Oct 2025): autonomous debugging reducing error loops by 98%

 ### Lovable (formerly GPT Engineer)
+
 - Produces the cleanest React code
 - Bi-directional GitHub sync (edit in Lovable or external IDE)
 - Hit $100M ARR in 8 months -- potentially fastest-growing startup in history
 - Very welcoming, non-intimidating interface

 ### Replit
+
 - Full cloud-based development environment with AI agent
 - Autonomous AI Agent 3: plans, codes, and refines end-to-end
 - Revenue jumped from $10M to $100M in 9 months after launching Agent
 - Most autonomous with 30+ integrations

 ### Others
+
 - **v0** (Vercel): UI component generation from descriptions
 - **Cursor**: IDE with AI integration for technical users
 - **Windsurf**: AI-powered IDE competitor

@@ -41,6 +46,7 @@ Vibe coding went from a meme to a $50B+ market. These are scaffolding generators
 **These tools create 60-80% of boilerplate. You finish the last 20-40% that requires judgment, domain knowledge, and debugging skills.**

 Key limitations:
+
 - None generate production-ready code out of the box
 - Demos compress 40 hours of work into 40 minutes by skipping everything that makes software production-grade
 - Beautiful mockups with clean code that you can't actually deploy without technical help

docs/research/rest-owl/playwright-visual-regression-best-practices.md

Lines changed: 2 additions & 1 deletion
@@ -1,11 +1,12 @@
 # Playwright Visual Regression Testing — Best Practices 2025-2026

 **Sources**:
+
 - https://blog.scottlogic.com/2025/08/21/making-visual-comparison-test-maintenance-easier-with-github-actions.html
 - https://www.duncanmackenzie.net/blog/visual-regression-testing/
 - https://oneuptime.com/blog/post/2026-01-27-playwright-visual-testing/view
 - https://testdino.com/blog/playwright-visual-testing/
-**Date read**: 2026-03-20
+**Date read**: 2026-03-20

 ## Key Best Practices

docs/research/rest-owl/pmprompt-competitive-analysis-methodology.md

Lines changed: 11 additions & 7 deletions
@@ -1,6 +1,7 @@
 # Competitive Analysis Methodologies for Product Development

 ## Sources
+
 - **PMPrompt - Competitive Analysis Framework for PMs**: https://pmprompt.com/blog/competitive-analysis-framework
 - **Maven - Competitive Analysis for Product Managers**: https://maven.com/articles/product-competitive-analysis
 - **Cascade - 6 Competitive Analysis Frameworks**: https://www.cascade.app/blog/competitive-analysis-frameworks

@@ -15,30 +16,33 @@ Systematic process of identifying, analyzing, and understanding competitors' str
 ## The Three-Phase Framework: Assess, Benchmark, Strategize

 ### 1. Assess
+
 - Identify direct and indirect competitors
 - Map the forces shaping the market
 - Include emerging and non-obvious competitors

 ### 2. Benchmark
+
 - Analyze each rival's business in detail
 - Feature matrix comparing key product capabilities
 - Synthesize where you stand vs. competitors

 ### 3. Strategize
+
 - Translate insights into recommendations
 - Prioritize areas of opportunity
 - Shore up vulnerabilities
 - Formulate competitive strategy

 ## Key Frameworks

-| Framework | Use Case |
-|-----------|----------|
-| SWOT Analysis | Internal strengths/weaknesses + external opportunities/threats per competitor |
-| Porter's Five Forces | Industry-level competitive dynamics |
-| Feature Matrix | Side-by-side product capability comparison |
-| Perceptual Mapping | Visual positioning on 2 axes (e.g., price vs. quality) |
-| Growth Share Matrix (BCG) | Portfolio evaluation by market growth + market share |
+| Framework                 | Use Case                                                                      |
+| ------------------------- | ----------------------------------------------------------------------------- |
+| SWOT Analysis             | Internal strengths/weaknesses + external opportunities/threats per competitor |
+| Porter's Five Forces      | Industry-level competitive dynamics                                           |
+| Feature Matrix            | Side-by-side product capability comparison                                    |
+| Perceptual Mapping        | Visual positioning on 2 axes (e.g., price vs. quality)                        |
+| Growth Share Matrix (BCG) | Portfolio evaluation by market growth + market share                          |

 ## Turning Insights into Action
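The feature-matrix step of the Benchmark phase above is mechanical enough to sketch in code. Products and features here are hypothetical examples, not from the sources:

```python
# Capabilities to compare side by side (illustrative, not from the sources).
features = ["spec generation", "task breakdown", "test scaffolding"]

# Feature matrix: product -> set of capabilities it has.
matrix = {
    "Our plugin": {"spec generation", "task breakdown"},
    "Tool A": {"spec generation", "test scaffolding"},
    "Tool B": {"spec generation"},
}

# Side-by-side comparison (phase 2: Benchmark).
for product, caps in matrix.items():
    row = " | ".join("yes" if f in caps else "no" for f in features)
    print(f"{product}: {row}")

# Gaps feed phase 3 (Strategize): vulnerabilities to shore up.
gaps = [f for f in features if f not in matrix["Our plugin"]]
print("Gaps to address:", gaps)  # -> ['test scaffolding']
```

The same structure extends to weighted scores per feature if a plain yes/no matrix is too coarse.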
