---
title: Coverage Analysis
parent: User Guide
nav_order: 6
---
Three-dimensional coverage analysis: Documentation, Acceptance Criteria, and Automation.
Related: CLI Reference | Configuration | Test Format
SPECTRA produces a unified coverage report with three sections:
| Type | What it measures | Data source |
|---|---|---|
| Documentation | Which docs have linked test cases | source_refs field in test case frontmatter matched against docs/ |
| Acceptance Criteria | Which criteria are covered | criteria field in test case frontmatter + _criteria_index.yaml |
| Automation | Which test cases have automation code | automated_by field in test case frontmatter + code scanning |
Spec 037 — boundary coverage from ISTQB techniques: Test generation now applies six ISTQB test design techniques systematically (EP, BVA, DT, ST, EG, UC). Suites generated after spec 037 typically have 50%+ more test cases in the `boundary` and `negative` categories than pre-037 suites on the same docs. The analysis output exposes this via a `technique_breakdown` map alongside the existing `category_breakdown`.

Spec 038 — algorithmic precision (optional): When the optional Testimize integration is enabled, the AI replaces approximated boundary values with mathematically optimal ones from Testimize's BVA / EP / pairwise / ABC algorithms. Disabled by default.
```shell
# Console output (three sections)
spectra ai analyze --coverage

# JSON output (three top-level keys)
spectra ai analyze --coverage --format json --output coverage.json

# Markdown output
spectra ai analyze --coverage --format markdown --output coverage.md

# Detailed output
spectra ai analyze --coverage --verbosity detailed
```

The JSON report has this structure:

```json
{
  "generated_at": "2026-03-20T10:00:00Z",
  "documentation_coverage": {
    "total_docs": 4,
    "covered_docs": 3,
    "percentage": 75.00,
    "details": [
      { "doc": "docs/auth.md", "test_count": 28, "covered": true, "test_ids": ["TC-001", "..."] },
      { "doc": "docs/admin.md", "test_count": 0, "covered": false, "test_ids": [] }
    ]
  },
  "acceptance_criteria_coverage": {
    "total": 5,
    "covered": 3,
    "percentage": 60.00,
    "has_criteria_file": true,
    "details": [
      { "id": "AC-042", "title": "Payment rejection", "tests": ["TC-134"], "covered": true },
      { "id": "AC-043", "title": "Expired card", "tests": [], "covered": false }
    ]
  },
  "automation_coverage": {
    "total_tests": 40,
    "automated": 12,
    "percentage": 30.00,
    "by_suite": [ ... ],
    "unlinked_tests": [ ... ],
    "orphaned_automation": [ ... ],
    "broken_links": [ ... ]
  }
}
```

Measures which documentation files have at least one test case referencing them via `source_refs`.
For each doc in docs/, SPECTRA checks if any test case file has it in its source_refs frontmatter field.
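This matching logic can be sketched in a few lines of Python (an illustrative model, not SPECTRA's internals — the `doc_coverage` helper and the in-memory test-case dicts are assumptions; the real tool reads docs and YAML frontmatter from disk):

```python
def doc_coverage(docs, test_cases):
    """Compute documentation coverage: a doc is covered when at least
    one test case lists it in its source_refs frontmatter field."""
    details = []
    for doc in docs:
        test_ids = [tc["id"] for tc in test_cases
                    if doc in tc.get("source_refs", [])]
        details.append({"doc": doc, "test_count": len(test_ids),
                        "covered": bool(test_ids), "test_ids": test_ids})
    covered = sum(1 for d in details if d["covered"])
    pct = round(100.0 * covered / len(docs), 2) if docs else 0.0
    return {"total_docs": len(docs), "covered_docs": covered,
            "percentage": pct, "details": details}

# Example: two docs, one referenced by a test case
report = doc_coverage(
    ["docs/auth.md", "docs/admin.md"],
    [{"id": "TC-001", "source_refs": ["docs/auth.md"]}],
)
```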
Measures which acceptance criteria are covered by test cases.
When a criteria index file exists, SPECTRA cross-references the defined criteria with criteria fields in test case frontmatter. This reveals which criteria have no test cases.
When no criteria file exists, SPECTRA discovers criteria from test case frontmatter only and reports them as a flat list; `has_criteria_file` is `false`.
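Both modes can be sketched as follows (hypothetical re-implementation; the `criteria_coverage` helper and its input shapes are assumptions chosen to mirror the JSON report above):

```python
def criteria_coverage(index_criteria, test_cases):
    """Cross-reference an optional criteria index against the criteria
    fields found in test case frontmatter.  Without an index file, the
    criteria discovered in tests are reported as a flat list."""
    # Map criterion id -> test ids that claim to cover it
    by_criterion = {}
    for tc in test_cases:
        for cid in tc.get("criteria", []):
            by_criterion.setdefault(cid, []).append(tc["id"])

    has_file = index_criteria is not None
    if has_file:
        ids = [c["id"] for c in index_criteria]  # index defines the universe
    else:
        ids = sorted(by_criterion)               # flat list discovered from tests

    details = [{"id": cid, "tests": by_criterion.get(cid, []),
                "covered": cid in by_criterion} for cid in ids]
    covered = sum(1 for d in details if d["covered"])
    pct = round(100.0 * covered / len(ids), 2) if ids else 0.0
    return {"total": len(ids), "covered": covered, "percentage": pct,
            "has_criteria_file": has_file, "details": details}
```

Note that criteria referenced by tests but absent from the index only surface in the no-index mode; with an index, the index defines the set of criteria being measured.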
Create `docs/criteria/_criteria_index.yaml` (or use `spectra ai analyze --extract-criteria` to auto-generate):

```yaml
criteria:
  - id: AC-001
    title: "User can log in with valid credentials"
    source: docs/authentication.md
    priority: high
  - id: AC-002
    title: "System rejects invalid passwords"
    source: docs/authentication.md
    priority: high
  - id: AC-003
    title: "Admin panel access control"
    source: docs/admin.md
    priority: high
```

The path to this file is configured via `coverage.criteria_file` in `spectra.config.json`.
Measures which test cases have linked automation code (via automated_by field or code scanning).
Reports include:
- By suite: Per-suite automation percentages
- Unlinked test cases: Test cases with no automation reference
- Orphaned automation: Automation files referencing non-existent test cases
- Broken links: `automated_by` paths pointing to missing files
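The last three categories reduce to set operations between test case IDs and automation references. A minimal sketch, assuming flattened inputs (the `link_report` helper and its parameter shapes are made up for illustration):

```python
def link_report(test_ids, automated_by, automation_refs, existing_files):
    """Classify automation linkage problems.
    test_ids        -- all test case IDs in the suite
    automated_by    -- {test_id: automation_file_path} from frontmatter
    automation_refs -- {automation_file: [test_ids it references]}
    existing_files  -- set of automation files that exist on disk
    """
    # Unlinked: test cases with no automation reference at all
    unlinked = sorted(set(test_ids) - set(automated_by))
    # Orphaned: automation files whose referenced test IDs don't exist
    orphaned = sorted(f for f, refs in automation_refs.items()
                      if not set(refs) & set(test_ids))
    # Broken: automated_by paths that point at missing files
    broken = sorted(t for t, path in automated_by.items()
                    if path not in existing_files)
    return {"unlinked_tests": unlinked, "orphaned_automation": orphaned,
            "broken_links": broken}
```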
The --auto-link flag scans your automation code for test ID references and writes automated_by back into test case YAML frontmatter:
```shell
spectra ai analyze --coverage --auto-link
```

How it works:

- Scans files matching `file_extensions` in `automation_dirs`
- Matches test IDs using `scan_patterns` templates (e.g., `[TestCase("TC-001")]`)
- For each match, updates the test case file's `automated_by` frontmatter field
Scan patterns are templates where `{id}` is replaced with the test ID regex. Examples:

```json
{
  "coverage": {
    "scan_patterns": [
      "[TestCase(\"{id}\")]",
      "[ManualTestCase(\"{id}\")]",
      "@pytest.mark.manual_test(\"{id}\")",
      "groups = {\"{id}\"}"
    ],
    "file_extensions": [".cs", ".java", ".py", ".ts"]
  }
}
```

If `scan_patterns` is empty, SPECTRA falls back to the legacy `attribute_patterns` regex list.
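One way to turn such a template into a working regex is to escape the literal parts and substitute a test-ID pattern for the `{id}` placeholder. A sketch under the assumption that test IDs look like `TC-<digits>` (the `compile_scan_pattern` helper is hypothetical, not part of SPECTRA):

```python
import re

def compile_scan_pattern(template, id_regex=r"(TC-\d+)"):
    """Escape the literal template text, then swap the escaped {id}
    placeholder for a capturing test-ID regex."""
    return re.compile(re.escape(template).replace(re.escape("{id}"), id_regex))

pattern = compile_scan_pattern('[TestCase("{id}")]')
match = pattern.search('    [TestCase("TC-001")]')
```

Escaping first is what lets templates contain regex metacharacters like `[`, `(`, and `"` literally.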
When running spectra ai generate, the analysis step is coverage-aware for existing suites. Before identifying testable behaviors, the analyzer builds a coverage snapshot from:
- `_index.json`: Existing test titles, criteria links, and source refs
- `criteria.yaml` files: All acceptance criteria, cross-referenced against tests
- `docs/_index.md`: Documentation sections, cross-referenced against test source refs
The AI receives this coverage context and only recommends tests for genuine gaps — uncovered criteria and undocumented sections. For a mature suite with 231 tests covering 38/41 criteria, the analysis recommends ~8 new tests (the actual gap) instead of 139.
For suites with more than 500 tests, the analyzer switches to summary mode to conserve prompt tokens: only coverage statistics and uncovered items are sent, not the full title list.
New suites with no _index.json or criteria files work exactly as before — the coverage context is simply omitted.
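The gap-focused context and the summary-mode switch can be sketched like this (hypothetical; the 500-test threshold comes from the text above, while the function name and input shapes are illustrative):

```python
SUMMARY_MODE_THRESHOLD = 500  # above this, send only stats + uncovered items

def build_coverage_context(test_titles, criteria, covered_criteria_ids):
    """Assemble the coverage snapshot passed to the AI before generation.
    Returns None-equivalent behavior for new suites by passing empty inputs."""
    uncovered = [c for c in criteria if c["id"] not in covered_criteria_ids]
    context = {
        "total_tests": len(test_titles),
        "criteria_covered": f"{len(covered_criteria_ids)}/{len(criteria)}",
        "uncovered_criteria": uncovered,
    }
    # Summary mode: omit the full title list to conserve prompt tokens
    if len(test_titles) <= SUMMARY_MODE_THRESHOLD:
        context["existing_titles"] = test_titles
    return context
```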
Full coverage settings in spectra.config.json:
```json
{
  "coverage": {
    "automation_dirs": ["tests", "test", "spec", "e2e"],
    "scan_patterns": ["[TestCase(\"{id}\")]", "@pytest.mark.manual_test(\"{id}\")"],
    "file_extensions": [".cs", ".java", ".py", ".ts"],
    "criteria_file": "docs/criteria/_criteria_index.yaml"
  }
}
```

See Configuration Reference for all coverage options.
The dashboard Coverage tab provides four visualizations:
A test case health distribution chart at the top of the Coverage tab:
- Green — Automated test cases (have `automated_by`)
- Yellow — Manual-only test cases (have `source_refs` but no `automated_by`)
- Red — Unlinked test cases (neither `source_refs` nor `automated_by`)
- Center label shows total test case count; hover segments for tooltips
Three stacked cards — one per coverage type — with:
- Percentage and fill bar (green >= 80%, yellow >= 50%, red < 50%)
- "Show details" toggle that expands a per-item breakdown list
- Documentation: each doc file with test case count and covered/uncovered icon
- Acceptance Criteria: each criterion ID, title, linked test case IDs
- Automation: per-suite breakdown (suite name, automated/total, percentage)
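The fill-bar color rule is a simple threshold mapping; a one-function sketch (`card_color` is illustrative, not a real SPECTRA API):

```python
def card_color(percentage):
    """Map a coverage percentage to the dashboard card color:
    green at 80%+, yellow at 50-79%, red below 50%."""
    if percentage >= 80:
        return "green"
    if percentage >= 50:
        return "yellow"
    return "red"
```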
When coverage data is missing or unconfigured, cards show actionable messages:
- Acceptance Criteria: "No acceptance criteria tracked yet" with setup instructions
- Automation: "No automation links detected" with `--auto-link` instructions
- Documentation: "All documents have test case coverage!" success message when at 100%
A block visualization below the progress bars showing suites sized by test case count and colored by automation coverage:
- Green — >= 50% automated
- Yellow — > 0% but < 50% automated
- Red — 0% automated
- Hover for suite details; click to navigate to suite test case list