Quick Start: New Version Onboarding
Single-page runbook for onboarding a new product version. It condenses the detailed version-update guides into one action-oriented checklist.
Audience: Anyone (human or LLM) starting a version transition.
Prerequisites
| What | Where to put it | Notes |
|---|---|---|
| New Confluence exports (.docx) | input/ | Word format, not PDF/HTML |
| New code (git tag) | code/ | Check out the new tag, e.g. v3.1.0 |
| Note old tag | — | e.g. v3.0.1 (the baseline for diffs) |
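The prerequisites above can be sanity-checked before starting. This is a convenience sketch of ours (the `preflight` helper is not part of the official pipeline); the paths come straight from the table:

```shell
# Pre-flight sketch: warn about any missing prerequisite from the table above.
preflight() {
  [ -d input ] || echo "WARN: input/ directory missing"
  ls input/*.docx >/dev/null 2>&1 || echo "WARN: no .docx exports in input/"
  git -C code describe --tags >/dev/null 2>&1 || echo "WARN: code/ is not at a git tag"
}
preflight
```

Run it from the repo root; an empty output means all three prerequisites are in place.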
The Pipeline
```
PREP          Confluence .docx → input/        Code tag → code/
                         │                            │
PHASE 1       ┌───────────────────────────────────┐
Detect        │   Change Manifest (what's new)    │
              └──────────────┬────────────────────┘
                             │  ← Human approves
PHASE 2       ┌──────┬───────┼──────┬──────┐
Update Docs   │ SRS  │ SDS   │ STD  │ Code │   (parallel agents)
              └──────┴───────┴──┬───┴──────┘
                                │  ← Human approves
PHASE 3       ┌─────────────────┴──────────────┐
Write Tests   │ Behat API + Browser scenarios  │   (wave-based agents)
              └─────────────────┬──────────────┘
                                │
PHASE 4       ┌─────────────────┴──────────────┐
Validate      │ TM generation → Test run → TM  │   (2 scripts)
              └─────────────────┬──────────────┘
                                │
PHASE 5       Freeze version
```
Phase 1: Detect Changes
Goal: Produce a Change Manifest listing every NEW / MODIFIED / DEPRECATED / REMOVED item.
1a. Convert Confluence exports (if any)
```bash
docusaurus/scripts/docx2adoc.sh input/new-export.docx input/new-export.adoc
```
Then classify new requirements using the SRS Authoring Guide.
1b. Generate code diff
```bash
OLD=v3.0.1
NEW=v3.1.0

# Overview
git diff $OLD..$NEW --stat

# By area
git diff $OLD..$NEW -- app/                  # backend
git diff $OLD..$NEW -- resources/js/         # frontend
git diff $OLD..$NEW -- database/migrations/  # schema
git diff $OLD..$NEW -- config/ routes/       # config + routes
git log $OLD..$NEW --oneline                 # commit history
```
1c. Find affected REQs
# Which existing requirements are touched by changed code?
```bash
git diff $OLD..$NEW --name-only -- app/ resources/js/ | \
  xargs grep -oh "REQ-[A-Z]*-[0-9]*" 2>/dev/null | sort -u
```
1d. Write the Change Manifest
Create input/change-manifest-v{NEW}.md with this structure:
```markdown
# Change Manifest: v3.0.1 → v3.1.0

| ID | Type | Description | Affected SRS | Affected SDS | Affected STD |
|----|------|-------------|--------------|--------------|--------------|
| CHG-001 | NEW | New export format | fileimport.md | sds-domain-fileimport.md | std-fileimport.md |
| CHG-002 | MOD | Auth timeout change | user-management.md | sds-05-security-architecture.md | std-user-management.md |
```
Human gate: Review and approve the manifest before proceeding.
Deep dive: Change Detection Guide
Phase 2: Update Documentation
Update docs in dependency order: SRS first (defines requirements), then SDS (design), then STD (tests), then code mappings.
Each doc type can be parallelized internally (one agent per domain/file).
2a. SRS — Requirements
For each item in the Change Manifest, update the relevant docusaurus/docs/srs/*.md file:
| Change Type | Action |
|---|---|
| NEW | Add REQ-DOMAIN-NNN using next available ID. Never reuse deprecated IDs. |
| MOD | Update existing REQ (keep same ID). Add Reviewer Notes entry. |
| DEP | Move to archive section with deprecation notice. |
| REM | Remove and note in Reviewer Notes. |
Key rules:
- REQ IDs are immutable — never renumber
- Every REQ needs: Statement, Acceptance Criteria, Traceability (Source)
- Run `python3 docusaurus/scripts/add-req-anchors.py --dry-run` to verify anchors
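Finding the next available ID for a NEW requirement can be scripted. A minimal sketch, assuming REQ IDs appear verbatim in the SRS markdown (the `next_req_id` helper name is ours): because it takes the highest existing number, deprecated IDs are never reused.

```shell
# Sketch: print the next free REQ ID for a domain across the given SRS files.
next_req_id() {  # usage: next_req_id DOMAIN srs-file...
  local domain="$1"; shift
  local max
  # Highest number already used in this domain, deprecated IDs included
  max=$(grep -hoE "REQ-${domain}-[0-9]+" "$@" 2>/dev/null \
        | grep -oE '[0-9]+$' | sort -n | tail -1)
  max=${max:-0}
  # 10# forces base-10 so zero-padded IDs like 007 are not read as octal
  printf 'REQ-%s-%03d\n' "$domain" "$(( 10#$max + 1 ))"
}
```

For example, `next_req_id FILEIMPORT docusaurus/docs/srs/*.md` prints the next free ID in that domain.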
Deep dive: SRS Update Guide, SRS Authoring Guide
2b. SDS — Design Docs
Update docusaurus/docs/sds/ to match code changes:
| Changed Area | Update File |
|---|---|
| Architecture/components | sds-03-architecture-overview.md |
| Database schema | sds-04-data-architecture.md |
| Auth/permissions | sds-05-security-architecture.md |
| Domain behavior | sds/domains/sds-domain-*.md |
| Rule logic | sds/rules/sds-rules-*.md |
| API endpoints | sds/reference/sds-ref-api.md |
| Config options | sds/reference/sds-ref-config.md |
Key rules:
- Add `{#anchor-name}` for new sections
- Update Mermaid diagrams if architecture changed
- Update `sds-master-index.md` for new sections
Deep dive: SDS Update Guide, SDS Authoring Guide
2c. STD — Test Specifications
For each new/modified REQ, update the corresponding docusaurus/docs/std/ file:
- Add test vectors (TVs) for new requirements
- Update TVs for modified requirements
- Archive TVs for deprecated requirements
- Coverage targets: 100% REQ coverage, 80%+ AC coverage
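One way to spot uncovered requirements before the full Phase 4 matrix run is a quick diff of REQ IDs, assuming IDs appear verbatim in both SRS and STD files (`uncovered_reqs` is our own helper name, not project tooling):

```shell
# Sketch: list REQ IDs that appear in SRS files but in no STD file.
# Uses bash process substitution; directories are this runbook's layout.
uncovered_reqs() {  # usage: uncovered_reqs SRS_DIR STD_DIR
  comm -23 \
    <(grep -rhoE 'REQ-[A-Z]+-[0-9]+' "$1" | sort -u) \
    <(grep -rhoE 'REQ-[A-Z]+-[0-9]+' "$2" | sort -u)
}
```

For example, `uncovered_reqs docusaurus/docs/srs docusaurus/docs/std` prints every REQ still needing a test vector; empty output means 100% REQ coverage by this rough measure.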
Deep dive: STD Update Guide
2d. Code Mappings
Update tests/catalogue/code_tags.json with new file-to-requirement mappings:
```json
{
  "by_file": {
    "app/Http/Controllers/NewController.php": {
      "srs": ["REQ-DOMAIN-NNN"],
      "sdd": ["sds-domain-xxx.md#anchor"]
    }
  }
}
```
- New code implementing new REQs → add entries to all three views (`by_file`, `by_srs`, `by_sdd`)
- Modified code → update existing entries if REQ scope changed
- Removed code → remove orphaned entries from all three views
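A quick cross-view consistency check can catch drift between the views. This is a hypothetical helper (name and check are ours) based on the schema shown above: it flags REQs listed under `by_file` that have no `by_srs` entry.

```shell
# Sketch: flag REQs referenced in by_file that are missing from by_srs.
check_code_tags() {  # usage: check_code_tags path/to/code_tags.json
  python3 - "$1" <<'PY'
import json, sys

data = json.load(open(sys.argv[1]))
# All REQs claimed by any file in the by_file view
in_files = {r for v in data.get("by_file", {}).values() for r in v.get("srs", [])}
# Anything not also a key in by_srs is an orphan
orphans = sorted(in_files - set(data.get("by_srs", {})))
print("orphaned REQs:", ", ".join(orphans) if orphans else "none")
PY
}
```

The same pattern extends to `by_sdd`; run it after every mapping edit so the three views never drift apart.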
Deep dive: Code Mapping Update
Human gate: Review all doc changes before proceeding to tests.
Phase 3: Write Tests
Implement the test vectors defined in Phase 2c as executable Behat scenarios.
Agent pattern
Use wave-based parallel agents. 8-10 agents per wave, one QR pass after each wave.
| Test Type | Guide | Max Parallel |
|---|---|---|
| API (Behat) | Behat Creation Guide | 8-10 (limited by DB pool) |
| Browser (Mink) | Browser Test Guide | 8 (limited by ports) |
Per-wave cycle
1. Pre-assign: BT keys, DB slots, output filenames
2. Launch creation agents (one file per agent)
3. Wait for completion (never poll)
4. Launch QR agents (review tags, assertions, STD alignment)
5. Fix all WRONG items
6. Commit
7. Repeat until all TVs covered
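Step 1's pre-assignment can be as simple as printing a fixed roster before launching agents. The slot and file names below are illustrative, not the project's real naming:

```shell
# Sketch: pre-assign a DB slot and output file to each agent in a wave.
assign_wave() {  # usage: assign_wave WAVE_NUM AGENT_COUNT
  local wave="$1" n="$2" i
  for i in $(seq 1 "$n"); do
    echo "agent-$i: db=behat_pool_$i out=features/wave${wave}/agent-${i}.feature"
  done
}
assign_wave 1 8
```

Pinning the DB slot and output filename per agent up front is what lets a wave run in parallel without collisions.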
Deep dive: Agent Orchestration Guide, STD Reconciliation Guide
Phase 4: Validate
This is where the two key scripts close the loop. Run them in order.
Step 1: Generate the Unified Traceability Matrix
```bash
python3 docusaurus/scripts/generate-unified-traceability.py --render-md
```
This ingests all sources and produces a single traceability JSON + markdown views:
| Source | What it reads |
|---|---|
| SRS files (docs/srs/) | REQ registry — every requirement ID |
| SDS files (docs/sds/) | Design links — which SDS section covers which REQ |
| STD files (docs/std/) | Test vectors — which TVs cover which REQ |
| code_tags.json | Code traceability — which code files implement which REQ |
| feature_catalogue.json | Test mapping — which Behat scenarios test which TV |
Output:
- `tests/catalogue/unified-traceability.json` — machine-readable source of truth
- `docusaurus/docs/traceability/unified-traceability-matrix.md` — full TM view
- `docusaurus/docs/traceability/unified-coverage-report.md` — coverage summary
- `docusaurus/docs/traceability/unified-release-checklist.md` — release readiness
Any REQ with missing columns = a gap to fix.
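A rough way to surface such gaps from the rendered matrix, assuming one REQ per table row (`tm_gaps` is our own sketch; adapt the pattern to the real TM layout):

```shell
# Sketch: list REQ IDs whose row in the rendered TM markdown has an empty cell.
tm_gaps() {  # usage: tm_gaps path/to/unified-traceability-matrix.md
  grep -E '^\| *REQ-' "$1" \
    | grep -E '\| *\|' \
    | grep -oE 'REQ-[A-Z]+-[0-9]+' | sort -u
}
```

Empty cells show up as adjacent pipes, so the scan lists each REQ whose row has at least one blank column.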
Step 2: Run the full test suite + map results to TM
```bash
python3 tests/scripts/behat-optimizer.py run \
  --workers 10 \
  --pre-migrate \
  --traceability tests/catalogue/unified-traceability.json \
  --render-md docusaurus/docs/traceability/str-release-test-report.md
```
This single command does everything:
- Scans ~275 feature files, groups by config (~189 groups)
- Migrates one template DB, clones to 10 pool DBs
- Generates consolidated `.feature` files with `@USE_SAME_CONFIG`
- Runs all ~760 scenarios across 10 parallel workers (~25-30 min)
- Writes `results.json` + `traceability-report.json` to `/tmp/behat-logs/`
- Prints the traceability report to the terminal
- Writes the markdown STR to the `--render-md` path
- Cleans up consolidated files after the run
Step 3: Review the results
The traceability report shows:
- Per-REQ: which tests cover it, pass/fail status
- Gaps: REQs with no test coverage
- Failures: tests that ran but didn't pass
Fix gaps and failures, then re-run Steps 1-2 until clean.
Step 4: Validate docs build
```bash
# Broken links (target: 0)
cd docusaurus && npm run build 2>&1 | grep -c "couldn't be resolved"

# Mermaid diagrams (target: 0 errors)
bash docusaurus/scripts/validate-mermaid.sh docusaurus/docs

# Jira references (convert BT-*, CST-*, PCRAI-* to links)
./docusaurus/scripts/link-jira-refs.sh -n   # dry-run first
./docusaurus/scripts/link-jira-refs.sh      # apply
```
Optional: Regenerate report without re-running tests
```bash
python3 tests/scripts/behat-optimizer.py report \
  --traceability tests/catalogue/unified-traceability.json \
  --render-md docusaurus/docs/traceability/str-release-test-report.md
```
Optional: JSON output for programmatic use
```bash
python3 tests/scripts/behat-optimizer.py report \
  --traceability tests/catalogue/unified-traceability.json \
  --json --output /tmp/behat-logs/traceability-report.json
```
Phase 5: Freeze Version
```bash
# 1. Create Docusaurus versioned snapshot
cd docusaurus && npm run docusaurus docs:version 3.1.0

# 2. Tag the repo
git tag v3.1.0-docs-complete

# 3. Update CLAUDE.md status section with the new version info
```
Quick Reference: All Commands
| What | Command |
|---|---|
| Convert Confluence export | `docusaurus/scripts/docx2adoc.sh input/file.docx input/file.adoc` |
| Generate traceability matrix | `python3 docusaurus/scripts/generate-unified-traceability.py --render-md` |
| Validate traceability | `python3 docusaurus/scripts/generate-unified-traceability.py --validate` |
| Run full test suite + TM report | `python3 tests/scripts/behat-optimizer.py run --workers 10 --pre-migrate --traceability tests/catalogue/unified-traceability.json --render-md docusaurus/docs/traceability/str-release-test-report.md` |
| Regenerate report (no re-run) | `python3 tests/scripts/behat-optimizer.py report --traceability tests/catalogue/unified-traceability.json --render-md docusaurus/docs/traceability/str-release-test-report.md` |
| Check broken links | `cd docusaurus && npm run build 2>&1 \| grep -c "couldn't be resolved"` |
| Validate Mermaid diagrams | `bash docusaurus/scripts/validate-mermaid.sh docusaurus/docs` |
| Convert Jira refs to links | `./docusaurus/scripts/link-jira-refs.sh` |
| Preview req anchor additions | `python3 docusaurus/scripts/add-req-anchors.py --dry-run` |
| Preview link fixes | `python3 docusaurus/scripts/fix-docusaurus-links.py --docs-root docusaurus/docs --dry-run` |
Quick Reference: behat-optimizer Flags
| Flag | What it does |
|---|---|
| `--workers 10` | Use all 10 pool DBs in parallel |
| `--pre-migrate` | Migrate once, clone schema to all DBs (fast) |
| `--traceability PATH` | Map results to the TM after the run |
| `--render-md PATH` | Write the markdown STR file |
| `--dry-run` | Preview the execution plan without running |
| `--skip-list tests/scripts/behat-skip-list.json` | Skip superseded scenarios |
| `--permanent` | Keep consolidated files after the run |
| `--log-dir /path` | Custom log dir (default: `/tmp/behat-logs`) |
| `--json --output PATH` | JSON output for programmatic use |
Detailed Guides (Reference)
Only consult these when you need specifics for a particular step:
| Guide | When to read it |
|---|---|
| Version Update Workflow | Full 7-phase details with exit criteria |
| Change Detection | Inventory extraction patterns, domain mapping |
| SRS Update Guide | REQ ID rules, format, deprecation mechanics |
| SDS Update Guide | Design doc conventions, truth tables |
| STD Update Guide | Test vector format, coverage targets |
| Code Mapping Update | code_tags.json format, validation scripts |
| Agent Orchestration | Parallel agents, DB pool, QR protocol, prompt templates |
| Validation Checklist | Full pre-release quality gates, sign-off template |
| Confluence Conversion | 6-stage Pandoc pipeline for .docx |
| Behat Creation | 40 gotchas, config pitfalls, subagent patterns |
| Browser Tests | Mink/Chrome, step defs, parallel execution |
| STD Reconciliation | Mapping TVs to existing Behat tests |