Version: 3.0.0

Quick Start: New Version Onboarding

Single-page runbook for onboarding a new product version. This condenses the 9 detailed version-update guides into one action-oriented checklist.

Audience: Anyone (human or LLM) starting a version transition.


Prerequisites

| What | Where to put it | Notes |
|------|-----------------|-------|
| New Confluence exports (.docx) | `input/` | Word format, not PDF/HTML |
| New code (git tag) | `code/` | Check out the new tag, e.g. `v3.1.0` |
| Note the old tag | | e.g. `v3.0.1` (the baseline for diffs) |
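
Before starting Phase 1, it can help to sanity-check this layout. A minimal sketch, assuming the `input/` and `code/` directories from the table; `ROOT` and the tag names are illustrative stand-ins, not project conventions:

```shell
#!/usr/bin/env sh
# Sketch: sanity-check prerequisites before starting Phase 1.
# ROOT and tag names are illustrative -- in the real repo this is the checkout root.
ROOT=/tmp/onboard-demo
OLD=v3.0.1
NEW=v3.1.0

mkdir -p "$ROOT/input" "$ROOT/code"   # the layout expected by the pipeline

# Warn (don't fail) when inputs are not staged yet.
ls "$ROOT"/input/*.docx >/dev/null 2>&1 || echo "WARN: no .docx exports in input/"
git -C "$ROOT/code" rev-parse "$NEW" >/dev/null 2>&1 || echo "WARN: tag $NEW not found in code/"

echo "prep check done ($OLD -> $NEW)"
```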

The Pipeline

```text
 PREP           Confluence .docx → input/     Code tag → code/
                                │
 PHASE 1        ┌───────────────┴───────────────┐
 Detect         │ Change Manifest (what's new)  │
                └───────────────┬───────────────┘
                                │ ← Human approves
 PHASE 2        ┌──────┬────────┼───────┬───────┐
 Update Docs    │ SRS  │  SDS   │  STD  │ Code  │  (parallel agents)
                └──────┴────────┬───────┴───────┘
                                │ ← Human approves
 PHASE 3        ┌───────────────┴───────────────┐
 Write Tests    │ Behat API + Browser scenarios │  (wave-based agents)
                └───────────────┬───────────────┘
                                │
 PHASE 4        ┌───────────────┴───────────────┐
 Validate       │ TM generation → Test run → TM │  (2 scripts)
                └───────────────┬───────────────┘
                                │
 PHASE 5        Freeze version
```

Phase 1: Detect Changes

Goal: Produce a Change Manifest listing every NEW / MODIFIED / DEPRECATED / REMOVED item.

1a. Convert Confluence exports (if any)

```bash
docusaurus/scripts/docx2adoc.sh input/new-export.docx input/new-export.adoc
```

Then classify new requirements using the SRS Authoring Guide.

1b. Generate code diff

```bash
OLD=v3.0.1
NEW=v3.1.0

# Overview
git diff $OLD..$NEW --stat

# By area
git diff $OLD..$NEW -- app/                  # backend
git diff $OLD..$NEW -- resources/js/         # frontend
git diff $OLD..$NEW -- database/migrations/  # schema
git diff $OLD..$NEW -- config/ routes/       # config + routes
git log $OLD..$NEW --oneline                 # commit history
```

1c. Find affected REQs

```bash
# Which existing requirements are touched by changed code?
git diff $OLD..$NEW --name-only -- app/ resources/js/ | \
  xargs grep -oh "REQ-[A-Z]*-[0-9]*" 2>/dev/null | sort -u
```

1d. Write the Change Manifest

Create `input/change-manifest-v{NEW}.md` with this structure:

```markdown
# Change Manifest: v3.0.1 → v3.1.0

| ID | Type | Description | Affected SRS | Affected SDS | Affected STD |
|----|------|-------------|--------------|--------------|--------------|
| CHG-001 | NEW | New export format | fileimport.md | sds-domain-fileimport.md | std-fileimport.md |
| CHG-002 | MOD | Auth timeout change | user-management.md | sds-05-security-architecture.md | std-user-management.md |
```

Human gate: Review and approve the manifest before proceeding.

Deep dive: Change Detection Guide


Phase 2: Update Documentation

Update docs in dependency order: SRS first (defines requirements), then SDS (design), then STD (tests), then Code DocRefs.

Each doc type can be parallelized internally (one agent per domain/file).

2a. SRS — Requirements

For each item in the Change Manifest, update the relevant `docusaurus/docs/srs/*.md` file:

| Change Type | Action |
|-------------|--------|
| NEW | Add REQ-DOMAIN-NNN using the next available ID. Never reuse deprecated IDs. |
| MOD | Update the existing REQ (keep the same ID). Add a Reviewer Notes entry. |
| DEP | Move to the archive section with a deprecation notice. |
| REM | Remove and note in Reviewer Notes. |

Key rules:

  • REQ IDs are immutable — never renumber
  • Every REQ needs: Statement, Acceptance Criteria, Traceability (Source)
  • Run `python3 docusaurus/scripts/add-req-anchors.py --dry-run` to verify anchors
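
Because IDs are immutable and deprecated IDs keep their numbers, "next available ID" means one past the highest number ever used in the domain. A hedged sketch of that scan; the demo directory and file contents below are throwaway stand-ins for `docusaurus/docs/srs/`, not project fixtures:

```shell
#!/usr/bin/env sh
# Sketch: compute the next available REQ ID for one domain.
# The demo directory stands in for docusaurus/docs/srs/.
mkdir -p /tmp/srs-demo
printf 'REQ-FILEIMPORT-001\nREQ-FILEIMPORT-003\n' > /tmp/srs-demo/fileimport.md

DOMAIN=FILEIMPORT
# Highest number ever used -- deprecated IDs count, since they are never reused.
last=$(grep -rhoE "REQ-$DOMAIN-[0-9]+" /tmp/srs-demo \
  | grep -oE '[0-9]+$' | sort -n | tail -1 | sed 's/^0*//')
printf 'next ID: REQ-%s-%03d\n' "$DOMAIN" $((last + 1))
# -> next ID: REQ-FILEIMPORT-004
```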

Deep dive: SRS Update Guide, SRS Authoring Guide

2b. SDS — Design Docs

Update docusaurus/docs/sds/ to match code changes:

| Changed Area | Update File |
|--------------|-------------|
| Architecture/components | `sds-03-architecture-overview.md` |
| Database schema | `sds-04-data-architecture.md` |
| Auth/permissions | `sds-05-security-architecture.md` |
| Domain behavior | `sds/domains/sds-domain-*.md` |
| Rule logic | `sds/rules/sds-rules-*.md` |
| API endpoints | `sds/reference/sds-ref-api.md` |
| Config options | `sds/reference/sds-ref-config.md` |

Key rules:

  • Add `{#anchor-name}` for new sections
  • Update Mermaid diagrams if architecture changed
  • Update `sds-master-index.md` for new sections
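
New headings that still lack an explicit anchor are easy to catch with grep. A minimal sketch; the sample file is a throwaway stand-in for `docusaurus/docs/sds/`:

```shell
#!/usr/bin/env sh
# Sketch: list markdown headings with no explicit {#anchor} suffix.
mkdir -p /tmp/sds-demo
cat > /tmp/sds-demo/sds-domain-example.md <<'EOF'
## Import Flow {#import-flow}
## New Export Format
EOF

# Headings without a trailing {#...} likely need an anchor added.
grep -rnE '^#{1,6} ' /tmp/sds-demo | grep -v '{#'
# -> /tmp/sds-demo/sds-domain-example.md:2:## New Export Format
```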

Deep dive: SDS Update Guide, SDS Authoring Guide

2c. STD — Test Specifications

For each new/modified REQ, update the corresponding docusaurus/docs/std/ file:

  • Add test vectors (TVs) for new requirements
  • Update TVs for modified requirements
  • Archive TVs for deprecated requirements
  • Coverage target: 100% REQ coverage, 80%+ AC coverage
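
The REQ-coverage target can be spot-checked mechanically by diffing the set of REQ IDs defined in the SRS against the set referenced by STD test vectors. A sketch with throwaway sample files (paths and contents are illustrative, not the real doc trees):

```shell
#!/usr/bin/env sh
# Sketch: REQs defined in the SRS but referenced by no STD test vector.
mkdir -p /tmp/cov-demo
printf 'REQ-AUTH-001\nREQ-AUTH-002\n'      > /tmp/cov-demo/srs.md
printf 'TV-AUTH-001 covers REQ-AUTH-001\n' > /tmp/cov-demo/std.md

grep -hoE 'REQ-[A-Z]+-[0-9]+' /tmp/cov-demo/srs.md | sort -u > /tmp/cov-demo/srs-ids
grep -hoE 'REQ-[A-Z]+-[0-9]+' /tmp/cov-demo/std.md | sort -u > /tmp/cov-demo/std-ids

# comm -23: lines only in the first file = REQs with no test vector
comm -23 /tmp/cov-demo/srs-ids /tmp/cov-demo/std-ids
# -> REQ-AUTH-002
```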

Deep dive: STD Update Guide

2d. Code DocRefs

Update `DocRef:` comments in code entry points:

```php
/**
 * DocRef:
 * - SRS: REQ-DOMAIN-NNN
 * - SDS: sds-domain-xxx.md#anchor
 */
```

  • New code implementing new REQs → add DocRef block
  • Modified code → update existing DocRef if REQ scope changed
  • Removed code → remove orphaned DocRef blocks
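
Orphaned blocks (a DocRef pointing at a REQ that no longer exists in the SRS) can be flagged by set difference. A sketch; the file names and contents are illustrative stand-ins for `app/` and `docusaurus/docs/srs/`:

```shell
#!/usr/bin/env sh
# Sketch: DocRef REQ references in code with no matching SRS entry.
mkdir -p /tmp/docref-demo
printf '* - SRS: REQ-USER-001\n* - SRS: REQ-USER-099\n' > /tmp/docref-demo/code.php
printf 'REQ-USER-001\n'                                 > /tmp/docref-demo/srs.md

grep -hoE 'REQ-[A-Z]+-[0-9]+' /tmp/docref-demo/code.php | sort -u > /tmp/docref-demo/code-ids
grep -hoE 'REQ-[A-Z]+-[0-9]+' /tmp/docref-demo/srs.md   | sort -u > /tmp/docref-demo/srs-ids

# IDs referenced in code but absent from the SRS = orphaned DocRefs
grep -vxF -f /tmp/docref-demo/srs-ids /tmp/docref-demo/code-ids
# -> REQ-USER-099
```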

Deep dive: Code DocRef Update

Human gate: Review all doc changes before proceeding to tests.


Phase 3: Write Tests

Implement the test vectors defined in Phase 2c as executable Behat scenarios.

Agent pattern

Use wave-based parallel agents. 8-10 agents per wave, one QR pass after each wave.

| Test Type | Guide | Max Parallel |
|-----------|-------|--------------|
| API (Behat) | Behat Creation Guide | 8-10 (limited by DB pool) |
| Browser (Mink) | Browser Test Guide | 8 (limited by ports) |

Per-wave cycle

1. Pre-assign: BT keys, DB slots, output filenames
2. Launch creation agents (one file per agent)
3. Wait for completion (never poll)
4. Launch QR agents (review tags, assertions, STD alignment)
5. Fix all WRONG items
6. Commit
7. Repeat until all TVs covered

Deep dive: Agent Orchestration Guide, STD Reconciliation Guide


Phase 4: Validate

This is where the two key scripts close the loop. Run them in order.

Step 1: Generate the Unified Traceability Matrix

```bash
python3 docusaurus/scripts/generate-unified-traceability.py --render-md
```

This ingests all sources and produces a single traceability JSON + markdown views:

| Source | What it reads |
|--------|---------------|
| SRS files (`docs/srs/`) | REQ registry — every requirement ID |
| SDS files (`docs/sds/`) | Design links — which SDS section covers which REQ |
| STD files (`docs/std/`) | Test vectors — which TVs cover which REQ |
| `code_tags.json` | Code traceability — which code files implement which REQ |
| `feature_catalogue.json` | Test mapping — which Behat scenarios test which TV |

Output:

  • tests/catalogue/unified-traceability.json — machine-readable source of truth
  • docusaurus/docs/traceability/unified-traceability-matrix.md — full TM view
  • docusaurus/docs/traceability/unified-coverage-report.md — coverage summary
  • docusaurus/docs/traceability/unified-release-checklist.md — release readiness

Any REQ with missing columns = a gap to fix.

Step 2: Run the full test suite + map results to TM

```bash
python3 tests/scripts/behat-optimizer.py run \
  --workers 10 \
  --pre-migrate \
  --traceability tests/catalogue/unified-traceability.json \
  --render-md docusaurus/docs/traceability/str-release-test-report.md
```

This single command does everything:

  • Scans ~275 feature files, groups by config (~189 groups)
  • Migrates one template DB, clones to 10 pool DBs
  • Generates consolidated .feature files with @USE_SAME_CONFIG
  • Runs all ~760 scenarios across 10 parallel workers (~25-30 min)
  • Writes results.json + traceability-report.json to /tmp/behat-logs/
  • Prints the traceability report to terminal
  • Writes the markdown STR to the --render-md path
  • Cleans up consolidated files afterward

Step 3: Review the results

The traceability report shows:

  • Per-REQ: which tests cover it, pass/fail status
  • Gaps: REQs with no test coverage
  • Failures: tests that ran but didn't pass

Fix gaps and failures, then re-run Steps 1-2 until clean.

Step 4: Validate docs build

```bash
# Broken links (target: 0)
cd docusaurus && npm run build 2>&1 | grep -c "couldn't be resolved"

# Mermaid diagrams (target: 0 errors)
bash docusaurus/scripts/validate-mermaid.sh docusaurus/docs

# Jira references (convert BT-*, CST-*, PCRAI-* to links)
./docusaurus/scripts/link-jira-refs.sh -n   # dry-run first
./docusaurus/scripts/link-jira-refs.sh      # apply
```

Optional: Regenerate report without re-running tests

```bash
python3 tests/scripts/behat-optimizer.py report \
  --traceability tests/catalogue/unified-traceability.json \
  --render-md docusaurus/docs/traceability/str-release-test-report.md
```

Optional: JSON output for programmatic use

```bash
python3 tests/scripts/behat-optimizer.py report \
  --traceability tests/catalogue/unified-traceability.json \
  --json --output /tmp/behat-logs/traceability-report.json
```

Phase 5: Freeze Version

```bash
# 1. Create Docusaurus versioned snapshot
cd docusaurus && npm run docusaurus docs:version 3.1.0

# 2. Tag the repo
git tag v3.1.0-docs-complete

# 3. Update CLAUDE.md status section with new version info
```

Quick Reference: All Commands

| What | Command |
|------|---------|
| Convert Confluence export | `docusaurus/scripts/docx2adoc.sh input/file.docx input/file.adoc` |
| Generate traceability matrix | `python3 docusaurus/scripts/generate-unified-traceability.py --render-md` |
| Validate traceability | `python3 docusaurus/scripts/generate-unified-traceability.py --validate` |
| Run full test suite + TM report | `python3 tests/scripts/behat-optimizer.py run --workers 10 --pre-migrate --traceability tests/catalogue/unified-traceability.json --render-md docusaurus/docs/traceability/str-release-test-report.md` |
| Regenerate report (no re-run) | `python3 tests/scripts/behat-optimizer.py report --traceability tests/catalogue/unified-traceability.json --render-md docusaurus/docs/traceability/str-release-test-report.md` |
| Check broken links | `cd docusaurus && npm run build 2>&1 \| grep -c "couldn't be resolved"` |
| Validate Mermaid diagrams | `bash docusaurus/scripts/validate-mermaid.sh docusaurus/docs` |
| Convert Jira refs to links | `./docusaurus/scripts/link-jira-refs.sh` |
| Preview req anchor additions | `python3 docusaurus/scripts/add-req-anchors.py --dry-run` |
| Preview link fixes | `python3 docusaurus/scripts/fix-docusaurus-links.py --docs-root docusaurus/docs --dry-run` |

Quick Reference: behat-optimizer Flags

| Flag | What it does |
|------|--------------|
| `--workers 10` | Use all 10 pool DBs in parallel |
| `--pre-migrate` | Migrate once, clone schema to all DBs (fast) |
| `--traceability PATH` | Map results to TM after run |
| `--render-md PATH` | Write markdown STR file |
| `--dry-run` | Preview execution plan without running |
| `--skip-list tests/scripts/behat-skip-list.json` | Skip superseded scenarios |
| `--permanent` | Keep consolidated files after run |
| `--log-dir /path` | Custom log dir (default: `/tmp/behat-logs`) |
| `--json --output PATH` | JSON output for programmatic use |

Detailed Guides (Reference)

Only consult these when you need specifics for a particular step:

| Guide | When to read it |
|-------|-----------------|
| Version Update Workflow | Full 7-phase details with exit criteria |
| Change Detection | Inventory extraction patterns, domain mapping |
| SRS Update Guide | REQ ID rules, format, deprecation mechanics |
| SDS Update Guide | Design doc conventions, truth tables |
| STD Update Guide | Test vector format, coverage targets |
| Code DocRef Update | DocRef format, validation scripts |
| Agent Orchestration | Parallel agents, DB pool, QR protocol, prompt templates |
| Validation Checklist | Full pre-release quality gates, sign-off template |
| Confluence Conversion | 6-stage Pandoc pipeline for .docx |
| Behat Creation | 40 gotchas, config pitfalls, subagent patterns |
| Browser Tests | Mink/Chrome, step defs, parallel execution |
| STD Reconciliation | Mapping TVs to existing Behat tests |