Version: 3.0.1

Version Update Workflow

Master guide for updating all documentation when transitioning between product versions.

Overview

This workflow ensures all documentation stays synchronized when creating a new version (e.g., v3.0 → v3.1). It covers SRS (requirements), SDS (design), STD (tests), and code-to-requirement mappings.


Phase 0: Confluence Ingestion (Conditional)

Goal: Convert raw Confluence exports into structured input for the update pipeline.

When to run: Only if the new version includes requirements documented in Confluence that haven't been converted to SRS format yet. Skip this phase if all changes come from code diffs and existing structured docs.

Reference: See SRS Confluence Conversion Guide for the full 6-stage pipeline.

Pipeline:

Confluence Export (.docx) → Pandoc+Lua (.adoc) → Source Analysis → Classification → Transformation → Validation

Key Steps:

  1. Export page tree from Confluence as Word (.docx) — not PDF, not HTML
  2. Run docusaurus/scripts/docx2adoc.sh input.docx output.adoc to convert to AsciiDoc (preserves tables that Markdown breaks)
  3. Inventory and classify content using SRS Authoring Guide taxonomy
  4. Transform into canonical SRS format
  5. Validate with Confluence Validation Spec (44 automated checks)
  6. Integrate into docusaurus/docs/srs/

Why AsciiDoc? Confluence exports contain complex tables (merged cells, nested content). Pandoc's AsciiDoc writer preserves table structure faithfully. Pandoc's Markdown writer collapses it.
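
The wrapper script hides the details, but conceptually the conversion is a single Pandoc call. This sketch assumes the flags; the real docx2adoc.sh may add a Lua filter and other options:

```shell
# Sketch of the conversion docx2adoc.sh performs. The exact flags are an
# assumption; the real script may add a Lua filter for table cleanup.
convert_docx() {
  local in="$1" out="$2"
  # --wrap=none keeps long table cells on one line, which AsciiDoc handles well
  pandoc "$in" --from docx --to asciidoc --wrap=none --output "$out"
}
```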

Output: Structured SRS files ready for the Impact Analysis phase.


Phase 0.5: Code Upgrade

Goal: Get the actual new-version code running locally so all subsequent phases work against real behavior.

When to run: Always. You need v{NEW} code to analyze changes, run tests, and verify documentation accuracy.

Pre-Flight Checklist

Before touching any code, complete these steps in order:

  1. Commit all local changes — Run git status and commit everything. During the v3.0.1 upgrade, uncommitted local changes made it difficult to distinguish agent contamination from intentional modifications.
  2. Back up .env — Copy code/.env to code/.env.test-baseline before any changes. This preserves Cognito vars, DB config, Pusher keys, and other environment-specific settings that are easy to lose during a merge.
  3. Inventory local modifications — List all files with local changes beyond upstream (see Known Local Changes below). These files require manual merges rather than simple overwrites.
  4. Verify gh CLI access — Confirm you can fetch upstream files: gh api repos/ORG/REPO/contents/README.md?ref=v{NEW} -H 'Accept: application/vnd.github.raw' > /dev/null
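
The first two checklist items can be scripted. A minimal sketch, assuming this repo's code/.env layout (the function name is ours):

```shell
# Pre-flight sketch: refuse to proceed with a dirty tree, then back up .env.
preflight() {
  local repo_dir="$1"
  # Item 1: uncommitted changes make contamination audits ambiguous later
  if [ -n "$(git -C "$repo_dir" status --porcelain)" ]; then
    echo "ERROR: uncommitted changes; commit before upgrading" >&2
    return 1
  fi
  # Item 2: preserve env-specific settings before any merge can touch them
  if [ -f "$repo_dir/code/.env" ]; then
    cp "$repo_dir/code/.env" "$repo_dir/code/.env.test-baseline"
    echo "backed up code/.env"
  fi
  echo "pre-flight OK"
}
```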

Known Local Changes to Merge

These files have local modifications that must be preserved during upgrade. Do NOT overwrite them blindly from upstream:

| File | Local Change | Merge Strategy |
|---|---|---|
| code/features/bootstrap/FeatureContext.php | Browser test step definitions (ours) + upstream new steps | Manual merge — keep our steps, add theirs |
| code/features/bootstrap/BrowserContext.php | Entirely ours — browser test infrastructure | Do not overwrite (not in upstream) |
| code/composer.json / code/composer.lock | Browser testing packages (dmore/chrome-mink-driver, etc.) | Merge dependency changes carefully |
| code/database/seeders/UsersTableSeeder.php | logged_in_site_id fix for strict :string return | Check if upstream fixed this; if not, preserve our fix |
| code/.env | Cognito env vars, Pusher keys, DB config for browser tests | Never overwrite — merge new vars into existing |
| code/behat.yml | Browser suite configuration | Merge if upstream modifies |

Process

  1. Fetch files from upstream tag

    # Fetch individual files (preserves directory structure)
    gh api 'repos/ORG/REPO/contents/PATH?ref=v{NEW}' \
    -H 'Accept: application/vnd.github.raw' > local/PATH

    # Or fetch commit list to understand what changed
    git log v{OLD}..v{NEW} --oneline
  2. Merge local changes — If you have local modifications (e.g., FeatureContext.php with custom browser steps), merge carefully:

    • Identify files modified locally vs. upstream
    • For shared files (e.g., FeatureContext.php), merge both sets of changes
    • For upstream-only files, overwrite local copies
  3. Frontend rebuild (if frontend files changed)

    cd code/resources/frontend && npm install && npm run build
  4. Run migrations across all pool DBs

    for i in $(seq -w 1 10); do
      DB_HOST=127.0.0.1 DB_AUDIT_HOST=127.0.0.1 DB_DATABASE=pcrai_test_$i \
      DB_USERNAME=sail DB_PASSWORD=password \
      php artisan migrate:fresh \
        --path=database/migrations \
        --path=database/migrations/app \
        --path=database/migrations/audit --seed
    done
  5. Code contamination audit — Verify all fetched files match upstream exactly. Watch for agent-added debug code, extra docblocks, or stale local modifications that weren't cleaned up.

  6. Create .env.test-baseline — Backup of working test environment config before any further changes:

    cp code/.env code/.env.test-baseline
  7. Update test infrastructure — After code is upgraded:

    • Update KNOWN_VERSION_TAGS and LATEST_VERSION_TAG in tests/scripts/behat-optimizer.py to include the new version tag (e.g., V3_0_1)
    • Run php artisan migrate:fresh --path=database/migrations --path=database/migrations/app --path=database/migrations/audit --seed on ALL pool DBs (01-10), not just one
    • Rebuild frontend if any JS/Vue files changed: cd code/resources/frontend && npm install && npm run build
    • Verify no code contamination: compare all fetched files against upstream tag content using gh api to spot agent-added debug code, extra docblocks, or stale local edits
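
Step 5's contamination audit boils down to a byte-for-byte comparison. A sketch (the helper name is ours; obtaining the upstream copy, e.g. via the gh api call shown above, is left to the caller):

```shell
# Contamination-audit sketch: compare a local file against an upstream copy.
contamination_check() {
  local upstream="$1" local_file="$2"
  if diff -q "$upstream" "$local_file" > /dev/null; then
    echo "CLEAN: $local_file"
  else
    echo "DIFFERS: $local_file (inspect for debug code or stale edits)"
    return 1
  fi
}
```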

Lesson learned (Session 35): The code upgrade is a prerequisite for all subsequent phases. Without v{NEW} code, change detection operates on stale diffs, test runs use old behavior, and documentation risks describing the wrong version.

Lesson learned (Session 36): Code contamination is a real risk. During v3.0.1 upgrade, 3 PHP files contained agent-added debug file_put_contents() calls and 3 debug log files (22MB) were created. Always run a contamination audit after agent-driven code changes.


Phase 1: Change Detection

Goal: Identify all changes between versions and produce a Change Manifest.

Inputs:

  • PRD or product specification updates
  • Ideas/instructions documents
  • Code diffs between version tags
  • Test diffs (when STD is established)

Process:

  1. Review PRD/Ideas

    • Check input/ for new specification documents
    • Review any stakeholder change requests
    • Document new features, modified behaviors, deprecations
  2. Analyze Code Changes

    # Get high-level diff stats
    git diff v{OLD}..v{NEW} --stat

    # Backend changes
    git diff v{OLD}..v{NEW} -- app/

    # Frontend changes
    git diff v{OLD}..v{NEW} -- resources/js/

    # Commit history
    git log v{OLD}..v{NEW} --oneline
  3. Review Test Changes

    • Identify new, modified, or removed Behat feature files
    • Check for changed test fixtures (tests/support_files/)
    • Note any new @TV- or @TC- tags in feature files
    git diff v{OLD}..v{NEW} -- tests/
    git diff v{OLD}..v{NEW} -- code/features/
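
The manifest can be seeded mechanically from the commit log. A sketch, assuming a tab-separated CHG-NNN / type / description layout (adjust to the actual manifest template; run from the code repo root):

```shell
# Sketch: seed Change Manifest rows from commit subjects between two tags.
# The CHG-NNN numbering and the TBD type column are assumptions.
manifest_skeleton() {
  local old_tag="$1" new_tag="$2"
  git log "$old_tag..$new_tag" --oneline |
    awk '{ $1=""; sub(/^ /,""); printf "CHG-%03d\tTBD\t%s\n", NR, $0 }'
}
```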

Output: Change Manifest (see Change Detection for template)


Phase 2: Impact Analysis

Goal: Map each change to affected documentation.

Process:

  1. For each item in Change Manifest, determine:

    • Which SRS domain(s) are affected
    • Which SDS section(s) are affected
    • Which STD test plans are affected
    • Whether change is cross-cutting (security, audit, config)
  2. Create Impact Matrix:

| Change ID | Type | Description | SRS Impact | SDS Impact | STD Impact |
|---|---|---|---|---|---|
| CHG-001 | NEW | New export format | fileimport.md | sds-domain-fileimport.md | std-fileimport.md |
| CHG-002 | MOD | Auth flow change | user-management.md | sds-05-security-architecture.md | std-user-management.md |
| CHG-003 | DEP | Legacy API removed | client-config.md | sds-ref-api.md | std-client-config.md |
  3. Identify cross-cutting concerns:
    • Security changes → sds-05-security-architecture.md + affected SRS domains
    • Config changes → sds-ref-config.md + kitcfg.md/client-config.md
    • Audit changes → sds-domain-audit.md + audit-log.md
    • API changes → sds-ref-api.md + affected SRS domains

Phase 3: SRS Updates

Goal: Update requirements documentation.

Reference: See SRS Update Guide for detailed instructions.

Summary of Actions:

| Change Type | Action |
|---|---|
| NEW feature | Add new REQ-DOMAIN-NNN using SRS authoring guide |
| MODIFIED behavior | Update existing requirement (preserve ID) |
| DEPRECATED feature | Move to archive section with deprecation note |

Key Rules:

  • REQ IDs are immutable - never change an existing ID
  • Document all changes in Reviewer Notes section
  • Update Traceability Matrix for each change
  • Add Implementation section if code exists

Phase 4: SDS Updates

Goal: Update design documentation.

Reference: See SDS Update Guide for detailed instructions.

Mapping:

| SDS File | Update For |
|---|---|
| sds-03-architecture-overview.md | Component changes, system boundaries |
| sds-04-data-architecture.md | ERD, schema changes |
| sds-05-security-architecture.md | Auth, permissions, token handling |
| sds-06-cross-cutting.md | Logging, errors, infrastructure |
| sds-domain-*.md | Domain-specific behavior changes |
| sds-rules-*.md | Rule logic, precedence changes |
| sds-ref-api.md | API endpoint changes |
| sds-ref-database.md | Schema documentation |
| sds-ref-config.md | Configuration options |
| sds-ref-glossary.md | New terms |

Key Rules:

  • Add {#anchor-name} for new sections
  • Update cross-references between SDS files
  • Update Mermaid diagrams when the architecture changes
  • Update Implementation Mapping in domain docs

Phase 5: STD Updates

Goal: Update test documentation.

Reference: See STD Update Guide for detailed instructions.

Actions:

  • Add test cases for new requirements
  • Update test cases for modified requirements
  • Archive tests for deprecated requirements
  • Update traceability matrix
  • Verify coverage targets (100% REQ, 80%+ AC)

5b. Regenerate Unified Traceability

After completing STD updates, regenerate the unified traceability artifacts:

python3 docusaurus/scripts/generate-unified-traceability.py --render-md

This refreshes the unified traceability JSON, coverage report, traceability matrix, SDS traceability view, and release checklist.

5c. Implement Behat Tests

After STD specifications are updated, implement the test vectors as executable Behat scenarios.

Reference: See Behat Test Creation Guide for the full creation workflow, gotchas, and subagent patterns.

Process:

  1. Reconcile coverage — Map new/changed TVs to existing Behat tests using STD Reconciliation Guide
  2. Create API tests — For TM-API test vectors, create Behat feature files with fixtures (see Behat Creation Guide)
  3. Create browser tests — For TM-UI test vectors, create Behat/Mink scenarios (see Browser Test Guide)
  4. QR pass — Run quality review on all new scenarios (see Agent Orchestration Guide)
  5. Update dashboard — Update V3 Testing Dashboard with new test counts

Orchestration: For large batches (>10 new test vectors), use wave-based parallel execution. See Agent Orchestration Guide for wave planning, DB pool management, and QR protocol.

5d. Version Tagging for Tests

When behavior changes between versions, tests must be tagged so the correct set runs per deployment.

Process:

  1. Identify behavior changes — For each changed requirement, determine whether the old and new behaviors are both valid (different clients may run different versions)
  2. Create version-split scenarios:
    • Copy the original scenario with a @V{OLD} tag (e.g., @V3_0_0) and old assertions
    • Update the original with a @V{NEW} tag (e.g., @V3_0_1) and new assertions
    • Both versions share the same BT key, TV tags, and fixture files — only assertions differ
  3. Universal tests — Tests that pass on ALL versions get NO version tag
  4. Tag format: @V3_0_0, @V3_0_1 (underscores, not dots)
  5. Mirror changes to new_tests/ directory (git subtree tracking)
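
A quick way to audit which version tags a feature file carries (the helper name is ours; scenarios with no version tag are universal):

```shell
# Sketch: list the distinct version tags present in a feature file, to check
# that version-split scenarios were tagged and universal ones left untagged.
list_version_tags() {
  grep -oE '@V[0-9]+(_[0-9]+)*' "$1" | sort -u
}
```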

Running version-specific tests:

# Run only tests valid for a specific version
python3 behat-optimizer.py run --target-version 3.0.1 --suite all

5e. Regression Testing

After implementing new/changed Behat tests, run a full regression to catch breakage.

Process:

  1. Full suite with version filter:

    python3 behat-optimizer.py run --target-version X.Y.Z --suite all --rerun-failures
  2. Classify failures:

    • Pre-existing: Already tagged @KNOWN_CODE_ISSUE or @KNOWN_LIMITATION — verify still valid
    • New regressions: Caused by v{NEW} code changes — investigate and fix
    • Flaky: Pass on rerun — typically Chrome/CDP transient issues
    • Environment: DB pool, artisan serve, missing migrations — fix and rerun
  3. KCI/KL audit: Check if existing @KNOWN_CODE_ISSUE or @KNOWN_LIMITATION tags are now stale (bugs fixed in v{NEW})

  4. Browser test regression:

    • Ensure .env has correct Cognito vars, Pusher keys, and PHP_CLI_SERVER_WORKERS=8
    • The optimizer's rerun_browser_failures handles transient Chrome/CDP flakes
  5. Exit criteria: All new failures either fixed, tagged KCI/KL with justification, or classified as pre-existing

Lesson learned (Session 35): The v3.0.0→v3.0.1 upgrade revealed 30 new failures across 2,496 scenarios. Classifying them into categories (19 WREP edited-wells, 4 flaky INHN/INHP, 5 pre-existing resolution guard, etc.) was essential for triage.


Phase 6: Code Mapping Sync

Goal: Maintain bidirectional code-to-docs traceability.

Reference: See Code Mapping Update for detailed instructions.

Registry: tests/catalogue/code_tags.json — single source of truth for code-to-requirement mappings.

Actions:

  1. For new code implementing new requirements:

    • Add entry in code_tags.json (all three views: by_file, by_srs, by_sdd)
    • Add Implementation section to SRS file
  2. For modified code:

    • Update existing code_tags.json entries if REQ scope changed
    • Update Implementation sections in SRS
  3. For removed code:

    • Remove orphaned entries from code_tags.json (all three views)
    • Update Implementation sections (mark as deprecated)
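
The registry schema is not reproduced here; as a purely hypothetical illustration, one mapping recorded in all three views might look like this (the file path and IDs are invented):

```json
{
  "by_file": {
    "app/Services/ExportService.php": {
      "srs": ["REQ-FILEIMPORT-042"],
      "sdd": ["sds-domain-fileimport.md"]
    }
  },
  "by_srs": {
    "REQ-FILEIMPORT-042": ["app/Services/ExportService.php"]
  },
  "by_sdd": {
    "sds-domain-fileimport.md": ["app/Services/ExportService.php"]
  }
}
```

Whatever the real shapes are, all three views must stay in sync, which is what the Phase 7 validation scripts check.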

Phase 6b: Regenerate Unified Traceability

After all doc updates are complete, regenerate the unified traceability JSON and views:

python3 docusaurus/scripts/generate-unified-traceability.py --render-md

This updates:

  • tests/catalogue/unified-traceability.json — source of truth
  • 4 generated MD views in docs/std/ and docs/sds/traceability/

Phase 7: Validation & Finalization

Goal: Verify completeness and consistency.

Reference: See Validation Checklist for full checklist.

Validation Scripts:

# 1. Check for orphan SRS refs (mapped in code_tags.json but not defined in SRS)
jq -r '.by_srs | keys[]' tests/catalogue/code_tags.json | sort -u > /tmp/code-refs.txt
grep -roh "REQ-[A-Z]*-[0-9]*" docusaurus/docs/srs/*.md docusaurus/docs/srs/rules/ | sort -u > /tmp/srs-ids.txt
comm -23 /tmp/code-refs.txt /tmp/srs-ids.txt

# 2. Check for consistency between code_tags.json views
jq -r '.by_srs | to_entries[] | .value[]' tests/catalogue/code_tags.json | sort -u > /tmp/srs-files.txt
jq -r '.by_file | keys[]' tests/catalogue/code_tags.json | sort -u > /tmp/byfile-files.txt
comm -23 /tmp/srs-files.txt /tmp/byfile-files.txt # Should be empty

# 3. Check for empty entries in code_tags.json
jq '.by_file | to_entries[] | select(.value.srs == [] and .value.sdd == []) | .key' tests/catalogue/code_tags.json
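
These checks print offending lines but always exit 0. A small wrapper (a sketch, not an existing project script) turns any of them into a pass/fail gate:

```shell
# Sketch: run a validation check and fail loudly if it emits any output.
fail_if_output() {
  local label="$1"; shift
  local out
  out="$("$@")"
  if [ -n "$out" ]; then
    printf 'FAIL %s:\n%s\n' "$label" "$out" >&2
    return 1
  fi
  echo "PASS $label"
}

# Example (check 1 above): orphan refs should produce no lines
# fail_if_output "orphan SRS refs" comm -23 /tmp/code-refs.txt /tmp/srs-ids.txt
```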

Finalization Steps:

  1. Update version headers in all modified docs
  2. Update CLAUDE.md with new version status
  3. Create transition notes documenting any incomplete work
  4. Consider creating version tag: git tag v{NEW}-docs-complete

7b. Release Artifacts

After validation passes, produce release-specific artifacts:

  1. Update VERSION constant in docusaurus/scripts/generate-unified-traceability.py (line 64) to match v{NEW}
  2. Create release notes: docusaurus/docs/release-notes-v{NEW}.md — user-facing changelog summarizing new features, behavior changes, and known issues
  3. Generate STR (Software Test Report):
    python3 behat-optimizer.py report \
    --traceability /path/to/unified-traceability.json \
    --render-md /path/to/str-release-test-report.md
  4. Freeze docs as versioned snapshot:
    cd docusaurus && npm run docusaurus docs:version {NEW}
    This creates versioned_docs/version-{NEW}/ and updates versions.json.
  5. Verify versions.json lists the new version correctly

Sprint Plan

| Sprint | Focus | Parallelizable | Agent Pattern |
|---|---|---|---|
| 0 | Phase 0: Confluence Ingestion (if needed) | Partially — by domain section | See orchestration guide |
| 0.5 | Phase 0.5: Code Upgrade | No | Single orchestrator (merge + migrate) |
| 1 | Phase 1-2: Change Detection + Impact Analysis | No | Single orchestrator |
| 2 | Phase 3: SRS Updates (domains 1-10) | Yes — by domain | One agent per domain |
| 3 | Phase 3: SRS Updates (domains 11-20+) | Yes — by domain | One agent per domain |
| 4 | Phase 4: SDS Updates | Partially — independent sections | One agent per section |
| 5 | Phase 5: STD Updates | Yes — by test plan | One agent per STD file |
| 5c | Phase 5c: Behat Test Implementation | Yes — wave-based | 8-10 per wave + QR pass |
| 5d | Phase 5d: Version Tagging | Yes — by feature file | One agent per split |
| 5e | Phase 5e: Regression Testing | Yes — by suite | 8-10 parallel (DB pool) |
| 6 | Phase 6: Code Mapping Sync | Yes — by code module | One agent per module |
| 7 | Phase 7: Validation + Finalization + Release | No | Sequential validation |

Exit Criteria

Version update is complete when:

  • Code upgraded to v{NEW} and migrations applied (Phase 0.5)
  • .env.test-baseline created
  • Change Manifest created and reviewed
  • All SRS files updated for changes
  • All SDS files updated for changes
  • All STD files updated for changes
  • Behat tests created/updated for new/changed test vectors
  • Version-split scenarios created where behavior differs between versions
  • QR pass completed — all WRONG items fixed
  • Full regression run with --target-version — all failures classified
  • Stale KCI/KL tags audited and removed where bugs are fixed in v{NEW}
  • Testing dashboard updated with new test counts
  • Code mapping validation passes (no orphans in code_tags.json, views consistent)
  • Unified traceability regenerated and verified
  • generate-unified-traceability.py VERSION constant updated
  • Release notes created
  • Docusaurus version freeze completed (docs:version)
  • CLAUDE.md status updated
  • Transition notes created for any incomplete work
  • All validation scripts pass