Version: 3.0.1

STD Update Guide

How to update Software Test Documentation when requirements change.

Reference Documents

  • Test Plan: docusaurus/docs/std/std-test-plan.md (authoritative policy)
  • Coverage Report: docusaurus/docs/traceability/unified-coverage-report.md
  • Traceability Matrix: docusaurus/docs/traceability/unified-traceability-matrix.md
  • Link Conventions: merged into the Contributor Guide

STD File Locations

All STD files are in docusaurus/docs/std/ (91 files total):

System-Level (4 files)

| File | Purpose |
|------|---------|
| std-test-plan.md | Test strategy, method selection policy |
| std-introduction.md | STD overview |
| unified-traceability-matrix.md | REQ to Test mapping (now in traceability/) |
| unified-coverage-report.md | Coverage metrics (now in traceability/) |
| std-ui-testability.md | UI testing guidance |

Domain STDs (29 files in domains/)

| File Pattern | Example |
|--------------|---------|
| std-{domain}.md | std-analytics.md, std-kitcfg.md, std-user-management.md |

Rule STDs (60 files in rules/)

| File Pattern | Example |
|--------------|---------|
| std-rule-{rule-name}.md | std-rule-westgards.md, std-rule-adj.md |

Test Methods

Each test case specifies a method from the following (from std-test-plan.md):

| Method ID | Method Type | Description |
|-----------|-------------|-------------|
| TM-API | Automated API Test | Backend behavior via API or service boundary |
| TM-UI | Automated UI Test | Browser automation (Selenium, Playwright) |
| TM-MAN | Manual Test | Human-executed verification |
| TM-HYB | Hybrid | Automated setup with manual verification |

Method Selection Rules

| Requirement Type | Preferred Method | Fallback |
|------------------|------------------|----------|
| Pure business logic (rules) | TM-API | TM-MAN |
| Backend workflow | TM-API | TM-MAN |
| Configuration import/export | TM-API | TM-MAN |
| User interaction flow | TM-UI | TM-MAN |
| UI rendering/layout | TM-MAN | TM-UI |
| Cross-system integration | TM-HYB | TM-MAN |
| Permission/role-based | TM-API | TM-UI |

Coverage Targets

Default Targets

| Metric | Target | Notes |
|--------|--------|-------|
| REQ Coverage | 100% | Every requirement must have at least one test |
| AC Coverage | 80% minimum | Not all ACs are independently testable |

Domain-Specific Targets

| Domain Type | REQ Target | AC Target |
|-------------|------------|-----------|
| Rules | 100% | 95% |
| Backend Services | 100% | 85% |
| UI Features | 100% | 75% |
| Configuration | 100% | 90% |
| NFR | 100% | 70% |
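As an illustration, the targets above can be expressed as a lookup used to flag under-covered domains. This is a sketch only: the function and dictionary names are my own, not part of the project's tooling, and the default fallback mirrors the default targets table.

```python
# Domain-specific coverage targets, mirroring the table above.
# Hypothetical helper -- not an existing project script.
TARGETS = {
    "Rules": {"req": 100, "ac": 95},
    "Backend Services": {"req": 100, "ac": 85},
    "UI Features": {"req": 100, "ac": 75},
    "Configuration": {"req": 100, "ac": 90},
    "NFR": {"req": 100, "ac": 70},
}

def meets_targets(domain_type: str, req_pct: float, ac_pct: float) -> bool:
    """Return True if measured coverage meets the domain's targets."""
    # Unknown domain types fall back to the default targets (100% REQ, 80% AC).
    t = TARGETS.get(domain_type, {"req": 100, "ac": 80})
    return req_pct >= t["req"] and ac_pct >= t["ac"]
```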

Test Case Format

Domain STD Format

```markdown
# STD: {Domain Name}

## Coverage Summary

| REQ ID | Title | ACs | Tests | Coverage | Gaps |
|--------|-------|-----|-------|----------|------|
| REQ-DOMAIN-001 | Title | 3 | 2 | 67% | AC3 |

## Test Cases

### TC-{DOMAIN}-NNN: {Descriptive Title}

**Verifies:** REQ-{DOMAIN}-NNN (AC1, AC2, ...)

**Method:** TM-API | TM-UI | TM-MAN | TM-HYB

**Priority:** Critical | High | Medium | Low

**Preconditions:**
- [Required system state]

**Test Data:**
- [Inputs and expected values]

**Steps:**
1. [Action]
2. [Verification]

**Expected Results:**
- [ ] AC1: [Specific outcome]
- [ ] AC2: [Specific outcome]

**Automation Status:** Automated | Manual | Planned

**Jira:** [Link to test ticket if exists]
```

Rule STD Format (Decision Table)

Rule STDs use decision tables instead of prose:

```markdown
# STD: {Rule Name}

## Coverage Summary

| REQ ID | Title | Conditions | Test Vectors | Coverage |
|--------|-------|------------|--------------|----------|

## Decision Table: {Requirement}

### Inputs

| Variable | Type | Valid Values |
|----------|------|--------------|
| input1 | string | "A", "B", "C" |
| input2 | boolean | true, false |

### Test Vectors

| ID | Input1 | Input2 | Expected Output | Covers |
|----|--------|--------|-----------------|--------|
| TV-001 | A | true | outcome1 | AC1 |
| TV-002 | B | false | outcome2 | AC2 |

**Method:** TM-API

**Automation:** Parameterized test using above vectors
```

Normative Constraints

These constraints govern STD creation and updates:

| Constraint | Rule |
|------------|------|
| NC-1 | A test case covers multiple ACs only if they cannot fail independently |
| NC-2 | No tool references in STDs (tool selection is implementation-specific) |
| NC-3 | Rule STDs use decision tables, not prose |

When to Update STD Files

New Requirements Added

  1. Create test cases for all new REQs
  2. Each AC should have test coverage (goal: 80%+)
  3. Add to Coverage Summary table
  4. Update unified-traceability-matrix.md

Requirements Modified

  1. Review affected test cases
  2. Update test data and expected results
  3. Add new test cases for new ACs
  4. Remove or archive tests for removed ACs

Regenerate Unified Traceability

After any STD changes (new test cases, modified vectors, archived tests), regenerate the unified traceability artifacts:

```bash
python3 docusaurus/scripts/generate-unified-traceability.py --render-md
```

This updates the unified traceability matrix, coverage report, SDS traceability view, and release checklist.

Requirements Deprecated

  1. Mark tests as deprecated (don't delete)
  2. Move to Archive section
  3. Update Coverage Summary

Adding Test Cases

Step 1: Identify Coverage Gaps

Review the Coverage Summary in the domain STD:

```bash
# Find STD file
ls docusaurus/docs/std/domains/std-{domain}.md

# Check current coverage
grep -A 20 "## Coverage Summary" docusaurus/docs/std/domains/std-{domain}.md
```
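The gap check can also be scripted. The sketch below (illustrative, not an existing project script) parses a Coverage Summary table in the six-column format shown in this guide and reports rows with an entry in the Gaps column:

```python
def find_gaps(markdown: str) -> dict:
    """Return {REQ ID: gap note} for Coverage Summary rows with a
    non-empty Gaps column. Assumes the six-column layout shown in
    this guide; adjust the column count if your table differs."""
    gaps = {}
    for line in markdown.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Data rows start with a REQ ID; header and separator rows do not.
        if len(cells) == 6 and cells[0].startswith("REQ-"):
            req_id, gap = cells[0], cells[5]
            if gap and gap != "-":
                gaps[req_id] = gap
    return gaps
```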

Step 2: Determine Test Method

Use the selection rules from std-test-plan.md:

  1. Is it pure logic? → TM-API
  2. Does it require UI? → TM-UI
  3. Does it require human judgment? → TM-MAN
  4. Is it cross-system? → TM-HYB
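The decision sequence above can be sketched as a small function. This is purely illustrative; the function and parameter names are mine, not from std-test-plan.md, and the final fallback to TM-MAN is an assumption for completeness.

```python
def select_method(pure_logic: bool, needs_ui: bool,
                  needs_human_judgment: bool, cross_system: bool) -> str:
    """Apply the method-selection questions in order, returning
    the first matching method ID."""
    if pure_logic:
        return "TM-API"
    if needs_ui:
        return "TM-UI"
    if needs_human_judgment:
        return "TM-MAN"
    if cross_system:
        return "TM-HYB"
    return "TM-MAN"  # assumed fallback when no question matches
```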

Step 3: Write Test Case

Follow the format above. Key points:

  • One test per logical scenario (not one test per AC)
  • Explicit expected results (not "works correctly")
  • Documented preconditions (what state must exist)

Step 4: Update Coverage Summary

Add/update the row in the Coverage Summary table.


Implementing Test Cases as Behat Scenarios

After writing STD test case specifications, the next step is implementing them as executable Behat scenarios. This bridges the gap between "documented test" and "automated test."

When to Implement

  • New test vectors added to STD files → create Behat scenarios
  • Modified test vectors → update existing Behat scenarios
  • STD Automation Status = "Planned" → target for implementation

API Tests (TM-API)

For pure-logic test vectors that don't require a UI:

  1. Create fixture files and feature file following the Behat Test Creation Guide
  2. Use the iterative strategy: create → dry-run → minimal assertions → check actuals → update → re-run
  3. Tag scenarios with @TV-RULE-NNN-NNN matching the STD test vector IDs

Browser Tests (TM-UI)

For test vectors requiring UI interaction:

  1. Create feature files following the Browser Test Guide
  2. Tag scenarios with @TC-DOMAIN-NNN-ACNN matching the STD test case IDs

Parallel Execution

For large batches (>10 new test vectors), use wave-based parallel agent execution. See Agent Orchestration Guide for:

  • Wave planning and resource allocation
  • DB pool management
  • QR (Quality Review) protocol

Coverage Reconciliation

Use the STD Reconciliation Guide to map existing Behat tests to STD test vectors and identify remaining gaps.

After implementation, update the Automation Status field in the STD test case from "Planned" to "Automated."


Updating Rule STDs

Rule STDs require special attention:

Decision Tables

Every rule with 2+ conditions needs a decision table:

  1. List all input variables
  2. Enumerate valid value combinations
  3. Map each combination to expected output
  4. Identify which ACs each vector covers
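Step 2 is mechanical once the input variables are listed. A sketch using `itertools.product` (the variable names mirror the Inputs table in the format example above, not a real rule):

```python
from itertools import product

# Input variables and their valid values, as declared in the Inputs table.
inputs = {
    "input1": ["A", "B", "C"],
    "input2": [True, False],
}

# Every combination of valid values becomes a candidate test vector;
# each then needs an expected output and an AC mapping.
vectors = [
    dict(zip(inputs.keys(), combo))
    for combo in product(*inputs.values())
]
# 3 values x 2 values = 6 candidate vectors
```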

Edge Cases

Include vectors for:

  • Boundary values
  • Invalid inputs (if applicable)
  • Default/fallback cases
  • Precedence scenarios (when multiple conditions match)

Example

For a rule with inputs wellType and outcome:

| ID | wellType | outcome | Expected Result | Covers |
|----|----------|---------|-----------------|--------|
| TV-001 | Sample | Positive | Report | AC1 |
| TV-002 | Sample | Negative | Suppress | AC2 |
| TV-003 | Control | Positive | Flag | AC3 |
| TV-004 | Control | Negative | Pass | AC4 |
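Such a table maps directly onto a parameterized test (with pytest this would be a `@pytest.mark.parametrize` over the vectors; the sketch below is framework-free). `evaluate_rule` is a hypothetical stand-in for the real rule implementation, which would live in the system under test:

```python
# Vectors copied from the decision table above: (id, wellType, outcome, expected).
VECTORS = [
    ("TV-001", "Sample", "Positive", "Report"),
    ("TV-002", "Sample", "Negative", "Suppress"),
    ("TV-003", "Control", "Positive", "Flag"),
    ("TV-004", "Control", "Negative", "Pass"),
]

def evaluate_rule(well_type: str, outcome: str) -> str:
    """Placeholder for the real rule under test."""
    table = {(w, o): r for _, w, o, r in VECTORS}
    return table[(well_type, outcome)]

def run_vectors() -> dict:
    """Execute every vector; return failures keyed by vector ID."""
    failures = {}
    for vector_id, well_type, outcome, expected in VECTORS:
        actual = evaluate_rule(well_type, outcome)
        if actual != expected:
            failures[vector_id] = (expected, actual)
    return failures
```

Keeping the vector IDs in the data makes failures traceable back to the STD row.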

Quality Checklist

Before committing STD changes:

  • All new REQs have test cases
  • Coverage targets met (100% REQ, 80%+ AC)
  • Test method specified for each test
  • Expected results are specific and testable
  • Rule STDs use decision tables
  • Coverage Summary updated
  • Traceability matrix updated
  • No tool-specific references (NC-2)
  • Behat scenarios created/updated for new test vectors (or flagged as "Planned")

Cross-Reference Format

To SRS Documents

```markdown
[REQ-KITCFG-001](../../srs/kitcfg.md#req-kitcfg-001)
```

To SDS Documents

```markdown
[SDS: Kit Configuration](../../sds/domains/sds-domain-kitcfg.md)
```

Within STD

```markdown
[std-test-plan.md](../std-test-plan.md)
[TC-KITCFG-001](#tc-kitcfg-001)
```

Validation

Coverage Check

```bash
# Count REQs in SRS
grep -roh "REQ-[A-Z]*-[0-9]*" docusaurus/docs/srs/*.md docusaurus/docs/srs/rules/*.md | sort -u | wc -l

# Count REQs in STD coverage
grep -roh "REQ-[A-Z]*-[0-9]*" docusaurus/docs/std/domains/*.md docusaurus/docs/std/rules/*.md | sort -u | wc -l

# Link check: unresolved-link count from the Docusaurus build should be 0
cd docusaurus && npm run build 2>&1 | grep -c "couldn't be resolved"  # Should be 0
```
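Counting alone can mask mismatches (a REQ that appears only in an STD offsets one missing from the STDs). A set difference pinpoints the actual gaps. A sketch, not an existing project script; the regex is slightly stricter than the grep pattern above (it requires at least one letter and one digit):

```python
import re
from pathlib import Path

REQ_RE = re.compile(r"REQ-[A-Z]+-[0-9]+")

def req_ids(paths) -> set:
    """Collect the set of REQ IDs mentioned across the given files."""
    ids = set()
    for path in paths:
        ids |= set(REQ_RE.findall(Path(path).read_text()))
    return ids

def coverage_gaps(srs_files, std_files) -> list:
    """REQs defined in SRS files but never referenced by any STD file."""
    return sorted(req_ids(srs_files) - req_ids(std_files))
```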

Common Patterns

Adding Tests for New Feature

  1. Review SRS requirements
  2. Create test cases in domain STD
  3. Update Coverage Summary
  4. Update unified-traceability-matrix.md

Updating Tests for Modified Requirement

  1. Find existing test cases for the REQ
  2. Update steps and expected results
  3. Add new tests for new ACs
  4. Archive obsolete test vectors

Archiving Tests for Deprecated Requirement

  1. Move test case to ## Archive section
  2. Add deprecation note with version and date
  3. Update Coverage Summary (remove row or mark deprecated)

Creating Behat Tests for New Test Vectors

  1. Write test cases in STD file (this guide)
  2. Reconcile with existing coverage (STD Reconciliation Guide)
  3. Create API Behat scenarios (Behat Creation Guide)
  4. Create browser Behat scenarios (Browser Test Guide)
  5. QR pass on new scenarios (Agent Orchestration Guide)
  6. Update Automation Status and Coverage Summary