STD Update Guide
How to update Software Test Documentation when requirements change.
Reference Documents
- Test Plan: docusaurus/docs/std/std-test-plan.md (authoritative policy)
- Coverage Report: docusaurus/docs/traceability/unified-coverage-report.md
- Traceability Matrix: docusaurus/docs/traceability/unified-traceability-matrix.md
- Link Conventions: merged into the Contributor Guide
STD File Locations
All STD files are in docusaurus/docs/std/ (91 files total):
System-Level (4 files)
| File | Purpose |
|---|---|
| std-test-plan.md | Test strategy, method selection policy |
| std-introduction.md | STD overview |
| unified-traceability-matrix.md | REQ to Test mapping (now in traceability/) |
| unified-coverage-report.md | Coverage metrics (now in traceability/) |
| std-ui-testability.md | UI testing guidance |
Domain STDs (29 files in domains/)
| File Pattern | Example |
|---|---|
| std-{domain}.md | std-analytics.md, std-kitcfg.md, std-user-management.md |
Rule STDs (60 files in rules/)
| File Pattern | Example |
|---|---|
| std-rule-{rule-name}.md | std-rule-westgards.md, std-rule-adj.md |
Test Methods
Each test case specifies a method from the following (from std-test-plan.md):
| Method ID | Method Type | Description |
|---|---|---|
| TM-API | Automated API Test | Backend behavior via API or service boundary |
| TM-UI | Automated UI Test | Browser automation (Selenium, Playwright) |
| TM-MAN | Manual Test | Human-executed verification |
| TM-HYB | Hybrid | Automated setup with manual verification |
Method Selection Rules
| Requirement Type | Preferred Method | Fallback |
|---|---|---|
| Pure business logic (rules) | TM-API | TM-MAN |
| Backend workflow | TM-API | TM-MAN |
| Configuration import/export | TM-API | TM-MAN |
| User interaction flow | TM-UI | TM-MAN |
| UI rendering/layout | TM-MAN | TM-UI |
| Cross-system integration | TM-HYB | TM-MAN |
| Permission/role-based | TM-API | TM-UI |
Coverage Targets
Default Targets
| Metric | Target | Notes |
|---|---|---|
| REQ Coverage | 100% | Every requirement must have at least one test |
| AC Coverage | 80% minimum | Not all ACs are independently testable |
Domain-Specific Targets
| Domain Type | REQ Target | AC Target |
|---|---|---|
| Rules | 100% | 95% |
| Backend Services | 100% | 85% |
| UI Features | 100% | 75% |
| Configuration | 100% | 90% |
| NFR | 100% | 70% |
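The Coverage percentage reported in summary tables is simply the rounded ratio of tested ACs to total ACs. A minimal sketch (the values are illustrative):

```shell
# Rounded AC coverage: tested ACs / total ACs (2 of 3 ACs tested -> 67%)
awk -v tested=2 -v total=3 'BEGIN { printf "%.0f%%\n", tested / total * 100 }'
```

Note that rounding (not integer truncation) matches the percentages used in the Coverage Summary examples in this guide.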
Test Case Format
Domain STD Format
```markdown
# STD: {Domain Name}
## Coverage Summary
| REQ ID | Title | ACs | Tests | Coverage | Gaps |
|--------|-------|-----|-------|----------|------|
| REQ-DOMAIN-001 | Title | 3 | 2 | 67% | AC3 |
## Test Cases
### TC-{DOMAIN}-NNN: {Descriptive Title}
**Verifies:** REQ-{DOMAIN}-NNN (AC1, AC2, ...)
**Method:** TM-API | TM-UI | TM-MAN | TM-HYB
**Priority:** Critical | High | Medium | Low
**Preconditions:**
- [Required system state]
**Test Data:**
- [Inputs and expected values]
**Steps:**
1. [Action]
2. [Verification]
**Expected Results:**
- [ ] AC1: [Specific outcome]
- [ ] AC2: [Specific outcome]
**Automation Status:** Automated | Manual | Planned
**Jira:** [Link to test ticket if exists]
```
Rule STD Format (Decision Table)
Rule STDs use decision tables instead of prose:
```markdown
# STD: {Rule Name}
## Coverage Summary
| REQ ID | Title | Conditions | Test Vectors | Coverage |
|--------|-------|------------|--------------|----------|
## Decision Table: {Requirement}
### Inputs
| Variable | Type | Valid Values |
|----------|------|--------------|
| input1 | string | "A", "B", "C" |
| input2 | boolean | true, false |
### Test Vectors
| ID | Input1 | Input2 | Expected Output | Covers |
|----|--------|--------|-----------------|--------|
| TV-001 | A | true | outcome1 | AC1 |
| TV-002 | B | false | outcome2 | AC2 |
**Method:** TM-API
**Automation:** Parameterized test using above vectors
```
Normative Constraints
These constraints govern STD creation and updates:
| Constraint | Rule |
|---|---|
| NC-1 | A test case covers multiple ACs only if they cannot fail independently |
| NC-2 | No tool references in STDs (tool selection is implementation-specific) |
| NC-3 | Rule STDs use decision tables, not prose |
When to Update STD Files
New Requirements Added
- Create test cases for all new REQs
- Each AC should have test coverage (goal: 80%+)
- Add to Coverage Summary table
- Update `unified-traceability-matrix.md`
Requirements Modified
- Review affected test cases
- Update test data and expected results
- Add new test cases for new ACs
- Remove or archive tests for removed ACs
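Finding the affected test cases is a grep over the `**Verifies:**` lines. The sketch below runs against a throwaway fixture (the REQ ID and paths are illustrative); in the real repo, point it at docusaurus/docs/std/ instead:

```shell
# Throwaway fixture standing in for a real STD file
mkdir -p /tmp/std-demo
cat > /tmp/std-demo/std-kitcfg.md <<'EOF'
### TC-KITCFG-001: Import a kit definition
**Verifies:** REQ-KITCFG-001 (AC1, AC2)
### TC-KITCFG-002: Export a kit definition
**Verifies:** REQ-KITCFG-002 (AC1)
EOF

# List every test case that verifies the modified requirement
# (trailing space avoids matching longer IDs like REQ-KITCFG-0010)
grep -rn "Verifies:.*REQ-KITCFG-001 " /tmp/std-demo
```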
Regenerate Unified Traceability
After any STD changes (new test cases, modified vectors, archived tests), regenerate the unified traceability artifacts:
```shell
python3 docusaurus/scripts/generate-unified-traceability.py --render-md
```
This updates the unified traceability matrix, coverage report, SDS traceability view, and release checklist.
Requirements Deprecated
- Mark tests as deprecated (don't delete)
- Move to Archive section
- Update Coverage Summary
Adding Test Cases
Step 1: Identify Coverage Gaps
Review the Coverage Summary in the domain STD:
```shell
# Find STD file
ls docusaurus/docs/std/domains/std-{domain}.md
# Check current coverage
grep -A 20 "## Coverage Summary" docusaurus/docs/std/domains/std-{domain}.md
```
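The Gaps column can also be extracted mechanically. A minimal awk sketch, run here against an inline fixture; it assumes the six-column Coverage Summary layout shown in the format above:

```shell
# Fixture standing in for a domain STD's Coverage Summary
cat > /tmp/std-coverage.md <<'EOF'
## Coverage Summary
| REQ ID | Title | ACs | Tests | Coverage | Gaps |
|--------|-------|-----|-------|----------|------|
| REQ-DEMO-001 | Title | 3 | 2 | 67% | AC3 |
| REQ-DEMO-002 | Title | 2 | 2 | 100% | - |
EOF

# Print "REQ-ID: gap" for every row whose Gaps cell is not "-"
awk -F'|' '/^\| REQ-/ {
  gsub(/ /, "", $2); gsub(/ /, "", $7)
  if ($7 != "-" && $7 != "") print $2 ": " $7
}' /tmp/std-coverage.md
```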
Step 2: Determine Test Method
Use the selection rules from std-test-plan.md:
- Is it pure logic? → TM-API
- Does it require UI? → TM-UI
- Does it require human judgment? → TM-MAN
- Is it cross-system? → TM-HYB
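These questions reduce to a simple lookup. The category names below are invented for illustration and are not STD vocabulary; the authoritative mapping remains the selection table in std-test-plan.md:

```shell
# Hypothetical helper mirroring the selection questions above
select_method() {
  case "$1" in
    pure-logic)     echo "TM-API" ;;  # business rules, backend workflow, import/export
    ui-flow)        echo "TM-UI"  ;;  # user interaction flows
    human-judgment) echo "TM-MAN" ;;  # rendering/layout, anything needing a human eye
    cross-system)   echo "TM-HYB" ;;  # automated setup, manual verification
    *)              echo "TM-MAN" ;;  # conservative fallback
  esac
}

select_method pure-logic    # prints TM-API
select_method cross-system  # prints TM-HYB
```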
Step 3: Write Test Case
Follow the format above. Key points:
- One test per logical scenario (not one test per AC)
- Explicit expected results (not "works correctly")
- Documented preconditions (what state must exist)
Step 4: Update Coverage Summary
Add/update the row in the Coverage Summary table.
Implementing Test Cases as Behat Scenarios
After writing STD test case specifications, the next step is implementing them as executable Behat scenarios. This bridges the gap between "documented test" and "automated test."
When to Implement
- New test vectors added to STD files → create Behat scenarios
- Modified test vectors → update existing Behat scenarios
- STD Automation Status = "Planned" → target for implementation
API Tests (TM-API)
For pure-logic test vectors that don't require a UI:
- Create fixture files and feature file following the Behat Test Creation Guide
- Use the iterative strategy: create → dry-run → minimal assertions → check actuals → update → re-run
- Tag scenarios with `@TV-RULE-NNN-NNN` matching the STD test vector IDs
Browser Tests (TM-UI)
For test vectors requiring UI interaction:
- Create feature files following the Browser Test Guide
- Tag scenarios with `@TC-DOMAIN-NNN-ACNN` matching the STD test case IDs
Parallel Execution
For large batches (>10 new test vectors), use wave-based parallel agent execution. See Agent Orchestration Guide for:
- Wave planning and resource allocation
- DB pool management
- QR (Quality Review) protocol
Coverage Reconciliation
Use the STD Reconciliation Guide to map existing Behat tests to STD test vectors and identify remaining gaps.
After implementation, update the Automation Status field in the STD test case from "Planned" to "Automated."
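Specifications still awaiting implementation can be listed with a grep for the Planned status. The fixture below stands in for the repo; in practice, point the grep at docusaurus/docs/std/:

```shell
# Fixture standing in for the STD tree
mkdir -p /tmp/std-status
cat > /tmp/std-status/std-analytics.md <<'EOF'
**Automation Status:** Automated
**Automation Status:** Planned
EOF

# List test cases not yet automated
grep -rn "Automation Status:\*\* Planned" /tmp/std-status
```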
Updating Rule STDs
Rule STDs require special attention:
Decision Tables
Every rule with 2+ conditions needs a decision table:
- List all input variables
- Enumerate valid value combinations
- Map each combination to expected output
- Identify which ACs each vector covers
Edge Cases
Include vectors for:
- Boundary values
- Invalid inputs (if applicable)
- Default/fallback cases
- Precedence scenarios (when multiple conditions match)
Example
For a rule with inputs wellType and outcome:
| ID | wellType | outcome | Expected Result | Covers |
|---|---|---|---|---|
| TV-001 | Sample | Positive | Report | AC1 |
| TV-002 | Sample | Negative | Suppress | AC2 |
| TV-003 | Control | Positive | Flag | AC3 |
| TV-004 | Control | Negative | Pass | AC4 |
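A completeness check for such a table is mechanical: enumerate the input combinations and confirm each has a vector. A small sketch over an inline copy of the example vectors above:

```shell
# Inline copy of the example vectors
cat > /tmp/vectors.md <<'EOF'
| TV-001 | Sample | Positive | Report | AC1 |
| TV-002 | Sample | Negative | Suppress | AC2 |
| TV-003 | Control | Positive | Flag | AC3 |
| TV-004 | Control | Negative | Pass | AC4 |
EOF

# Every wellType x outcome combination should have a vector
missing=0
for w in Sample Control; do
  for o in Positive Negative; do
    grep -q "| $w | $o |" /tmp/vectors.md || { echo "missing vector: $w / $o"; missing=1; }
  done
done
[ "$missing" -eq 0 ] && echo "all combinations covered"
```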
Quality Checklist
Before committing STD changes:
- All new REQs have test cases
- Coverage targets met (100% REQ, 80%+ AC)
- Test method specified for each test
- Expected results are specific and testable
- Rule STDs use decision tables
- Coverage Summary updated
- Traceability matrix updated
- No tool-specific references (NC-2)
- Behat scenarios created/updated for new test vectors (or flagged as "Planned")
Cross-Reference Format
To SRS Documents
```markdown
[REQ-KITCFG-001](../../srs/kitcfg.md#req-kitcfg-001)
```
To SDS Documents
```markdown
[SDS: Kit Configuration](../../sds/domains/sds-domain-kitcfg.md)
```
Within STD
```markdown
[std-test-plan.md](../std-test-plan.md)
[TC-KITCFG-001](#tc-kitcfg-001)
```
Validation
Coverage Check
```shell
# Count REQs in SRS
grep -roh "REQ-[A-Z]*-[0-9]*" docusaurus/docs/srs/*.md docusaurus/docs/srs/rules/*.md | sort -u | wc -l
# Count REQs in STD coverage
grep -roh "REQ-[A-Z]*-[0-9]*" docusaurus/docs/std/domains/*.md docusaurus/docs/std/rules/*.md | sort -u | wc -l
```
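Beyond the raw counts, `comm` can list exactly which REQs lack coverage. A sketch against fixture lists; in the real repo, feed it the sorted `grep -roh ... | sort -u` output from the two commands above:

```shell
# Sorted REQ lists standing in for the grep | sort -u output
printf 'REQ-A-001\nREQ-A-002\nREQ-B-001\n' > /tmp/srs-reqs.txt
printf 'REQ-A-001\nREQ-B-001\n' > /tmp/std-reqs.txt

# Lines only in the SRS list = requirements with no STD coverage
comm -23 /tmp/srs-reqs.txt /tmp/std-reqs.txt
```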
Link Verification
```shell
cd docusaurus && npm run build 2>&1 | grep -c "couldn't be resolved"  # Should be 0
```
Common Patterns
Adding Tests for New Feature
- Review SRS requirements
- Create test cases in domain STD
- Update Coverage Summary
- Update `unified-traceability-matrix.md`
Updating Tests for Modified Requirement
- Find existing test cases for the REQ
- Update steps and expected results
- Add new tests for new ACs
- Archive obsolete test vectors
Archiving Tests for Deprecated Requirement
- Move test case to the `## Archive` section
- Add deprecation note with version and date
- Update Coverage Summary (remove row or mark deprecated)
Creating Behat Tests for New Test Vectors
- Write test cases in STD file (this guide)
- Reconcile with existing coverage (STD Reconciliation Guide)
- Create API Behat scenarios (Behat Creation Guide)
- Create browser Behat scenarios (Browser Test Guide)
- QR pass on new scenarios (Agent Orchestration Guide)
- Update Automation Status and Coverage Summary