
Software Test Plan

**Version:** v1.0.0 | **Status:** Draft | **Scope:** Test strategy for all SRS requirements


1. Introduction

1.1 Purpose

This document defines the test strategy for verifying all requirements in the Software Requirements Specification (SRS). It establishes:

  • Test methods and their appropriate use
  • Selection rules for choosing test methods
  • Coverage targets by domain
  • Entry and exit criteria

1.2 Scope

| Metric | Count |
|--------|-------|
| SRS Domains | 29 |
| Rule Specifications | 61 |
| Total Requirements | ~405 |
| Total Acceptance Criteria | ~1,346 |

1.3 References

  • SRS Documents: docusaurus/docs/srs/*.md, docusaurus/docs/srs/rules/*.md
  • SDS Documents: docusaurus/docs/sds/
  • IEEE 829-2008: Standard for Software and System Test Documentation

2. Test Strategy

2.1 Test Levels

| Level | Scope | When Used |
|-------|-------|-----------|
| Unit | Individual functions, rule logic | Pure business logic, calculations |
| Integration | Component interactions, API boundaries | Backend workflows, data flow |
| System (E2E) | User workflows through UI | User-facing features |
| Acceptance | Business requirement validation | Final verification |

2.2 Test Types

| Type | Purpose | Tools |
|------|---------|-------|
| Functional | Verify behavior matches requirements | PHPUnit, Behat, Behat/Mink + headless Chrome (CDP) |
| Regression | Ensure changes don't break existing functionality | CI/CD pipelines |
| Performance | Verify NFRs (response time, throughput) | JMeter, custom scripts |
| Security | Verify authentication, authorization, data protection | Manual review, OWASP checks |

3. Test Method Selection Policy (Normative)

Each acceptance criterion shall be verified using one or more of the test methods defined below.

3.1 Test Methods

| Method ID | Method Type | Description |
|-----------|-------------|-------------|
| TM-API | Automated API Test | Verifies backend behavior at the API or service boundary. Includes unit tests and integration tests that don't require a browser. |
| TM-UI | Automated UI Test | Verifies user-facing behavior via browser automation (Behat/Mink + headless Chrome via CDP). |
| TM-MAN | Manual Test | Human-executed verification. Used when automation is impractical or verification requires human judgment. |
| TM-HYB | Hybrid | Combination of automated setup with manual verification, or automated verification with manual data preparation. |

Constraint: Test methods describe the verification mechanism, not the execution tooling. Tool selection is implementation-specific and out of scope for this document.

3.2 Selection Rules

| Requirement Characteristic | Preferred Method | Fallback | Rationale |
|----------------------------|------------------|----------|-----------|
| Pure business logic (rules, calculations) | TM-API | TM-MAN | Logic is deterministic; API tests are fast and reliable |
| Backend workflow without UI dependency | TM-API | TM-MAN | No UI needed; test at service boundary |
| Configuration import/export | TM-API | TM-MAN | File I/O operations testable without UI |
| Data transformation/aggregation | TM-API | TM-MAN | Backend logic; results verifiable via API |
| User interaction flow (multi-step UI) | TM-UI | TM-MAN | Requires browser; automation reduces regression risk |
| UI rendering, layout, visual affordances | TM-MAN | TM-UI | Human judgment often required for visual verification |
| Error message wording or UX clarity | TM-MAN | TM-UI | Subjective assessment; automation checks existence only |
| Cross-system integration | TM-HYB | TM-MAN | Complex setup; often needs manual data verification |
| Real-time updates (WebSocket/Pusher) | TM-HYB | TM-MAN | Timing-sensitive; may need manual observation |
| Permission/role-based access | TM-API | TM-UI | Test at API level for speed; UI for user experience |

3.3 Method Selection by Domain Type

| Domain Type | Primary Method | Secondary | Example Domains |
|-------------|----------------|-----------|-----------------|
| Rules (61) | TM-API | TM-MAN | All rule-*.md files |
| Backend Services | TM-API | TM-HYB | configio, fileimport, analyzer |
| UI Features | TM-UI | TM-MAN | notif, filters, tables, globalui |
| Configuration | TM-API | TM-UI | kitcfg, client-config, user-settings |
| Reporting | TM-HYB | TM-MAN | reports, runfile-report, print |
| User Management | TM-API | TM-UI | user-management, audit-log |

3.4 Exceptions

Where the preferred method cannot be used due to technical limitations, the fallback method may be used. Such cases shall be explicitly documented in the domain STD file with:

  • Reason for deviation
  • Compensating controls (if any)
  • Plan to address (if applicable)

4. Coverage Targets

4.1 Default Targets

| Metric | Target | Rationale |
|--------|--------|-----------|
| REQ Coverage | 100% | Every requirement must have at least one test |
| AC Coverage | 80% minimum | Not all ACs are independently testable |

4.2 Domain-Specific Targets

| Domain Type | REQ Target | AC Target | Rationale |
|-------------|------------|-----------|-----------|
| Rules | 100% | 95% | Pure logic; highly testable |
| Backend Services | 100% | 85% | Core functionality; most ACs testable |
| UI Features | 100% | 75% | Visual ACs may require manual verification |
| Configuration | 100% | 90% | Deterministic behavior; testable |
| NFR | 100% | 70% | Some NFRs require production-like environment |
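As a tooling-agnostic illustration, the per-domain exit check can be sketched in a few lines of Python. The AC target percentages come from the table above; the function name and the counts used in any example call are invented for illustration:

```python
# AC coverage targets by domain type (from section 4.2).
# REQ coverage target is 100% for every domain type (section 4.1).
AC_TARGETS = {
    "Rules": 95,
    "Backend Services": 85,
    "UI Features": 75,
    "Configuration": 90,
    "NFR": 70,
}

def coverage_ok(domain_type: str, reqs_total: int, reqs_tested: int,
                acs_total: int, acs_tested: int) -> bool:
    """Return True when a domain meets both its REQ and AC coverage targets."""
    req_cov = 100.0 * reqs_tested / reqs_total
    ac_cov = 100.0 * acs_tested / acs_total
    return req_cov >= 100.0 and ac_cov >= AC_TARGETS[domain_type]
```

For example, a Rules domain with 10/10 requirements tested and 38/40 ACs covered (95%) passes, while a UI Features domain with full REQ coverage but 7/10 ACs (70%, below the 75% target) does not.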

4.3 Gap Acceptance Criteria

A gap is acceptable only when:

  1. The AC is untestable in isolation (e.g., "system shall be intuitive")
  2. Testing requires infrastructure that is not available (documented with a remediation plan)
  3. The AC is redundant with another AC (documented with reference)

Gaps must be explicitly documented with owner and resolution plan.
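A minimal sketch of what "explicitly documented" could mean in machine-checkable form; the field names below are assumptions, not a prescribed schema:

```python
# Hypothetical gap record: every documented gap must carry a reason,
# an owner, and a resolution plan (section 4.3). Field names are illustrative.
REQUIRED_FIELDS = ("ac_id", "reason", "owner", "resolution_plan")

def gap_is_documented(gap: dict) -> bool:
    """True only when all required fields are present and non-empty."""
    return all(gap.get(field) for field in REQUIRED_FIELDS)
```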


5. Test Case Specification Format

5.1 Domain STD Files

Each domain STD file (std-{domain}.md) shall contain:

```markdown
# STD: {Domain Name}

## Coverage Summary
| REQ ID | Title | ACs | Tests | Coverage | Gaps |
|--------|-------|-----|-------|----------|------|

## Test Cases

### TC-{DOMAIN}-NNN: {Descriptive Title}

**Verifies:** REQ-{DOMAIN}-NNN (AC1, AC2, ...)

**Method:** TM-API | TM-UI | TM-MAN | TM-HYB

**Priority:** Critical | High | Medium | Low

**Preconditions:**
- [Required system state]

**Test Data:**
- [Inputs and expected values]

**Steps:**
1. [Action]
2. [Verification]

**Expected Results:**
- [ ] AC1: [Specific outcome]
- [ ] AC2: [Specific outcome]

**Automation Status:** Automated | Manual | Planned

**Jira:** [Link to test ticket if exists]
```

5.2 Rule STD Files (Truth Table Format)

Rule STD files shall use decision tables instead of prose:

```markdown
# STD: {Rule Name}

## Coverage Summary
| REQ ID | Title | Conditions | Test Vectors | Coverage |
|--------|-------|------------|--------------|----------|

## Decision Table: {Requirement}

### Inputs
| Variable | Type | Valid Values |
|----------|------|--------------|

### Test Vectors

| ID | Input1 | Input2 | ... | Expected Output | Covers |
|----|--------|--------|-----|-----------------|--------|
| TV-001 | value | value | ... | outcome | AC1 |

**Method:** TM-API

**Automation:** Parameterized test using above vectors
```
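One way to realize the parameterized-test idea, shown here as a tooling-agnostic Python sketch. The rule under test (`discount_rate`) and its vectors are invented placeholders standing in for a real SRS rule, not actual SRS content:

```python
def discount_rate(is_member: bool, order_total: float) -> float:
    """Toy rule implementation standing in for a real SRS rule."""
    if is_member and order_total >= 100:
        return 0.10
    if is_member:
        return 0.05
    return 0.0

# Vectors mirror the "Test Vectors" table: ID, inputs, expected output, covered AC.
TEST_VECTORS = [
    ("TV-001", True,  150.0, 0.10, "AC1"),
    ("TV-002", True,   50.0, 0.05, "AC2"),
    ("TV-003", False, 150.0, 0.00, "AC3"),
]

def run_vectors():
    """Execute every vector; return the list of failures (empty means pass)."""
    failures = []
    for vid, member, total, expected, covers in TEST_VECTORS:
        actual = discount_rate(member, total)
        if actual != expected:
            failures.append((vid, covers, expected, actual))
    return failures
```

In a PHPUnit or pytest setup the same table would typically feed a data provider or `parametrize` decorator, so each vector reports as its own test case.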

6. Entry and Exit Criteria

6.1 Entry Criteria

Testing may begin when:

  • SRS requirements are approved (status: Normative)
  • Test environment is available and configured
  • Test data is prepared or generation scripts are ready
  • Domain STD file is complete with test cases defined

6.2 Exit Criteria

Testing is complete when:

  • All test cases have been executed
  • REQ coverage target met (100%)
  • AC coverage target met (domain-specific)
  • All Critical and High priority failures resolved
  • All gaps documented with owner and plan
  • Test results recorded in STR (Software Test Report)

6.3 Suspension Criteria

Testing shall be suspended when:

  • Critical defect blocks further testing
  • Test environment becomes unavailable
  • Requirements change invalidates test cases

7. Test Environment

7.1 Hardware Requirements

| Component | Specification |
|-----------|---------------|
| Server | Per deployment documentation |
| Client | Modern browser (Chrome, Firefox, Safari) |

7.2 Software Requirements

| Component | Version |
|-----------|---------|
| PHP | 8.x |
| MySQL | 8.x |
| Browser (E2E) | Latest Chrome, Firefox |

7.3 Test Data

  • Anonymized production subset for realistic scenarios
  • Synthetic data for edge cases and boundary conditions
  • Configuration fixtures for rule testing

8. Deliverables

| Deliverable | Description | Location |
|-------------|-------------|----------|
| STD Test Plan | This document | std-test-plan.md |
| Domain STD Files | Test specifications per domain | docusaurus/docs/std/domains/std-{domain}.md |
| Rules STD Files | Truth tables for all rules | docusaurus/docs/std/rules/std-rule-{rule}.md |
| Traceability Matrix | REQ-to-Test mapping | unified-traceability-matrix.md |
| Coverage Report | Metrics and gap analysis | unified-coverage-report.md |

9. Roles and Responsibilities

| Role | Responsibility |
|------|----------------|
| Test Author | Create and maintain STD files |
| Test Executor | Execute tests, record results |
| Developer | Fix defects, provide technical input |
| QA Lead | Review coverage, approve exit criteria |

Appendix A: Test Method Decision Tree

```
Is the requirement about pure logic/calculations?
├── Yes → TM-API (unit/integration test)
└── No
    ├── Does it require user interaction?
    │   ├── Yes
    │   │   ├── Is automation practical?
    │   │   │   ├── Yes → TM-UI
    │   │   │   └── No → TM-MAN
    │   │   └── Does it require human judgment (visual, UX)?
    │   │       └── Yes → TM-MAN
    │   └── No
    │       ├── Is it backend/API behavior?
    │       │   └── Yes → TM-API
    │       └── Is it cross-system integration?
    │           └── Yes → TM-HYB
```
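The decision tree can be encoded as a small selection helper. The boolean parameters are hypothetical characterizations of a requirement, and the final fallback to TM-MAN when no branch matches is an assumption, not part of the tree:

```python
def select_method(pure_logic: bool, user_interaction: bool,
                  automation_practical: bool, needs_human_judgment: bool,
                  backend_behavior: bool, cross_system: bool) -> str:
    """Return the TM-* method suggested by the Appendix A decision tree."""
    if pure_logic:
        return "TM-API"          # unit/integration test
    if user_interaction:
        if needs_human_judgment or not automation_practical:
            return "TM-MAN"      # visual/UX judgment, or automation impractical
        return "TM-UI"
    if backend_behavior:
        return "TM-API"
    if cross_system:
        return "TM-HYB"
    return "TM-MAN"              # assumed default when no branch applies
```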

Appendix B: Mapping to Existing Test Assets

| Test Asset | Location | Maps To |
|------------|----------|---------|
| PHPUnit Tests | tests/Unit/ | TM-API |
| Behat Features | tests/Behat/ | TM-API, TM-UI |
| Browser Tests (Behat/Mink) | code/features/browser/ | TM-UI |
| Manual Test Cases | Jira | TM-MAN |