Software Test Plan
Version: v1.0.0 | Status: Draft | Scope: Test strategy for all SRS requirements
1. Introduction
1.1 Purpose
This document defines the test strategy for verifying all requirements in the Software Requirements Specification (SRS). It establishes:
- Test methods and their appropriate use
- Selection rules for choosing test methods
- Coverage targets by domain
- Entry and exit criteria
1.2 Scope
| Metric | Count |
|---|---|
| SRS Domains | 29 |
| Rule Specifications | 61 |
| Total Requirements | ~405 |
| Total Acceptance Criteria | ~1,346 |
1.3 References
- SRS Documents: docusaurus/docs/srs/*.md, docusaurus/docs/srs/rules/*.md
- SDS Documents: docusaurus/docs/sds/
- IEEE 829-2008: Standard for Software and System Test Documentation
2. Test Strategy
2.1 Test Levels
| Level | Scope | When Used |
|---|---|---|
| Unit | Individual functions, rule logic | Pure business logic, calculations |
| Integration | Component interactions, API boundaries | Backend workflows, data flow |
| System (E2E) | User workflows through UI | User-facing features |
| Acceptance | Business requirement validation | Final verification |
2.2 Test Types
| Type | Purpose | Tools |
|---|---|---|
| Functional | Verify behavior matches requirements | PHPUnit, Behat, Behat/Mink + headless Chrome (CDP) |
| Regression | Ensure changes don't break existing functionality | CI/CD pipelines |
| Performance | Verify NFRs (response time, throughput) | JMeter, custom scripts |
| Security | Verify authentication, authorization, data protection | Manual review, OWASP checks |
3. Test Method Selection Policy (Normative)
Each acceptance criterion shall be verified using one or more of the test methods defined below.
3.1 Test Methods
| Method ID | Method Type | Description |
|---|---|---|
| TM-API | Automated API Test | Verifies backend behavior via API or service boundary. Includes unit tests and integration tests that don't require a browser. |
| TM-UI | Automated UI Test | Verifies user-facing behavior via browser automation (e.g., Behat/Mink driving headless Chrome via CDP). |
| TM-MAN | Manual Test | Human-executed verification. Used when automation is impractical or verification requires human judgment. |
| TM-HYB | Hybrid | Combination of automated setup with manual verification, or automated verification with manual data preparation. |
Constraint: Test methods describe the verification mechanism, not the execution tooling. Tool selection is implementation-specific and out of scope for this document.
3.2 Selection Rules
| Requirement Characteristic | Preferred Method | Fallback | Rationale |
|---|---|---|---|
| Pure business logic (rules, calculations) | TM-API | TM-MAN | Logic is deterministic; API tests are fast and reliable |
| Backend workflow without UI dependency | TM-API | TM-MAN | No UI needed; test at service boundary |
| Configuration import/export | TM-API | TM-MAN | File I/O operations testable without UI |
| Data transformation/aggregation | TM-API | TM-MAN | Backend logic; results verifiable via API |
| User interaction flow (multi-step UI) | TM-UI | TM-MAN | Requires browser; automation reduces regression risk |
| UI rendering, layout, visual affordances | TM-MAN | TM-UI | Human judgment often required for visual verification |
| Error message wording or UX clarity | TM-MAN | TM-UI | Subjective assessment; automation checks existence only |
| Cross-system integration | TM-HYB | TM-MAN | Complex setup; often needs manual data verification |
| Real-time updates (WebSocket/Pusher) | TM-HYB | TM-MAN | Timing-sensitive; may need manual observation |
| Permission/role-based access | TM-API | TM-UI | Test at API level for speed; UI for user experience |
3.3 Method Selection by Domain Type
| Domain Type | Primary Method | Secondary | Example Domains |
|---|---|---|---|
| Rules (61) | TM-API | TM-MAN | All rule-*.md files |
| Backend Services | TM-API | TM-HYB | configio, fileimport, analyzer |
| UI Features | TM-UI | TM-MAN | notif, filters, tables, globalui |
| Configuration | TM-API | TM-UI | kitcfg, client-config, user-settings |
| Reporting | TM-HYB | TM-MAN | reports, runfile-report, print |
| User Management | TM-API | TM-UI | user-management, audit-log |
3.4 Exceptions
Where the preferred method cannot be used due to technical limitations, the fallback method may be used. Such cases shall be explicitly documented in the domain STD file with:
- Reason for deviation
- Compensating controls (if any)
- Plan to address (if applicable)
4. Coverage Targets
4.1 Default Targets
| Metric | Target | Rationale |
|---|---|---|
| REQ Coverage | 100% | Every requirement must have at least one test |
| AC Coverage | 80% minimum | Not all ACs are independently testable |
4.2 Domain-Specific Targets
| Domain Type | REQ Target | AC Target | Rationale |
|---|---|---|---|
| Rules | 100% | 95% | Pure logic; highly testable |
| Backend Services | 100% | 85% | Core functionality; most ACs testable |
| UI Features | 100% | 75% | Visual ACs may require manual verification |
| Configuration | 100% | 90% | Deterministic behavior; testable |
| NFR | 100% | 70% | Some NFRs require production-like environment |
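The domain-specific thresholds above lend themselves to a mechanical gate. The following is an illustrative Python sketch only (tool selection is out of scope per Section 3.1; the domain keys and `meets_targets` helper are assumptions, not part of any existing tooling):

```python
# Illustrative coverage gate: compares measured coverage percentages
# against the domain-specific targets in Section 4.2.
# Domain keys and the meets_targets API are assumptions for this sketch.

DOMAIN_TARGETS = {
    # domain type: (REQ coverage target %, AC coverage target %)
    "rules": (100, 95),
    "backend": (100, 85),
    "ui": (100, 75),
    "configuration": (100, 90),
    "nfr": (100, 70),
}

def meets_targets(domain: str, req_pct: float, ac_pct: float) -> bool:
    """Return True when both REQ and AC coverage meet the domain targets."""
    req_target, ac_target = DOMAIN_TARGETS[domain]
    return req_pct >= req_target and ac_pct >= ac_target
```

For example, a Rules domain at 100% REQ but 93% AC coverage would fail the gate, since 95% AC coverage is required.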
4.3 Gap Acceptance Criteria
A gap is acceptable only when:
- The AC is untestable in isolation (e.g., "system shall be intuitive")
- Testing requires infrastructure that is not available (documented with a remediation plan)
- The AC is redundant with another AC (documented with reference)
Gaps must be explicitly documented with owner and resolution plan.
5. Test Case Specification Format
5.1 Domain STD Files
Each domain STD file (std-{domain}.md) shall contain:
# STD: {Domain Name}
## Coverage Summary
| REQ ID | Title | ACs | Tests | Coverage | Gaps |
|--------|-------|-----|-------|----------|------|
## Test Cases
### TC-{DOMAIN}-NNN: {Descriptive Title}
**Verifies:** REQ-{DOMAIN}-NNN (AC1, AC2, ...)
**Method:** TM-API | TM-UI | TM-MAN | TM-HYB
**Priority:** Critical | High | Medium | Low
**Preconditions:**
- [Required system state]
**Test Data:**
- [Inputs and expected values]
**Steps:**
1. [Action]
2. [Verification]
**Expected Results:**
- [ ] AC1: [Specific outcome]
- [ ] AC2: [Specific outcome]
**Automation Status:** Automated | Manual | Planned
**Jira:** [Link to test ticket if exists]
5.2 Rule STD Files (Truth Table Format)
Rule STD files shall use decision tables instead of prose:
# STD: {Rule Name}
## Coverage Summary
| REQ ID | Title | Conditions | Test Vectors | Coverage |
|--------|-------|------------|--------------|----------|
## Decision Table: {Requirement}
### Inputs
| Variable | Type | Valid Values |
|----------|------|--------------|
### Test Vectors
| ID | Input1 | Input2 | ... | Expected Output | Covers |
|----|--------|--------|-----|-----------------|--------|
| TV-001 | value | value | ... | outcome | AC1 |
**Method:** TM-API
**Automation:** Parameterized test using above vectors
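The "parameterized test using above vectors" pattern can be sketched as follows. This is an illustrative example only: the rule under test (`discount_applies`) and the vectors are hypothetical, and the real implementation would use the project's own test framework (Section 3.1 leaves tooling implementation-specific):

```python
# Illustrative parameterized test driven by decision-table vectors.
# The rule under test and the vectors are hypothetical; real rule STD
# files supply their own inputs and expected outputs.
import unittest

def discount_applies(quantity: int, is_member: bool) -> bool:
    # Hypothetical rule: discount applies when a member orders 10+ items.
    return quantity >= 10 and is_member

# (ID, Input1, Input2, Expected Output, Covers) -- mirrors the table format.
TEST_VECTORS = [
    ("TV-001", 10, True,  True,  "AC1"),
    ("TV-002",  9, True,  False, "AC2"),
    ("TV-003", 10, False, False, "AC3"),
]

class DiscountRuleTest(unittest.TestCase):
    def test_vectors(self):
        for tv_id, quantity, is_member, expected, covers in TEST_VECTORS:
            # subTest reports each vector's pass/fail independently.
            with self.subTest(vector=tv_id, covers=covers):
                self.assertEqual(discount_applies(quantity, is_member), expected)
```

One parameterized test per decision table keeps the STD vectors and the executable assertions in one-to-one correspondence, so adding a row to the table adds a verified case without new test code.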
6. Entry and Exit Criteria
6.1 Entry Criteria
Testing may begin when:
- SRS requirements are approved (status: Normative)
- Test environment is available and configured
- Test data is prepared or generation scripts are ready
- Domain STD file is complete with test cases defined
6.2 Exit Criteria
Testing is complete when:
- All test cases have been executed
- REQ coverage target met (100%)
- AC coverage target met (domain-specific)
- All Critical and High priority failures resolved
- All gaps documented with owner and plan
- Test results recorded in STR (Software Test Report)
6.3 Suspension Criteria
Testing shall be suspended when:
- A critical defect blocks further testing
- The test environment becomes unavailable
- A requirements change invalidates existing test cases
7. Test Environment
7.1 Hardware Requirements
| Component | Specification |
|---|---|
| Server | Per deployment documentation |
| Client | Modern browser (Chrome, Firefox, Safari) |
7.2 Software Requirements
| Component | Version |
|---|---|
| PHP | 8.x |
| MySQL | 8.x |
| Browser (E2E) | Latest Chrome, Firefox |
7.3 Test Data
- Anonymized production subset for realistic scenarios
- Synthetic data for edge cases and boundary conditions
- Configuration fixtures for rule testing
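Synthetic boundary-condition data can often be generated rather than hand-written. A minimal illustrative sketch (the 255 limit in the usage note is a hypothetical example, not a requirement from the SRS):

```python
def boundary_values(limit: int) -> list[int]:
    """Classic boundary-value set around a limit: well below,
    just inside, at, and just past the boundary."""
    return [0, 1, limit - 1, limit, limit + 1]
```

For a hypothetical maximum-length rule of 255, `boundary_values(255)` yields `[0, 1, 254, 255, 256]`, covering both sides of the boundary in a single fixture.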
8. Deliverables
| Deliverable | Description | Location |
|---|---|---|
| STD Test Plan | This document | std-test-plan.md |
| Domain STD Files | Test specifications per domain | docusaurus/docs/std/domains/std-{domain}.md |
| Rules STD Files | Truth tables for all rules | docusaurus/docs/std/rules/std-rule-{rule}.md |
| Traceability Matrix | REQ-to-Test mapping | unified-traceability-matrix.md |
| Coverage Report | Metrics and gap analysis | unified-coverage-report.md |
9. Roles and Responsibilities
| Role | Responsibility |
|---|---|
| Test Author | Create and maintain STD files |
| Test Executor | Execute tests, record results |
| Developer | Fix defects, provide technical input |
| QA Lead | Review coverage, approve exit criteria |
Appendix A: Test Method Decision Tree
Is the requirement about pure logic/calculations?
├── Yes → TM-API (unit/integration test)
└── No
├── Does it require user interaction?
│ ├── Yes
│ │ ├── Is automation practical?
│ │ │ ├── Yes → TM-UI
│ │ │ └── No → TM-MAN
│ │ └── Does it require human judgment (visual, UX)?
│ │ └── Yes → TM-MAN
│ └── No
│ ├── Is it backend/API behavior?
│ │ └── Yes → TM-API
│ └── Is it cross-system integration?
│ └── Yes → TM-HYB
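The tree above can also be encoded as a small selection helper, useful for tooling or review checklists. This is an illustrative sketch; the trailing TM-MAN default for requirements matching no branch is an assumption, consistent with the fallback column in Section 3.2:

```python
# Illustrative encoding of the Appendix A decision tree.
# The trailing TM-MAN default is an assumption for requirements
# that match none of the branches.
def select_test_method(pure_logic: bool,
                       user_interaction: bool,
                       automation_practical: bool,
                       needs_human_judgment: bool,
                       backend_behavior: bool,
                       cross_system: bool) -> str:
    if pure_logic:
        return "TM-API"  # unit/integration test
    if user_interaction:
        if needs_human_judgment or not automation_practical:
            return "TM-MAN"
        return "TM-UI"
    if backend_behavior:
        return "TM-API"
    if cross_system:
        return "TM-HYB"
    return "TM-MAN"  # assumed default when no branch applies
```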
Appendix B: Mapping to Existing Test Assets
| Test Asset | Location | Maps To |
|---|---|---|
| PHPUnit Tests | tests/Unit/ | TM-API |
| Behat Features | tests/Behat/ | TM-API, TM-UI |
| Browser Tests (Behat/Mink) | code/features/browser/ | TM-UI |
| Manual Test Cases | Jira | TM-MAN |