Developer Testing Guide
1. Overview
Purpose: Reference manual for Behat step definitions, parameters, and workflow patterns
Audience: Developers, QA Engineers, AI agents (as lookup reference)
This guide is the reference manual — step definitions, parameter tables, regex patterns, and complete workflow examples. For how to create tests (creation workflow, gotchas, config pitfalls, execution commands), see Behat Test Creation Guide.
2. Testing Stack
| Component | Technology | Location |
|---|---|---|
| BDD Framework | Behat 3.x | code/features/ |
| Step Definitions | PHP 8 Attributes | code/features/bootstrap/ |
| Test Runner | PHPUnit 12 | Via Behat integration |
| CI Pipeline | GitHub Actions | .github/workflows/behat.yml |
3. Step Definitions Reference
This section documents all 25 available step definitions organized by functional category.
3.1 Configuration Steps
Step 1: Load Configuration
Given The configuration :configFileName is loaded
Parameters:
configFileName: Filename including extension (e.g., "example.xlsx")
Description: Uploads a configuration sheet to initialize the test environment.
Shared Configuration Optimization:
Tag the Feature with @USE_SAME_CONFIG to share one configuration across scenarios within a test execution. When used, subsequent configuration load calls are skipped with the warning: "this config import is ignored, instead using shared config."
Example:
Given The configuration "test-config.xlsx" is loaded
Step 25: Set Client Configuration Flag
Given the client configuration :configName is set to :configValue
Parameters:
configName: The client configuration key name (e.g., use_sample_type, use_extraction_instruments)
configValue: The value to set (e.g., "true", "false")
Description: Toggles a client configuration flag after config import. This is useful for testing runtime behavior that the import validator would otherwise block. For example, use_sample_type cannot be set to false at import time because SpecimenNameValidator rejects specimen-tagged rows, but this step can toggle it after a successful import.
Added: TI-003 resolution (Session 16)
Example:
Given The configuration "quest-v3.xlsx" is loaded
And the client configuration "use_sample_type" is set to "false"
When Upload the run file "run1.json"
3.2 Run File Steps
Step 2: Upload Run File
When Upload the run file :fileName
Parameters:
fileName: Filename including extension (e.g., "example.json")
Description: Uploads a JSON run file for processing.
Exceptions:
No Run found for run name: {run name} - when no such run file is uploaded
Example:
When Upload the run file "ENT_GAPD_134.json"
Step 3: Open Run File
When Open the run file :runName
Parameters:
runName: Name of run (from run_info), not the filename
Description: Opens an existing run file by its run name.
Exceptions:
No Run found for run name: {run name} - when no such run file is uploaded
Example:
When Open the run file "ENT_GAPD_134"
Step 7: Re-analyse Run File
When Re analyse the run file
Parameters: None
Description: Re-analyses an opened run file. Required after edits or resolutions to apply changes.
Exceptions:
No run file is open - when no run file is open prior to action
Example:
When Edit well "A1" with property "extraction_instrument" and value "QIASymphony"
And Re analyse the run file
Step 8: Assert Run Status
Then The run file :runName should contains run status :runStatus
Parameters:
runName: Name of run (from run_info)
runStatus: Expected resolution status
Description: Asserts the run file has the expected resolution status.
Exceptions:
No Run found for run name: {run name} - when no such run file is uploaded
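Example (illustrative pairing only: the run name is reused from Step 3, and the status is one value from the Run Status table in section 11.6 — any documented status can be asserted):
Then The run file "ENT_GAPD_134" should contains run status "Reanalysis required (Westgard)"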
Step 22: Archive Run File
When Archive the :fileName
Parameters:
fileName: Name of the run file to archive
Description: Archives a run file.
Preconditions:
- Run must exist
- Run must be accessible to user
- Archive tag must be available
Example:
When Archive the "ENT_GAPD_134.json"
3.3 Well Edit Steps
Step 4: Edit Well Property
When Edit well :wellNumber with property :property and value :value
Parameters:
wellNumber: Well identifier (e.g., "A1", "B2")
property: Property name to edit (see table below)
value: New value for the property
Description: Edits a property of a specific well.
Preconditions:
- A run file must be opened before this action
- The specified well must exist in the run
- The specified property must be in available properties
- The run must be re-analysed for the modification to take effect
Available Properties:
| Property | Validation Rules |
|---|---|
extraction_instrument | Validates against role (non-extraction wells cannot have e-instrument). Must be existing Extraction Instrument Name. |
accession | Well must be a patient well (cannot edit for Control well). Must be string, max 191 characters. |
is_crossover | Must be a boolean value (true, false, 0, 1) |
crossover_role | Well must be a crossover well. Must be name of existing Role, max 191 characters. |
extraction_date | "Allow Edit Extraction Information" must be enabled in Client Configuration. Well must be an Extraction Well. Date cannot precede runfile created date. |
sample_role | Must be valid role alias from role_mappings table for the well's mix. |
Exceptions:
No run file is open
Well {well number} does not exist in the run.
Editing :property is not supported
Cannot edit extraction instrument for non extraction well
Extraction Instrument 'ABC' not found
Cannot edit accession for Control well
invalid accession
The crossover value must be a boolean
invalid crossover role value
Only crossover well can edit Crossover Roles
Crossover Role 'ABC' not found
Not allowed Edit Extraction Information in Client Configurations
Well C1 is not an extraction well
Extraction Date cannot precede runfile created date
Sample role 'example' is not a valid sample role.
Example:
When Open the run file "ENT_GAPD_134_2.json"
And Edit well "A1" with property "extraction_instrument" and value "QIASymphony"
And Re analyse the run file
3.4 Resolution Steps
Step 5: Apply Resolution to Well
When Apply resolution to well :wellNumber with :resolution
Parameters:
wellNumber: Well identifier (e.g., "A1", "B1")
resolution: Use the dropdown message if the resolution is a dropdown, otherwise the resolution message
Description: Applies a resolution to a well. The run must be re-analysed for the modification to take effect.
Exceptions:
No run file is open
Well {well number} does not exist in the run.
Resolution is not allowed for the selected well
Example:
When Open the run file "example.json"
And Apply resolution to well "B1" with "RXT all"
And Re analyse the run file
Step 6: Apply Resolution with Individual Curve Result
When Apply resolution to well :wellNumber with :resolution and :type to observation :observation with :value
Parameters:
wellNumber: Well identifier (e.g., "A2")
resolution: Resolution message (e.g., "Set individual curve results")
type: Must be "Manual classification" or "Preferred CT provider"
observation: Target/observation name (e.g., "HDV")
value: Must be "Neg", "Pos", "MACHINE", or "DXAI"
Description: Applies a resolution with an individual curve result. The resolution must have type DISCREPANT OBSERVATIONS. Only one observation can be resolved at a time.
Exceptions:
No run file is open
Well {well number} does not exist in the run.
Observation '{target name}' does not exist in the Well '{well number}'.
Resolution is not allowed for the selected well
Individual Curve results are not allowed with selected resolution: '{resolution message}'
Incorrect observation curve resolution type: '{type}'
Incorrect observation curve resolution value: '{value}'
Warning: 'Two Step Manage Result Workflow' is Selected! But not supported. 'One step workflow' is used instead. - shown when Two Step Workflow is selected
Example:
When Open the run file "example.json"
And Apply resolution to well "A2" with "Set individual curve results" and "Manual classification" to observation "HDV" with "Neg"
And Re analyse the run file
3.5 Well Assertion Steps
Step 9: Assert Well Outcome
Then well :wellNumber should have :outcomeMessage outcome
Parameters:
wellNumber: Well identifier
outcomeMessage: Outcome message (not code), e.g., "Detected", "Positive", "Negative"
Example:
Then well "A1" should have "Detected" outcome
Step 10: Assert Well Mix
Then well :wellNumber should have :mixName mix
Parameters:
wellNumber: Well identifier
mixName: Name of the mix (e.g., "HEV")
Example:
Then well "A1" should have "HEV" mix
Step 11: Assert Well Is Crossover
Then well :wellNumber should have :isCrossover is crossover
Parameters:
wellNumber: Well identifier
isCrossover: Must be boolean ("true" or "false")
Example:
Then well "C3" should have "false" is crossover
Step 12: Assert Well Crossover Role
Then well :wellNumber should have :crossoverRoleAlias is crossover role
Parameters:
wellNumber: Well identifier
crossoverRoleAlias: Role name (e.g., "CC1")
Example:
Then well "C3" should have "CC1" is crossover role
Step 13: Assert Well Extraction Date
Then well :wellNumber should have :extractionDate extraction date
Parameters:
wellNumber: Well identifier
extractionDate: Date value (e.g., "20240901")
Example:
Then well "C3" should have "20240901" extraction date
Step 14: Assert Well Sample Role
Then well :wellNumber should have :sampleRole sample role
Parameters:
wellNumber: Well identifier
sampleRole: role_alias from role_to_target_mappings (e.g., "patient")
Example:
Then well "C3" should have "patient" sample role
Step 15: Assert Well Sample Name
Then well :wellNumber should have :sampleName sample name
Parameters:
wellNumber: Well identifier
sampleName: role_alias for controls, accession for patients
Example:
Then well "C3" should have "123" sample name
Step 16: Assert Well Extraction Instrument
Then well :wellNumber should have :extractionInstrumentName extraction instrument
Parameters:
wellNumber: Well identifier
extractionInstrumentName: Name from extraction instrument (e.g., "E01")
Example:
Then well "A1" should have "E01" extraction instrument
Step 17: Assert Well Batch Number
Then well :wellNumber should have :batchNumber batch number
Parameters:
wellNumber: Well identifier
batchNumber: Batch number of well
Example:
Then well "A1" should have "1" batch number
Step 18: Assert Well Specimen Name
Then well :wellNumber should have :specimenName specimen name
Parameters:
wellNumber: Well identifier
specimenName: Name of specimen (e.g., "plasma")
Example:
Then well "A1" should have "plasma" specimen name
Step 23: Assert Well Accession
Then well :wellNumber should have :accession accession
Parameters:
wellNumber: Well identifier
accession: Expected accession value for the well
Example:
Then well "A1" should have "ACC-12345" accession
Notes: Asserts the accession property of a well matches the expected value. Only applicable to patient wells (control wells do not have accessions). Useful for verifying accession edits via Step 4 and for ANALYZER-related testing where accession validation is significant.
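The note above can be exercised end-to-end by pairing Step 4 with this assertion; a sketch, assuming a hypothetical run file and accession value:
When Open the run file "ENT_GAPD_134_2.json"
And Edit well "A1" with property "accession" and value "ACC-12345"
And Re analyse the run file
Then well "A1" should have "ACC-12345" accession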
Common Exceptions for Well Assertion Steps:
No run file is open - when no run file is open prior to action
Well {well number} does not exist in the run. - when the specified well doesn't exist
3.6 Observation Assertion Steps
Step 19: Assert Observation Final Classification
Then well :wellNumber observation :targetName should have :finalCls final cls
Parameters:
wellNumber: Well identifier
targetName: Target/observation name
finalCls: Final classification value (e.g., "Neg", "Pos")
Example:
Then well "A1" observation "T1" should have "Neg" final cls
Step 20: Assert Observation Final Ct
Then well :wellNumber observation :targetName should have :finalCt final ct
Parameters:
wellNumber: Well identifier
targetName: Target/observation name
finalCt: Final Ct value (e.g., "31")
Example:
Then well "A1" observation "T1" should have "31" final ct
Step 21: Assert Observation Quantity
Then well :wellNumber observation :targetName should have :quantity quantity
Parameters:
wellNumber: Well identifier
targetName: Target/observation name
quantity: Full quantity value (e.g., "4234234")
Example:
Then well "A1" observation "T1" should have "4234234" quantity
Preconditions for Observation Assertion Steps:
- A run file must be opened before this action
- The specified well must exist in the run
- The specified target name must exist within the well as observation
Exceptions:
No run file is open
Well {well number} does not exist in the run.
Observation 'T1' does not exist in the Well 'A1'.
3.7 Run Target Assertion Steps
Step 24: Assert Run Target Error
Then run target :targetName in mix :mixName should have :errorCode target error
Parameters:
targetName: Target name within the run target (e.g., "NOR1", "IC")
mixName: Mix name associated with the target (e.g., "NOR1", "HEV")
errorCode: Expected error code string (e.g., "BAD_EFFICIENCY", "INSUFFICIENT_STANDARD_CONTROLS")
Example:
Then run target "NOR1" in mix "NOR1" should have "BAD_EFFICIENCY" target error
Notes: Critical for ERRORCODES gap testing. Reads error_codes from RunTarget and asserts a specific error code is present. The step looks up run targets by matching both target.target_name and target.mix.mix_name, then checks the error_codes array for the specified code.
Preconditions:
- A run file must be opened before this action
- The specified target must exist in the run's run_targets for the given mix
Exceptions:
No run file is open
Run target '{targetName}' for mix '{mixName}' does not exist in the run.
Run target '{targetName}' in mix '{mixName}' does not have '{errorCode}' target error. Actual target errors: [...]
4. Gherkin Syntax Conventions
4.1 Feature File Structure
@FEATURE_TAG_1 @FEATURE_TAG_2
Feature: Internal operations
In order to stay secret
As a secret organization
We need to be able to erase past agents' memory
Background:
Given there is agent "A"
And there is agent "B"
Scenario: Erasing agent memory
Given there is agent "J"
And there is agent "K"
When I erase agent "K"'s memory
Then there should be agent "J"
But there should not be agent "K"
Scenario Outline: Erasing other agents' memory
Given there is agent "<agent1>"
And there is agent "<agent2>"
When I erase agent "<agent2>"'s memory
Then there should be agent "<agent1>"
But there should not be agent "<agent2>"
Examples:
| agent1 | agent2 |
| D | M |
4.2 Scenario vs Scenario Outline
- Scenario: Use for single test cases with fixed data
- Scenario Outline: Use when repeating the same steps with different data values
4.3 Using Datasets (Examples)
- Include a dataset when repeating tests with multiple example data (2+ rows)
- Do NOT include dataset when you have only 1 row (no repeat tests needed)
Scenario Outline: Verify outcome for different wells
When Upload the run file "example.json"
Then well <well> should have <outcome> outcome
Examples:
| well | outcome |
| A1 | Positive |
| A2 | Negative |
5. Jira Integration
| Jira Element | Gherkin Element |
|---|---|
| Test execution card title | Feature |
| Test card title | Scenario or Scenario Outline |
| Dataset | Examples |
| Execution card labels | Feature Tags |
IMPORTANT: Do NOT include Feature: or Scenario: keywords in Jira test cards - they are added automatically during export.
6. Feature Tags
| Tag | Required | Level | Effect |
|---|---|---|---|
@REQ_BT-xxxx | Yes | Feature | Links all scenarios to a Jira requirement |
@TEST_BT-xxxx | Yes | Scenario | Links scenario to a specific Jira test case |
@USE_SAME_CONFIG | Recommended | Feature (line 1) | Share configuration across all scenarios. Must be on line 1 of the file, before Feature:. Scenario-level placement is silently ignored. All-or-nothing: when present, only the first config import runs; all others are skipped. Files with multiple configs must NOT use this tag — split into separate files instead. |
@COMBINED_OUTCOME | No | Scenario | Test involves outcomes across multiple runs/mixes |
@UNIQUE | No | Scenario | Test uses unique/isolated test data |
@UNIVERSAL | No | Scenario | Test applies universally across configurations |
@EXAMPLE_TEST | No | Scenario | Example/demonstration test (not in core regression) |
@V3_0_0 | Conditional | Scenario | Scenario with assertions correct for v3.0.0 behavior. Excluded when running latest version (default). |
@V3_0_1 | Conditional | Scenario | Scenario with assertions correct for v3.0.1 behavior. Excluded when running --target-version=3.0.0. |
Tag Usage Example:
@REQ_BT-4051 @USE_SAME_CONFIG
Feature: Combined outcomes for multiple mixes
@TEST_BT-5707 @COMBINED_OUTCOME
Scenario Outline: Combined outcome in two runs
Given The configuration "Viracor_PROD.xlsx" is loaded
When Upload the run file "<runFile1>"
And Open the run file "<runFile1>"
Then well "A1" should have "<outcome>" outcome
Examples:
| runFile1 | outcome |
| NORO_123.json | Not Detected |
| NORO_124.json | Detected |
Tag Combinations:
@REQ_BT-4051 @USE_SAME_CONFIG
Feature: Combined outcomes for multiple mixes
# All scenarios below use the same config — @USE_SAME_CONFIG on line 1
@TEST_BT-5707 @REQ_BT-4070
Scenario Outline: Test with multiple requirements
Given The configuration "Viracor_PROD.xlsx" is loaded # ← LOADED (first)
...
@TEST_BT-5300
Scenario Outline: Shared config test
Given The configuration "Viracor_PROD.xlsx" is loaded # ← SKIPPED (reused)
...
@TEST_BT-1234 @UNIQUE @UNIVERSAL
Scenario: Rare universal edge case
Given The configuration "Viracor_PROD.xlsx" is loaded # ← SKIPPED (reused)
...
@USE_SAME_CONFIG placed on a scenario line is silently ignored — the code in BaseFeatureContext.php:122 only checks feature-level tags. If you need different configs, use separate .feature files.
6.1 Multi-Version Testing
This is a multi-tenant medical device SaaS where clients run different versions on independent upgrade cycles. The test suite must support running against any deployed version to investigate client bug reports.
Tagging scheme:
| Tag | Meaning |
|---|---|
| (no version tag) | Universal — scenario passes on all supported versions |
@V3_0_0 | Assertions correct for v3.0.0 behavior only |
@V3_0_1 | Assertions correct for v3.0.1 behavior only |
Rules:
- Tests that pass on all versions get no version tag (the default/majority case).
- When behavior changes between versions, both versions of the test exist with the same BT key, TV tags, and fixture files — only the assertions differ.
- The --target-version flag on behat-optimizer.py controls which version set runs:
  - --target-version=3.0.0 excludes @V3_0_1 scenarios (runs v3.0.0 + universal)
  - --target-version=3.0.1 excludes @V3_0_0 scenarios (runs v3.0.1 + universal)
  - Default (no flag) = latest version = excludes @V3_0_0
Creating version-split scenarios:
- Copy the scenario (or Scenario Outline row) with the current assertions
- Tag the copy @V3_0_0
- Update the original with the new version's assertions
- Tag the original @V3_0_1
- Both copies share the same @TEST_BT-xxxx key and @TV-* tags
Example:
@TEST_BT-5658 @COMBINED_OUTCOME @UNIVERSAL @V3_0_0
Scenario: SYSINH combined outcome (v3.0.0)
# ... assertions for v3.0.0 behavior ...
@TEST_BT-5658 @COMBINED_OUTCOME @UNIVERSAL @V3_0_1
Scenario: SYSINH combined outcome (v3.0.1)
# ... assertions for v3.0.1 behavior (e.g., archive dependency guard) ...
Running for a specific version:
# Run tests for v3.0.0 deployment
python3 tests/scripts/behat-optimizer.py run --workers 10 --target-version=3.0.0
# Run tests for latest version (default)
python3 tests/scripts/behat-optimizer.py run --workers 10
7. Test Organization
7.1 Directory Structure
code/
├── features/
│ ├── bootstrap/
│ │ ├── BaseFeatureContext.php # Base context with hooks
│ │ └── FeatureContext.php # Step definitions
│ └── *.feature # Gherkin feature files
├── behat.yml # Behat configuration
└── .github/workflows/behat.yml # CI configuration
7.2 Test File Organization
- Feature files: code/features/*.feature
- Context classes: code/features/bootstrap/
7.3 Feature File Naming Convention
{priority}_{JIRA_KEY(s)}.feature
Examples:
1_BT-5855,BT-5854,BT-5842,BT-5840,BT-5839(+218).feature
11_BT-5201.feature
7.4 Support Files
Each test scenario typically requires:
- Configuration file (.xlsx) - kit configuration in support_files/ or shared configs
- Run file(s) (.json) - PCR run data in support_files/BT-xxxx/
Ensure support files are placed in the correct directory matching the Jira issue key. See the Run File Schema Guide for JSON run file structure.
7.5 File Duplication Rule
IMPORTANT: Each test (e.g., BT5001) needs its own folder with all required files. Even if a file is reused for multiple tests (e.g., BT5001, BT5002, BT5003), it must be duplicated in each directory.
Example: All files for test BT5001 should be in Test Files / Behat / BT5001.
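A sketch of the resulting layout (file names hypothetical), following the tree style of section 7.1:
Test Files / Behat /
├── BT5001/
│   ├── shared-config.xlsx   # physical copy
│   └── run1.json
├── BT5002/
│   ├── shared-config.xlsx   # duplicated, not referenced from BT5001
│   └── run1.json
└── BT5003/
    ├── shared-config.xlsx   # duplicated again
    └── run1.json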
8. Test Repetition Guidelines
- Same steps + different data = use the same test with a new dataset row, and broaden the scenario description to cover the dataset
- Different steps (e.g., upload two runs, check results as you progress, then reanalyse the first one and check results) = new test
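The first rule can be sketched as a single Scenario Outline absorbing a second data row (file names and outcomes hypothetical; the steps are Steps 2 and 9):
Scenario Outline: Verify outcome across run files
When Upload the run file "<runFile>"
Then well "A1" should have "<outcome>" outcome
Examples:
| runFile | outcome |
| run_a.json | Detected |
| run_b.json | Not Detected |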
9. Execution
For full execution commands (environment variables, DB pool, parallel execution), see Behat Test Creation Guide.
CI Pipeline: Tests run automatically via GitHub Actions on push/PR. See .github/workflows/behat.yml.
Regression Testing Best Practice: Run the same final regression Behat suite multiple times in parallel before marking regression testing complete. This isolates transient issues and avoids false passes.
10. Bug Reporting Requirements
- Bug reports must include standalone tests: any application bug reported to Dev should include the minimum Behat Test Jira issue cards needed to fix the bug and verify the feature works correctly. These cards should be standalone and must NOT require a particular Execution issue to recreate.
- Report Behat system failures: if the issue cannot be recreated with a standalone test but does recreate with the Execution, this indicates a failure in the Behat testing system and should be reported along with both the execution and standalone cards.
11. Parameter Reference
This section provides exhaustive reference tables for values used in step definition parameters. For domain terminology, see the Testing Data Dictionary.
11.1 Well Positions
Wells are identified by a letter (A-H) followed by a number (1-12), representing positions on a standard 96-well PCR plate.
A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 A11 A12
B1 B2 B3 B4 B5 B6 B7 B8 B9 B10 B11 B12
C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12
D1 D2 D3 D4 D5 D6 D7 D8 D9 D10 D11 D12
E1 E2 E3 E4 E5 E6 E7 E8 E9 E10 E11 E12
F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12
G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
H1 H2 H3 H4 H5 H6 H7 H8 H9 H10 H11 H12
Common Usage Patterns:
- Column 1-2: Often standards/controls (S2, S4, S6, NEC, POS)
- Remaining columns: Patient samples
- Row G-H: Sometimes additional controls or problematic samples in test scenarios
11.2 Resolution Types
Resolutions are applied to wells with errors to specify how to handle them.
Basic Resolutions
| Resolution Type | Description | Common Use Case |
|---|---|---|
Re-extract all samples | Re-extract and re-amplify all samples | Extraction control failure, contamination |
Re-amplify all samples | Re-amplify without re-extraction | Amplification failure, control failures |
Re-amplify positive samples | Re-amplify only positive detections | Standards failure with positive patient samples |
Exclude this well from being exported | Mark well to skip export | Permanent exclusion, invalid data |
Westgard Rule Overrides
| Resolution Type | Description |
|---|---|
Ignore 12S error | Override Westgard 1:2S rule failure |
Ignore 13S error | Override Westgard 1:3S rule failure |
Ignore 22S error | Override Westgard 2:2S rule failure |
Ignore 22S13S error | Override combined Westgard 2:2S and 1:3S rule failure |
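These overrides are applied with Step 5 and, like any resolution, require re-analysis; a sketch with hypothetical file and well names:
When Open the run file "example.json"
And Apply resolution to well "B1" with "Ignore 13S error"
And Re analyse the run file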
Advanced Resolutions
| Resolution Type | Parameters | Description |
|---|---|---|
Set individual curve results with Manual classification | Observation, Value (Pos/Neg) | Override classification for discrepancy |
Set individual curve results with Preferred CT provider | Observation, Provider | Select CT source for discrepancy |
11.3 Outcome Values
Outcomes represent the final result/status of a well.
Success Outcomes
| Outcome | Meaning |
|---|---|
Detected | Target detected |
Not Detected | Target not detected (negative) |
Not detected | Variant capitalization |
Not detected (AUTO) | Auto-classified negative |
Control Passed | Control well passed QC |
Control Passed. | Variant with period |
Crossover Passed | Crossover validation passed |
Pass | Generic pass status |
Quantitative Outcomes
| Outcome | Meaning |
|---|---|
Detected:<430 | Detected with quantity less than 430 |
96,700 | Quantity value |
49,200 | Quantity value |
9,999 | Formatted quantity |
999,999 | Formatted quantity |
Review/Action Required Outcomes
| Outcome | Meaning |
|---|---|
RPT | Repeat/re-amplify required |
RXT | Re-extract required |
Fail | Well failed |
Inconclusive | Result inconclusive |
Inconclusive (AUTO) | Auto-classified inconclusive |
Result is inconclusive | Variant phrasing |
Result is inconclusive (AUTO) | Auto-classified variant |
Well excluded | Well manually excluded |
Flag as High CT - do not export without review | High CT warning |
Error Outcomes - Control Failures
| Outcome | Category |
|---|---|
The positive control has failed. | Positive control failure |
NEC failure. | Negative extraction control failure |
A previous Westgard control has failed for this mix. | Historical Westgard failure |
Error Outcomes - Westgard Rules
| Outcome | Rule |
|---|---|
This control has failed the Westgard 1:2S rule. | 1 control > 2 SD |
This control has failed the Westgard 1:3S rule. | 1 control > 3 SD |
This control has failed the Westgard 2:2S rule. | 2 consecutive controls > 2 SD |
This control has failed the Westgard 1:3S 2:2S rule. | Combined 1:3S and 2:2S |
This control has failed the Westgard 7T rule. | 7 consecutive on same side of mean |
This control has failed the Westgard 7T and 1:3S rule. | Combined 7T and 1:3S |
Error Outcomes - Standards/Calibration
| Outcome | Meaning |
|---|---|
Standards have failed. The error must be resolved before further evaluation of wells | Standard curve failed |
In-run standards failed. | Standards in current run failed |
BAD_EFFICIENCY | PCR efficiency outside acceptable range |
BAD_GRADIENT | Standard curve gradient unacceptable |
INSUFFICIENT_STANDARD_CONTROLS | Not enough valid standards |
STANDARD_WITHOUT_QUANT | Standard missing quantification |
Error Outcomes - Inhibition
| Outcome | Meaning |
|---|---|
The IC is inhibited | Internal control inhibited |
Internal Control is inhibited. Report 'INHN'. | IC inhibition with report code |
SYSTEMIC_INHIBITON_DETECTED | Systemic inhibition across run |
RXT. Multiple internal control failures. Re-extract run. Possible systemic failure. | Multiple IC failures |
Cannot run IC rule due to no negative controls in run | IC rule validation impossible |
Error Outcomes - Discrepancies
| Outcome | Meaning |
|---|---|
There are one or more classification discrepancies. | Instrument vs pcr.ai mismatch |
Review required. The instrument amplification classification of this patient does not match the pcr.ai classification. | Patient discrepancy |
Review required. The instrument amplification classification of this control does not match the pcr.ai classification. | Control discrepancy |
Review required. The amplification classification of this well does not match its previous classification. | Repeat discrepancy - classification |
Review required. The quantity of this well is not within 0.5Log of its previous quantity. | Repeat discrepancy - quantity |
Error Outcomes - Associated Controls
| Outcome | Meaning |
|---|---|
An associated extraction control has failed. That error must be resolved before this well can be exported. | Linked extraction control error |
This well is missing the required associated extraction controls. Review required. | Missing extraction control link |
An associated control has failed. That error must be resolved before this well can be exported. | Generic associated control failure |
Error Outcomes - Missing Mixes
| Outcome | Mix Referenced |
|---|---|
Mixes missing or in error. Reanalyze this well after uploading NOR1 or resolving error | NOR1 |
Mixes missing or in error. Reanalyze this well after uploading NOR2 or resolving error | NOR2 |
Mixes missing or in error. Reanalyze this well after uploading GAPD or resolving error | GAPD |
Error Outcomes - Special Rules
| Outcome | Meaning |
|---|---|
NOR rule failure. Re-extract sample | Norovirus-specific rule failure |
'The control outside of the expected range. ' | Control quantity out of range |
'Multiple QC errors on run. ' | Multiple QC failures |
Well should be replated due to multiple qc errors. | Replating recommendation |
11.4 Mix Names
Mixes define the analyte/target assays tested in each well.
| Mix | Full Name | Category |
|---|---|---|
NOR1 | Norovirus Genogroup I | Viral |
NOR2 | Norovirus Genogroup II | Viral |
CMV | Cytomegalovirus | Viral |
BKV | BK Virus | Viral |
EBV | Epstein-Barr Virus | Viral |
HDV | Hepatitis D Virus | Viral |
HEV | Hepatitis E Virus | Viral |
EBVQ | EBV Quantitative | Viral (Quant) |
GAPD | GAPDH | Control/IC |
ENT | Enterovirus | Viral |
Notes:
- Most tests use one or two mixes per well
- Norovirus tests commonly use NOR1+NOR2 combination
- GAPD often serves as internal control
- "Q" suffix typically indicates quantitative assay variant
11.5 Sample Roles
Sample roles classify the purpose of each well.
| Role | Full Name | Description |
|---|---|---|
Patient | Patient Sample | Clinical specimen |
NEC | Negative Extraction Control | Extraction negative control |
POS | Positive Control | Positive amplification control |
HI POS | High Positive Control | High-level positive control |
LO POS | Low Positive Control | Low-level positive control |
S2 | Standard 2 | Calibration standard (level 2) |
S4 | Standard 4 | Calibration standard (level 4) |
S6 | Standard 6 | Calibration standard (level 6) |
CC1 | Crossover Control 1 | Crossover validation control |
CC2 | Crossover Control 2 | Crossover validation control |
Specialized Roles (EBV Tests):
| Role | Description |
|---|---|
EBVPC | EBV Positive Control |
EBVNC | EBV Negative Control |
QEBVLPC | EBV Quantitative Low Positive Control |
QEBVNC | EBV Quantitative Negative Control |
11.6 Run Status Values
Run-level status messages indicating overall run state.
Reanalysis Required Statuses
| Status | Trigger |
|---|---|
Reanalysis required (Missing mixes uploaded) | Missing mix subsequently uploaded |
Reanalysis required (Westgard) | Westgard rule resolution applied |
Reanalysis required (Missing mixes) | Waiting for missing mix upload |
Archive-Related Statuses
| Status | Export State |
|---|---|
Results for wells in this run may be affected by recently archived runs. Reanalysis required. ALL_WELLS_READY_FOR_EXPORT | All wells ready |
Results for wells in this run may be affected by recently archived runs. Reanalysis required. NO_EXPORT_ERRORS_TO_RESOLVE | No export blockers |
Results for wells in this run may be affected by recently archived runs. Reanalysis required. SOME_WELLS_READY_FOR_EXPORT_WITH_ERRORS_TO_RESOLVE | Partial export ready |
Edit-Related Statuses
| Status | Export State |
|---|---|
Results for wells in this run may be affected by recently edited wells. Reanalysis required. All wells ready for export | All wells ready |
Results for wells in this run may be affected by recently edited wells. Reanalysis required. No export - errors to resolve | Errors present |
Resolution-Related Statuses
| Status | Export State |
|---|---|
Results for wells in this run may be affected by recently resolved wells. Reanalysis required. All wells ready for export | All wells ready |
Results for wells in this run may be affected by recently resolved wells. Reanalysis required. No export - errors to resolve | Errors present |
Error Statuses
| Status | Meaning |
|---|---|
No Resolution No export - errors to resolve | Errors present, no resolution applied |
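Any of these statuses can be asserted with Step 8; a sketch of the edit-related case (run name and edit values hypothetical):
When Open the run file "ENT_GAPD_134"
And Edit well "A1" with property "accession" and value "ACC-99"
Then The run file "ENT_GAPD_134" should contains run status "Results for wells in this run may be affected by recently edited wells. Reanalysis required. All wells ready for export"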
11.7 Common Observations/Targets
Observation names used in observation-level assertions.
| Observation | Description |
|---|---|
| NOR1 | Norovirus GI |
| NOR2 | Norovirus GII |
| CMV | Cytomegalovirus |
| BKV | BK Virus |
| IC | Internal Control |
| QIPC | Quantitative Internal Positive Control |
| QBK | Quantitative BK |
| QBKQ | BK Quantitative |
| QMPXV | Monkeypox Virus Quantitative |
| QOPXV | Orthopoxvirus Quantitative |
| GAPDH | GAPDH control |
| HDV | Hepatitis D Virus |
| HEV | Hepatitis E Virus |
| ZIKA | Zika Virus |
12. Workflow Patterns
Complete workflow examples demonstrating typical test patterns.
12.1 Basic Detection Test
Upload a run file, verify the patient sample is detected, and confirm the control passed:

```gherkin
Scenario: Basic NOR1 Detection
  Given The configuration "Viracor v3.xlsx" is loaded
  When Upload the run file "NORO_101.json"
  And Open the run file "NORO_101.json"
  Then well "A1" should have "NOR1" mix
  And well "A1" should have "Patient" sample role
  And well "A1" observation "NOR1" should have "Pos" final cls
  And well "A1" observation "NOR1" should have "31" final ct
  And well "A1" should have "Detected" outcome
  And well "G1" should have "POS" sample role
  And well "G1" should have "Control Passed" outcome
```
Pattern Notes:
- Configuration loaded first
- Upload and open run file
- Verify well properties (mix, role)
- Check observation-level results (cls, ct)
- Assert final outcome
- Verify controls
12.2 Control Failure with Resolution
A positive control fails; apply the re-extract resolution and verify the RXT outcome:

```gherkin
Scenario: POS Control Failure Resolution
  Given The configuration "Viracor v3.xlsx" is loaded
  When Upload the run file "040122-0000234-NOR1NOR2.json"
  And Open the run file "040122-0000234-NOR1NOR2.json"
  Then well "G1" should have "NOR1" mix
  And well "G1" should have "The positive control has failed." outcome
  And well "H1" should have "An associated extraction control has failed. That error must be resolved before this well can be exported." outcome
  When Apply resolution to well "G1" with "Re-extract all samples"
  And Re analyse the run file
  Then well "G1" should have "RXT" outcome
  And well "H1" should have "RXT" outcome
```
Pattern Notes:
- Initial state shows control failure
- Associated wells show cascade error
- Resolution applied to root cause well
- Reanalysis triggered
- All affected wells now show RXT (re-extract) outcome
12.3 Missing Mix Multi-Run Workflow
A combined-outcome test with a missing mix: upload the missing mix, then reanalyze:

```gherkin
Scenario: Missing Mix NOR2 Upload
  Given The configuration "Viracor v3.xlsx" is loaded
  When Upload the run file "051222-0000443-NOR1NOR2.json"
  And Open the run file "051222-0000443-NOR1NOR2.json"
  Then well "H1" should have "NOR1" mix
  And well "H1" should have "Mixes missing or in error. Reanalyze this well after uploading NOR2 or resolving error" outcome
  When Upload the run file "051222-0000443B-NOR2.json"
  Then The run file "051222-0000443-NOR1NOR2.json" should contains run status "Reanalysis required (Missing mixes uploaded)"
  When Open the run file "051222-0000443-NOR1NOR2.json"
  And Re analyse the run file
  Then well "H1" should have "Not Detected" outcome
```
Pattern Notes:
- Initial upload incomplete (missing NOR2 mix)
- Well shows missing mix error
- Second run file uploaded with missing mix
- First run file status changes to reanalysis required
- Navigate back to first run
- Reanalyze
- Well now has final outcome
12.4 Westgard Rule Override
A control fails the Westgard 1:2S rule; override the error:

```gherkin
Scenario: Westgard 12S Override
  Given The configuration "Viracor 2.19.0 Test.xlsx" is loaded
  When Upload the run file "WESTGARD_12S.json"
  And Open the run file "WESTGARD_12S.json"
  Then well "A1" should have "CC1" sample role
  And well "A1" should have "This control has failed the Westgard 1:2S rule." outcome
  When Apply resolution to well "A1" with "Ignore 12S error"
  And Re analyse the run file
  Then well "A1" should have "Control Passed" outcome
```
Pattern Notes:
- Westgard failure detected
- Specific Westgard error message
- Matching resolution type applied
- Reanalysis clears error
- Control now passes
Appendix A: Context Class Reference
| Class | Purpose |
|---|---|
| BaseFeatureContext | Base class with database refresh, authentication, file handling |
| FeatureContext | Domain-specific step definitions for run file testing |
Appendix B: Step Definition Quick Reference
| # | Step | Category |
|---|---|---|
| 1 | Given The configuration :configFileName is loaded | Configuration |
| 2 | When Upload the run file :fileName | Run File |
| 3 | When Open the run file :runName | Run File |
| 4 | When Edit well :wellNumber with property :property and value :value | Well Edit |
| 5 | When Apply resolution to well :wellNumber with :resolution | Resolution |
| 6 | When Apply resolution to well :wellNumber with :resolution and :type to observation :observation with :value | Resolution |
| 7 | When Re analyse the run file | Run File |
| 8 | Then The run file :fileName should contains run status :runStatus | Run File |
| 9 | Then well :wellNumber should have :outcomeMessage outcome | Well Assertion |
| 10 | Then well :wellNumber should have :mixName mix | Well Assertion |
| 11 | Then well :wellNumber should have :isCrossover is crossover | Well Assertion |
| 12 | Then well :wellNumber should have :crossoverRoleAlias is crossover role | Well Assertion |
| 13 | Then well :wellNumber should have :extractionDate extraction date | Well Assertion |
| 14 | Then well :wellNumber should have :sampleRole sample role | Well Assertion |
| 15 | Then well :wellNumber should have :sampleName sample name | Well Assertion |
| 16 | Then well :wellNumber should have :extractionInstrumentName extraction instrument | Well Assertion |
| 17 | Then well :wellNumber should have :batchNumber batch number | Well Assertion |
| 18 | Then well :wellNumber should have :specimenName specimen name | Well Assertion |
| 19 | Then well :wellNumber observation :targetName should have :finalCls final cls | Observation Assertion |
| 20 | Then well :wellNumber observation :targetName should have :finalCt final ct | Observation Assertion |
| 21 | Then well :wellNumber observation :targetName should have :quantity quantity | Observation Assertion |
| 22 | When Archive the :fileName | Run File |
| 23 | Then well :wellNumber should have :accession accession | Well Assertion |
| 24 | Then run target :targetName in mix :mixName should have :errorCode target error | Run Target Assertion |
| 25 | Given the client configuration :configName is set to :configValue | Configuration |
Appendix C: Regex Pattern Summary
Quick reference for all step definition regex patterns.

Configuration:

```
^The configuration "([^"]+)" is loaded$
```

Run File Actions:

```
^Upload the run file "([^"]+)"$
^Open the run file "([^"]+)"$
^Re analyse the run file$
^Archive the "([^"]+)"$
```

Well Resolutions:

```
^Apply resolution to well "([A-H][0-9]{1,2})" with "([^"]+)"$
^Apply resolution to well "([A-H][0-9]{1,2})" with "([^"]+)" and "([^"]+)" to observation "([^"]+)" with "([^"]+)"$
```

Well Editing:

```
^Edit well "([A-H][0-9]{1,2})" with property "([^"]+)" and value "([^"]+)"$
```

Well Assertions:

```
^well "([A-H][0-9]{1,2})" should have "([^"]+)" outcome$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" mix$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" sample role$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" accession$
^well "([A-H][0-9]{1,2})" should have "(true|false)" is crossover$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" sample name$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" extraction date$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" extraction instrument$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" batch number$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" specimen name$
```

Observation Assertions:

```
^well "([A-H][0-9]{1,2})" observation "([^"]+)" should have "([^"]+)" final cls$
^well "([A-H][0-9]{1,2})" observation "([^"]+)" should have "([^"]+)" final ct$
^well "([A-H][0-9]{1,2})" observation "([^"]+)" should have "([^"]+)" quantity$
```

Run Target Assertions:

```
^run target "([^"]+)" in mix "([^"]+)" should have "([^"]+)" target error$
```

Run Status:

```
^The run file "([^"]+)" should contains run status "([^"]+)"$
^No well have "([^"]+)"$
```
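The well-coordinate capture group `([A-H][0-9]{1,2})` appears in every well-level pattern above. As a sketch of how these patterns behave, the following Python snippet (illustrative only, not part of the test suite) matches a step line against the outcome-assertion regex copied from this appendix:

```python
import re

# Pattern copied verbatim from the appendix; the quotes are literal in Behat step text.
WELL_OUTCOME = re.compile(r'^well "([A-H][0-9]{1,2})" should have "([^"]+)" outcome$')

def parse_outcome_step(step: str):
    """Return (well, outcome) if the step matches the outcome assertion, else None."""
    m = WELL_OUTCOME.match(step)
    return m.groups() if m else None

print(parse_outcome_step('well "A1" should have "Detected" outcome'))
# A three-digit well like "A123" cannot match [0-9]{1,2}:
print(parse_outcome_step('well "A123" should have "Detected" outcome'))
```

Note that the well row is restricted to A-H and the column to at most two digits, matching a standard 96-well plate layout.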
Appendix D: File Naming Conventions
Run Files
Norovirus Tests:
- Format: `{MMDDYY}-{number}-{mix}.json`
- Example: `040122-0000234-NOR1NOR2.json`
- Suffix indicates mixes: `-NOR1`, `-NOR2`, `-NOR1NOR2`
- Sequential uploads use suffix: `040122-0000234B-NOR1.json`, `040122-0000234C-NOR1.json`
Test-Specific Files:
- Descriptive names: `RQUANTASQUAL_2439_1.json`, `COVID_OUTCOMES_NEW.json`
- Version suffixes: `_1`, `_2`, `_II`
- Test type prefixes: `MPX_`, `PICQUAL_`, `LINEAR_REGRESSION_`
Configuration Files
Lab-Specific:
- Format: `{Lab} {version}.xlsx` or `{Lab}_{version}.xlsx`
- Examples: `Viracor v3.xlsx`, `Quest_EZ_PP_v30.xlsx`
Environment:
- `Viracor_PRE_PROD.xlsx` -- Pre-production
- `Viracor_PROD.xlsx` -- Production
- `Viracor_PPP.xlsx` -- Production variant
Test-Specific:
- `Exclude_from_IC_delta_check_1.xlsx` -- Specific feature test
- `SYSTEMIC_INHIBITION.xlsx` -- Inhibition test config
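The Norovirus run-file convention above can be parsed mechanically. The sketch below is an illustration of the naming convention only; the `RUN_FILE` pattern and `parse_run_filename` helper are not part of the actual tooling:

```python
import re

# {MMDDYY}-{number}[optional sequence letter]-{mix}.json, per the convention above.
RUN_FILE = re.compile(r'^(\d{6})-(\d+)([A-Z]?)-([A-Z0-9]+)\.json$')

def parse_run_filename(name: str):
    """Split a Norovirus-style run filename into its convention parts, or None."""
    m = RUN_FILE.match(name)
    if not m:
        return None
    date, number, suffix, mixes = m.groups()
    return {"date": date, "number": number, "suffix": suffix, "mixes": mixes}

print(parse_run_filename("040122-0000234-NOR1NOR2.json"))
print(parse_run_filename("040122-0000234B-NOR1.json"))
```

Descriptive test-specific names such as `COVID_OUTCOMES_NEW.json` deliberately do not follow this pattern and would return `None`.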
Behat Optimizer (Parallel Config-Consolidated Runs)
The `behat-optimizer.py` script (`tests/scripts/behat-optimizer.py`) automates parallel Behat execution by consolidating feature files that share identical configuration xlsx files.
How It Works
- Scan -- Walks all `.feature` files under `tests/exports/cucumber/`, extracts config file references from `Given The configuration ... is loaded` steps.
- Hash -- Computes SHA-256 of each referenced xlsx config file. Features referencing the same config (identical hash) are grouped together.
- Consolidate -- Generates merged `.feature` files with `@USE_SAME_CONFIG` on line 1. Background sections are inlined into each scenario. Scenario Outlines with Examples tables are preserved. Placeholder configs (e.g., from Scenario Outline `<config>` columns) are resolved.
- Run -- Distributes consolidated files across DB pool workers using LPT (Longest Processing Time) bin-packing for load balancing. Each worker gets its own DB (`pcrai_test_01` through `pcrai_test_10`) and log file.
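The scan-and-hash steps above amount to bucketing feature files by the SHA-256 of the config they reference. A minimal sketch of that grouping logic (function names and data shapes are illustrative, not the script's actual internals):

```python
import hashlib
import re
from collections import defaultdict

CONFIG_STEP = re.compile(r'The configuration "([^"]+)" is loaded')

def config_hash(xlsx_bytes: bytes) -> str:
    """SHA-256 of the config file contents; byte-identical configs group together."""
    return hashlib.sha256(xlsx_bytes).hexdigest()

def group_features(features, configs):
    """Map config hash -> list of feature files referencing that config."""
    groups = defaultdict(list)
    for path, text in features.items():
        m = CONFIG_STEP.search(text)
        if m and m.group(1) in configs:
            groups[config_hash(configs[m.group(1)])].append(path)
    return dict(groups)

# Two features referencing byte-identical configs land in a single group,
# so their scenarios can share one consolidated @USE_SAME_CONFIG file.
configs = {"a.xlsx": b"same-bytes", "b.xlsx": b"same-bytes"}
features = {
    "one.feature": 'Given The configuration "a.xlsx" is loaded',
    "two.feature": 'Given The configuration "b.xlsx" is loaded',
}
print(group_features(features, configs))
```

Hashing file contents rather than filenames is what lets differently named but identical configs (e.g., lab copies) consolidate into one group.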
CLI Reference
| Subcommand | Description | Key Options |
|---|---|---|
| `scan` | Analyze config groupings and optimization potential | `--show-groups` for detailed per-group file listing |
| `generate` | Create consolidated `.feature` files | `--dry-run` preview, `--permanent` keep files |
| `run` | Generate + execute in parallel | `--workers N` (default 10), `--pre-migrate`, `--skip-migrate-fresh`, `--skip-list PATH`, `--permanent`, `--dir PATH`, `--target-version X.Y.Z` |
| `skip-json` | Generate skip-list JSON from `@SUPERSEDED_BY_*` tags | `--init` create initial file, `--output PATH` |
Default Behavior: migrate:fresh per File
The optimizer runs `migrate:fresh` before each consolidated file by default. This ensures a clean database state and avoids the FK constraint cascade failures that occur when leftover data from a previous file's tests conflicts with the next file's migrations.
The `--skip-migrate-fresh` flag opts into the old (faster) behavior, where migration is skipped between files. This is useful for quick smoke tests but produces a ~73% pass rate due to FK constraint errors.
Multi-Config File Detection
The optimizer automatically detects feature files where different scenarios reference different config xlsx files. These files are split into separate virtual entries, each with its own config hash and group assignment. Previously, multi-config files were assigned to a single group based on the first config found, causing silent wrong-config failures for scenarios referencing a different config.
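Splitting a multi-config feature into per-config virtual entries can be sketched as scanning each scenario block for its own config step. This is a toy illustration of the mechanism; the optimizer's real virtual-entry structure is an assumption here:

```python
import re

CONFIG_STEP = re.compile(r'The configuration "([^"]+)" is loaded')

def split_by_config(feature_text: str):
    """Group scenario blocks by the config file each one references."""
    entries = {}
    # Split the feature text at each line that begins a new Scenario.
    for block in re.split(r'\n(?=\s*Scenario)', feature_text):
        m = CONFIG_STEP.search(block)
        if m:
            entries.setdefault(m.group(1), []).append(block.strip())
    return entries

text = '''Scenario: A
  Given The configuration "Viracor v3.xlsx" is loaded
Scenario: B
  Given The configuration "Quest_EZ_PP_v30.xlsx" is loaded'''
print(sorted(split_by_config(text)))
# -> ['Quest_EZ_PP_v30.xlsx', 'Viracor v3.xlsx']
```

Each resulting key would get its own config hash and group assignment, avoiding the silent wrong-config failures described above.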
Commands
```shell
cd /shared/code/req_docs/code

# Full suite: V3 API + Legacy API + Browser (~30 min with pre-migrate)
python3 tests/scripts/behat-optimizer.py run --workers 10 --pre-migrate --rerun-failures \
  --suite all --browser-workers 6 \
  --traceability tests/catalogue/unified-traceability.json \
  --render-md docusaurus/docs/traceability/str-release-test-report.md

# V3 API only (excludes legacy, ~25 min)
python3 tests/scripts/behat-optimizer.py run --workers 10 --dir tests/exports/cucumber/v3

# Run tests for a specific deployed version (excludes the other version's scenarios)
python3 tests/scripts/behat-optimizer.py run --workers 10 --target-version=3.0.0

# Scan -- see config groupings and optimization potential
python3 tests/scripts/behat-optimizer.py scan
python3 tests/scripts/behat-optimizer.py scan --show-groups  # detailed view

# Generate consolidated files (dry-run first)
python3 tests/scripts/behat-optimizer.py generate --dry-run
python3 tests/scripts/behat-optimizer.py generate

# Generate skip-list from @SUPERSEDED_BY_* tags
python3 tests/scripts/behat-optimizer.py skip-json --init
python3 tests/scripts/behat-optimizer.py run --skip-list tests/scripts/behat-skip-list.json
```
Architecture Notes
- Load balancing: LPT (Longest Processing Time first) bin-packing assigns the largest consolidated files to the least-loaded worker, minimizing total wall-clock time.
- Per-worker logs: Each worker writes to `/tmp/behat-logs/worker-NN.log` for debugging.
- JSON results: Run results are output as structured JSON for programmatic consumption.
- Output directory: Consolidated files go to `tests/exports/cucumber/_consolidated/` (gitignored). Use `--permanent` to keep them after a run; otherwise they are cleaned up automatically.
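The LPT scheme described above sorts jobs by descending cost and always assigns the next job to the currently least-loaded worker. A minimal sketch, with scenario counts standing in for per-file cost (the cost metric is an assumption for illustration):

```python
import heapq

def lpt_assign(file_costs, workers):
    """Longest Processing Time first: biggest files go to the least-loaded worker."""
    heap = [(0, w) for w in range(workers)]  # (current load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(workers)}
    for name, cost in sorted(file_costs.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)  # least-loaded worker so far
        assignment[w].append(name)
        heapq.heappush(heap, (load + cost, w))
    return assignment

# Four consolidated files across two workers: loads balance to 9 vs 9.
print(lpt_assign(
    {"big.feature": 8, "mid.feature": 5, "sm1.feature": 4, "sm2.feature": 1}, 2))
```

Placing the largest files first leaves the small ones to smooth out load differences at the end, which is why LPT keeps total wall-clock time close to the optimum.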
Skip-List JSON
The skip-list allows excluding specific feature files from a run. Generate the initial file from `@SUPERSEDED_BY_*` tags in the feature files:

```shell
python3 tests/scripts/behat-optimizer.py skip-json --init
```

This creates `tests/scripts/behat-skip-list.json`. Edit it manually to add or remove entries, then pass it to `run`:

```shell
python3 tests/scripts/behat-optimizer.py run --skip-list tests/scripts/behat-skip-list.json
```
Edge Cases Handled
- Scenario Outlines with Examples: Examples tables are preserved intact during consolidation.
- Background inlining: Background steps from source files are inlined into each scenario in the consolidated file (since the consolidated file has its own Background for config loading).
- Placeholder config resolution: Scenario Outlines using `<config>` placeholders in the config step are resolved before hashing to ensure correct grouping.
Time Estimates
- First scenario per config group: ~2 min (full config upload and migration).
- Subsequent scenarios (same config, `@USE_SAME_CONFIG`): ~15 sec each.
- Default mode (`migrate:fresh` per file): ~118 min wall-clock with 10 workers (768 scenarios, 99% pass rate). Slower due to MySQL DDL lock contention from 10 concurrent `migrate:fresh` operations (~7 min per file vs ~30 s solo).
- Fast mode (`--skip-migrate-fresh`): ~28 min wall-clock with 10 workers (73% pass rate). FK constraint cascade failures account for most failures.
Performance: MySQL DDL Lock Contention
When running 10 workers in default mode, each worker's `migrate:fresh` acquires MySQL metadata locks that block other workers' DDL operations. This serializes what should be parallel migrations, inflating per-file time from ~30 s (solo) to ~7 min (10 concurrent).
Mitigation strategies:
- Reduce workers (e.g., `--workers 4`) -- less contention, faster per-file, similar total wall-clock time.
- Use `--skip-migrate-fresh` -- fastest, but ~27% of tests fail from stale data/FK cascades. Useful for targeted reruns where the DB state is already clean.
- Future: truncate-all approach -- replacing `migrate:fresh` with table truncation would avoid DDL entirely. Not yet implemented.