Version: 3.0.1

Developer Testing Guide

1. Overview

Purpose: Reference manual for Behat step definitions, parameters, and workflow patterns
Audience: Developers, QA engineers, AI agents (as a lookup reference)

This guide is the reference manual — step definitions, parameter tables, regex patterns, and complete workflow examples. For how to create tests (creation workflow, gotchas, config pitfalls, execution commands), see Behat Test Creation Guide.

2. Testing Stack

| Component | Technology | Location |
| --- | --- | --- |
| BDD Framework | Behat 3.x | code/features/ |
| Step Definitions | PHP 8 Attributes | code/features/bootstrap/ |
| Test Runner | PHPUnit 12 | Via Behat integration |
| CI Pipeline | GitHub Actions | .github/workflows/behat.yml |

3. Step Definitions Reference

This section documents all 25 available step definitions, organized by functional category rather than by step number.

3.1 Configuration Steps

Step 1: Load Configuration

Given The configuration :configFileName is loaded

Parameters:

  • configFileName: Filename including extension (e.g., "example.xlsx")

Description: Uploads a configuration sheet to initialize the test environment.

Shared Configuration Optimization: Tag the Feature with @USE_SAME_CONFIG to share configuration across scenarios within a test execution. When used, subsequent configuration load calls will be skipped with warning: "this config import is ignored, instead using shared config."

Example:

Given The configuration "test-config.xlsx" is loaded
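The skip decision behind @USE_SAME_CONFIG can be sketched as follows. This is an illustrative Python sketch only; the function and parameter names are hypothetical, and the real check lives in the PHP feature context.

```python
# Hypothetical sketch of the @USE_SAME_CONFIG skip logic (names assumed;
# the real implementation lives in the PHP feature context classes).
def should_load_config(feature_tags, shared_config_loaded):
    """Return True when a config-load step should actually import the sheet."""
    if "USE_SAME_CONFIG" not in feature_tags:
        return True   # no sharing: every scenario imports its own config
    if not shared_config_loaded:
        return True   # the first import under the shared tag still runs
    # subsequent imports are skipped with the documented warning
    print("this config import is ignored, instead using shared config")
    return False
```

Under this model, only the first config import in a tagged feature runs; every later one is a no-op.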

Step 25: Set Client Configuration Flag

Given the client configuration :configName is set to :configValue

Parameters:

  • configName: The client configuration key name (e.g., use_sample_type, use_extraction_instruments)
  • configValue: The value to set (e.g., "true", "false")

Description: Toggles a client configuration flag after config import. This is useful for testing runtime behavior that the import validator would otherwise block. For example, use_sample_type cannot be set to false at import time because SpecimenNameValidator rejects specimen-tagged rows, but this step can toggle it after a successful import.

Added: TI-003 resolution (Session 16)

Example:

Given The configuration "quest-v3.xlsx" is loaded
And the client configuration "use_sample_type" is set to "false"
When Upload the run file "run1.json"

3.2 Run File Steps

Step 2: Upload Run File

When Upload the run file :fileName

Parameters:

  • fileName: Filename including extension (e.g., "example.json")

Description: Uploads a JSON run file for processing.

Exceptions:

  • No Run found for run name: {run name} - When no such run file is uploaded

Example:

When Upload the run file "ENT_GAPD_134.json"

Step 3: Open Run File

When Open the run file :runName

Parameters:

  • runName: Name of run (from run_info), not the filename

Description: Opens an existing run file by its run name.

Exceptions:

  • No Run found for run name: {run name} - When no such run file is uploaded

Example:

When Open the run file "ENT_GAPD_134"

Step 7: Re-analyse Run File

When Re analyse the run file

Parameters: None

Description: Re-analyses an opened run file. Required after edits or resolutions to apply changes.

Exceptions:

  • No run file is open - When no run file is open prior to action

Example:

When Edit well "A1" with property "extraction_instrument" and value "QIASymphony"
And Re analyse the run file

Step 8: Assert Run Status

Then The run file :fileName should contains run status :runStatus

Parameters:

  • fileName: Name of run (from run_info), not the filename, despite the parameter name
  • runStatus: Expected run status (see section 11.6)

Description: Asserts the run file has the expected resolution status.

Exceptions:

  • No Run found for run name: {run name} - When no such run file is uploaded

Step 22: Archive Run File

When Archive the :fileName

Parameters:

  • fileName: Name of the run file to archive

Description: Archives a run file.

Preconditions:

  • Run must exist
  • Run must be accessible to user
  • Archive tag must be available

Example:

When Archive the "ENT_GAPD_134.json"

3.3 Well Edit Steps

Step 4: Edit Well Property

When Edit well :wellNumber with property :property and value :value

Parameters:

  • wellNumber: Well identifier (e.g., "A1", "B2")
  • property: Property name to edit (see table below)
  • value: New value for the property

Description: Edits a property of a specific well.

Preconditions:

  • A run file must be opened before this action
  • The specified well must exist in the run
  • The specified property must be in available properties
  • The run must be re-analysed for the modification to take effect

Available Properties:

| Property | Validation Rules |
| --- | --- |
| extraction_instrument | Validates against role (non-extraction wells cannot have an extraction instrument). Must be an existing Extraction Instrument Name. |
| accession | Well must be a patient well (cannot edit for Control well). Must be string, max 191 characters. |
| is_crossover | Must be a boolean value (true, false, 0, 1) |
| crossover_role | Well must be a crossover well. Must be name of existing Role, max 191 characters. |
| extraction_date | "Allow Edit Extraction Information" must be enabled in Client Configuration. Well must be an Extraction Well. Date cannot precede runfile created date. |
| sample_role | Must be valid role alias from role_mappings table for the well's mix. |

Exceptions:

  • No run file is open
  • Well {well number} does not exist in the run.
  • Editing :property is not supported
  • Cannot edit extraction instrument for non extraction well
  • Extraction Instrument 'ABC' not found
  • Cannot edit accession for Control well
  • invalid accession
  • The crossover value must be a boolean
  • invalid crossover role value
  • Only crossover well can edit Crossover Roles
  • Crossover Role 'ABC' not found
  • Not allowed Edit Extraction Information in Client Configurations
  • Well C1 is not an extraction well
  • Extraction Date cannot precede runfile created date
  • Sample role 'example' is not a valid sample role.

Example:

When Open the run file "ENT_GAPD_134_2.json"
And Edit well "A1" with property "extraction_instrument" and value "QIASymphony"
And Re analyse the run file
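The validation rules in the table above can be paraphrased as a dispatch on the property name. The sketch below is illustrative Python, not the real implementation: the actual checks live in the PHP step definitions, and the error strings are the ones listed under Exceptions.

```python
# Illustrative paraphrase of the well-edit validation rules.
# Function and dict shapes are hypothetical, not the real PHP implementation.
EDITABLE = {"extraction_instrument", "accession", "is_crossover",
            "crossover_role", "extraction_date", "sample_role"}

def validate_edit(prop, well, value):
    """Raise ValueError (with the documented message) on an invalid edit."""
    if prop not in EDITABLE:
        raise ValueError(f"Editing {prop} is not supported")
    if prop == "is_crossover" and value not in ("true", "false", "0", "1"):
        raise ValueError("The crossover value must be a boolean")
    if prop == "accession":
        if well.get("role") != "patient":
            raise ValueError("Cannot edit accession for Control well")
        if not isinstance(value, str) or len(value) > 191:
            raise ValueError("invalid accession")
    # extraction_instrument, crossover_role, extraction_date and sample_role
    # carry analogous role/configuration checks (see the table above)
```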

3.4 Resolution Steps

Step 5: Apply Resolution to Well

When Apply resolution to well :wellNumber with :resolution

Parameters:

  • wellNumber: Well identifier (e.g., "A1", "B1")
  • resolution: Use dropdown message if resolution is a dropdown, otherwise resolution message

Description: Applies a resolution to a well. The run must be re-analysed for the modification to take effect.

Exceptions:

  • No run file is open
  • Well {well number} does not exist in the run.
  • Resolution is not allowed for the selected well

Example:

When Open the run file "example.json"
And Apply resolution to well "B1" with "RXT all"
And Re analyse the run file

Step 6: Apply Resolution with Individual Curve Result

When Apply resolution to well :wellNumber with :resolution and :type to observation :observation with :value

Parameters:

  • wellNumber: Well identifier (e.g., "A2")
  • resolution: Resolution message (e.g., "Set individual curve results")
  • type: Must be "Manual classification" or "Preferred CT provider"
  • observation: Target/observation name (e.g., "HDV")
  • value: Must be "Neg", "Pos", "MACHINE", or "DXAI"

Description: Applies a resolution with an individual curve result. The resolution must have type DISCREPANT OBSERVATIONS. Only one observation can be resolved per step.

Exceptions:

  • No run file is open
  • Well {well number} does not exist in the run.
  • Observation '{target name}' does not exist in the Well '{well number}'.
  • Resolution is not allowed for the selected well
  • Individual Curve results are not allowed with selected resolution: '{resolution message}'
  • Incorrect observation curve resolution type: '{type}'
  • Incorrect observation curve resolution value: '{value}'

Warning: 'Two Step Manage Result Workflow' is Selected! But not supported. 'One step workflow' is used instead. - shown when Two Step Workflow is selected

Example:

When Open the run file "example.json"
And Apply resolution to well "A2" with "Set individual curve results" and "Manual classification" to observation "HDV" with "Neg"
And Re analyse the run file

3.5 Well Assertion Steps

Step 9: Assert Well Outcome

Then well :wellNumber should have :outcomeMessage outcome

Parameters:

  • wellNumber: Well identifier
  • outcomeMessage: Outcome message (not code), e.g., "Detected", "Positive", "Negative"

Example:

Then well "A1" should have "Detected" outcome

Step 10: Assert Well Mix

Then well :wellNumber should have :mixName mix

Parameters:

  • wellNumber: Well identifier
  • mixName: Name of the mix (e.g., "HEV")

Example:

Then well "A1" should have "HEV" mix

Step 11: Assert Well Is Crossover

Then well :wellNumber should have :isCrossover is crossover

Parameters:

  • wellNumber: Well identifier
  • isCrossover: Must be boolean ("true" or "false")

Example:

Then well "C3" should have "false" is crossover

Step 12: Assert Well Crossover Role

Then well :wellNumber should have :crossoverRoleAlias is crossover role

Parameters:

  • wellNumber: Well identifier
  • crossoverRoleAlias: Role name (e.g., "CC1")

Example:

Then well "C3" should have "CC1" is crossover role

Step 13: Assert Well Extraction Date

Then well :wellNumber should have :extractionDate extraction date

Parameters:

  • wellNumber: Well identifier
  • extractionDate: Date value (e.g., "20240901")

Example:

Then well "C3" should have "20240901" extraction date

Step 14: Assert Well Sample Role

Then well :wellNumber should have :sampleRole sample role

Parameters:

  • wellNumber: Well identifier
  • sampleRole: role_alias from role_to_target_mappings (e.g., "patient")

Example:

Then well "C3" should have "patient" sample role

Step 15: Assert Well Sample Name

Then well :wellNumber should have :sampleName sample name

Parameters:

  • wellNumber: Well identifier
  • sampleName: role_alias for controls, accession for patients

Example:

Then well "C3" should have "123" sample name

Step 16: Assert Well Extraction Instrument

Then well :wellNumber should have :extractionInstrumentName extraction instrument

Parameters:

  • wellNumber: Well identifier
  • extractionInstrumentName: Name from extraction instrument (e.g., "E01")

Example:

Then well "A1" should have "E01" extraction instrument

Step 17: Assert Well Batch Number

Then well :wellNumber should have :batchNumber batch number

Parameters:

  • wellNumber: Well identifier
  • batchNumber: Batch number of well

Example:

Then well "A1" should have "1" batch number

Step 18: Assert Well Specimen Name

Then well :wellNumber should have :specimenName specimen name

Parameters:

  • wellNumber: Well identifier
  • specimenName: Name of specimen (e.g., "plasma")

Example:

Then well "A1" should have "plasma" specimen name

Step 23: Assert Well Accession

Then well :wellNumber should have :accession accession

Parameters:

  • wellNumber: Well identifier
  • accession: Expected accession value for the well

Example:

Then well "A1" should have "ACC-12345" accession

Notes: Asserts the accession property of a well matches the expected value. Only applicable to patient wells (control wells do not have accessions). Useful for verifying accession edits via Step 4 and for ANALYZER-related testing where accession validation is significant.


Common Exceptions for Well Assertion Steps:

  • No run file is open - When no run file is open prior to action
  • Well {well number} does not exist in the run. - When specified well doesn't exist

3.6 Observation Assertion Steps

Step 19: Assert Observation Final Classification

Then well :wellNumber observation :targetName should have :finalCls final cls

Parameters:

  • wellNumber: Well identifier
  • targetName: Target/observation name
  • finalCls: Final classification value (e.g., "Neg", "Pos")

Example:

Then well "A1" observation "T1" should have "Neg" final cls

Step 20: Assert Observation Final Ct

Then well :wellNumber observation :targetName should have :finalCt final ct

Parameters:

  • wellNumber: Well identifier
  • targetName: Target/observation name
  • finalCt: Final Ct value (e.g., "31")

Example:

Then well "A1" observation "T1" should have "31" final ct

Step 21: Assert Observation Quantity

Then well :wellNumber observation :targetName should have :quantity quantity

Parameters:

  • wellNumber: Well identifier
  • targetName: Target/observation name
  • quantity: Full quantity value (e.g., "4234234")

Example:

Then well "A1" observation "T1" should have "4234234" quantity

Preconditions for Observation Assertion Steps:

  • A run file must be opened before this action
  • The specified well must exist in the run
  • The specified target name must exist within the well as observation

Exceptions:

  • No run file is open
  • Well {well number} does not exist in the run.
  • Observation 'T1' does not exist in the Well 'A1'.

3.7 Run Target Assertion Steps

Step 24: Assert Run Target Error

Then run target :targetName in mix :mixName should have :errorCode target error

Parameters:

  • targetName: Target name within the run target (e.g., "NOR1", "IC")
  • mixName: Mix name associated with the target (e.g., "NOR1", "HEV")
  • errorCode: Expected error code string (e.g., "BAD_EFFICIENCY", "INSUFFICIENT_STANDARD_CONTROLS")

Example:

Then run target "NOR1" in mix "NOR1" should have "BAD_EFFICIENCY" target error

Notes: Critical for ERRORCODES gap testing. Reads error_codes from RunTarget and asserts a specific error code is present. The step looks up run targets by matching both target.target_name and target.mix.mix_name, then checks the error_codes array for the specified code.

Preconditions:

  • A run file must be opened before this action
  • The specified target must exist in the run's run_targets for the given mix

Exceptions:

  • No run file is open
  • Run target '{targetName}' for mix '{mixName}' does not exist in the run.
  • Run target '{targetName}' in mix '{mixName}' does not have '{errorCode}' target error. Actual target errors: [...]

4. Gherkin Syntax Conventions

4.1 Feature File Structure

@FEATURE_TAG_1 @FEATURE_TAG_2
Feature: Internal operations
In order to stay secret
As a secret organization
We need to be able to erase past agents' memory

Background:
Given there is agent "A"
And there is agent "B"

Scenario: Erasing agent memory
Given there is agent "J"
And there is agent "K"
When I erase agent "K"'s memory
Then there should be agent "J"
But there should not be agent "K"

Scenario Outline: Erasing other agents' memory
Given there is agent "<agent1>"
And there is agent "<agent2>"
When I erase agent "<agent2>"'s memory
Then there should be agent "<agent1>"
But there should not be agent "<agent2>"

Examples:
| agent1 | agent2 |
| D | M |

4.2 Scenario vs Scenario Outline

  • Scenario: Use for single test cases with fixed data
  • Scenario Outline: Use when repeating the same steps with different data values

4.3 Using Datasets (Examples)

  • Include a dataset when repeating tests with multiple example data (2+ rows)
  • Do NOT include a dataset when you have only 1 row (no repeated tests needed)

Scenario Outline: Verify outcome for different wells
When Upload the run file "example.json"
Then well "<well>" should have "<outcome>" outcome

Examples:
| well | outcome |
| A1 | Positive |
| A2 | Negative |
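Conceptually, Behat expands a Scenario Outline into one concrete scenario per Examples row by substituting each `<placeholder>` with the row's value. A simplified Python sketch of that mechanism (not Behat's actual code):

```python
def expand_outline(steps, header, rows):
    """Expand a Scenario Outline: one concrete scenario per Examples row."""
    scenarios = []
    for row in rows:
        bindings = dict(zip(header, row))
        concrete = steps
        for name, value in bindings.items():
            # replace every <name> placeholder with the row's value
            concrete = [s.replace(f"<{name}>", value) for s in concrete]
        scenarios.append(concrete)
    return scenarios
```

For the outline above, two rows in Examples yield two concrete scenarios, one asserting "Positive" for A1 and one asserting "Negative" for A2.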

5. Jira Integration

| Jira Element | Gherkin Element |
| --- | --- |
| Test execution card title | Feature |
| Test card title | Scenario or Scenario Outline |
| Dataset | Examples |
| Execution card labels | Feature Tags |

IMPORTANT: Do NOT include Feature: or Scenario: keywords in Jira test cards - they are added automatically during export.

6. Feature Tags

| Tag | Required | Level | Effect |
| --- | --- | --- | --- |
| @REQ_BT-xxxx | Yes | Feature | Links all scenarios to a Jira requirement |
| @TEST_BT-xxxx | Yes | Scenario | Links scenario to a specific Jira test case |
| @USE_SAME_CONFIG | Recommended | Feature (line 1) | Shares configuration across all scenarios. Must be on line 1 of the file, before Feature:. Scenario-level placement is silently ignored. All-or-nothing: when present, only the first config import runs; all others are skipped. Files with multiple configs must NOT use this tag; split them into separate files instead. |
| @COMBINED_OUTCOME | No | Scenario | Test involves outcomes across multiple runs/mixes |
| @UNIQUE | No | Scenario | Test uses unique/isolated test data |
| @UNIVERSAL | No | Scenario | Test applies universally across configurations |
| @EXAMPLE_TEST | No | Scenario | Example/demonstration test (not in core regression) |
| @V3_0_0 | Conditional | Scenario | Scenario with assertions correct for v3.0.0 behavior. Excluded when running the latest version (default). |
| @V3_0_1 | Conditional | Scenario | Scenario with assertions correct for v3.0.1 behavior. Excluded when running --target-version=3.0.0. |

Tag Usage Example:

@REQ_BT-4051 @USE_SAME_CONFIG
Feature: Combined outcomes for multiple mixes

@TEST_BT-5707 @COMBINED_OUTCOME
Scenario Outline: Combined outcome in two runs
Given The configuration "Viracor_PROD.xlsx" is loaded
When Upload the run file "<runFile1>"
And Open the run file "<runFile1>"
Then well "A1" should have "<outcome>" outcome

Examples:
| runFile1 | outcome |
| NORO_123.json | Not Detected |
| NORO_124.json | Detected |

Tag Combinations:

@REQ_BT-4051 @USE_SAME_CONFIG
Feature: Combined outcomes for multiple mixes
# All scenarios below use the same config — @USE_SAME_CONFIG on line 1

@TEST_BT-5707 @REQ_BT-4070
Scenario Outline: Test with multiple requirements
Given The configuration "Viracor_PROD.xlsx" is loaded # ← LOADED (first)
...

@TEST_BT-5300
Scenario Outline: Shared config test
Given The configuration "Viracor_PROD.xlsx" is loaded # ← SKIPPED (reused)
...

@TEST_BT-1234 @UNIQUE @UNIVERSAL
Scenario: Rare universal edge case
Given The configuration "Viracor_PROD.xlsx" is loaded # ← SKIPPED (reused)
...
Caution: @USE_SAME_CONFIG placed on a scenario line is silently ignored; the code in BaseFeatureContext.php:122 only checks feature-level tags. If you need different configs, use separate .feature files.

6.1 Multi-Version Testing

This is a multi-tenant medical device SaaS where clients run different versions on independent upgrade cycles. The test suite must support running against any deployed version to investigate client bug reports.

Tagging scheme:

| Tag | Meaning |
| --- | --- |
| (no version tag) | Universal: scenario passes on all supported versions |
| @V3_0_0 | Assertions correct for v3.0.0 behavior only |
| @V3_0_1 | Assertions correct for v3.0.1 behavior only |

Rules:

  • Tests that pass on all versions get no version tag (the default/majority case).
  • When behavior changes between versions, both versions of the test exist with the same BT key, TV tags, and fixture files — only the assertions differ.
  • The --target-version flag on behat-optimizer.py controls which version set runs:
    • --target-version=3.0.0 excludes @V3_0_1 scenarios (runs v3.0.0 + universal)
    • --target-version=3.0.1 excludes @V3_0_0 scenarios (runs v3.0.1 + universal)
    • Default (no flag) = latest version = excludes @V3_0_0
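The selection rules above reduce to a small predicate. This is a hedged Python sketch of the rule; the actual flag handling in behat-optimizer.py may differ in its details.

```python
# Hedged sketch of the version-filtering rule applied per scenario.
# The real behat-optimizer.py implementation may differ.
VERSION_TAGS = {"V3_0_0", "V3_0_1"}

def scenario_selected(scenario_tags, target_version="3.0.1"):
    """Untagged scenarios are universal; tagged ones run only for their version."""
    tags = VERSION_TAGS.intersection(scenario_tags)
    if not tags:
        return True  # universal scenario: runs on every version
    # e.g. target 3.0.0 maps to the tag V3_0_0
    return "V" + target_version.replace(".", "_") in tags
```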

Creating version-split scenarios:

  1. Copy the scenario (or Scenario Outline row) with the current assertions
  2. Tag the copy @V3_0_0
  3. Update the original with the new version's assertions
  4. Tag the original @V3_0_1
  5. Both copies share the same @TEST_BT-xxxx key and @TV-* tags

Example:

@TEST_BT-5658 @COMBINED_OUTCOME @UNIVERSAL @V3_0_0
Scenario: SYSINH combined outcome (v3.0.0)
# ... assertions for v3.0.0 behavior ...

@TEST_BT-5658 @COMBINED_OUTCOME @UNIVERSAL @V3_0_1
Scenario: SYSINH combined outcome (v3.0.1)
# ... assertions for v3.0.1 behavior (e.g., archive dependency guard) ...

Running for a specific version:

# Run tests for v3.0.0 deployment
python3 tests/scripts/behat-optimizer.py run --workers 10 --target-version=3.0.0

# Run tests for latest version (default)
python3 tests/scripts/behat-optimizer.py run --workers 10

7. Test Organization

7.1 Directory Structure

code/
├── features/
│   ├── bootstrap/
│   │   ├── BaseFeatureContext.php    # Base context with hooks
│   │   └── FeatureContext.php        # Step definitions
│   └── *.feature                     # Gherkin feature files
├── behat.yml                         # Behat configuration
└── .github/workflows/behat.yml       # CI configuration

7.2 Test File Organization

  • Feature files: code/features/*.feature
  • Context classes: code/features/bootstrap/

7.3 Feature File Naming Convention

{priority}_{JIRA_KEY(s)}.feature

Examples:

  • 1_BT-5855,BT-5854,BT-5842,BT-5840,BT-5839(+218).feature
  • 11_BT-5201.feature
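For illustration, the naming convention can be parsed like this. The helper is hypothetical and not part of the tooling; it simply shows how the priority, JIRA keys, and the trailing "(+N)" overflow marker fit together.

```python
import re

def parse_feature_filename(filename):
    """Split '{priority}_{JIRA_KEY(s)}.feature' into priority and keys.
    Hypothetical helper, for illustration only."""
    stem = filename.removesuffix(".feature")
    priority, _, keys = stem.partition("_")
    keys = re.sub(r"\(\+\d+\)$", "", keys)  # drop a trailing '(+N)' marker
    return int(priority), keys.split(",")
```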

7.4 Support Files

Each test scenario typically requires:

  1. Configuration file (.xlsx) - Kit configuration in support_files/ or shared configs
  2. Run file(s) (.json) - PCR run data in support_files/BT-xxxx/

Ensure support files are placed in the correct directory matching the Jira issue key. See the Run File Schema Guide for JSON run file structure.

7.5 File Duplication Rule

IMPORTANT: Each test (e.g., BT5001) needs its own folder with all required files. Even if a file is reused for multiple tests (e.g., BT5001, BT5002, BT5003), it must be duplicated in each directory.

Example: All files for test BT5001 should be in Test Files / Behat / BT5001.
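A quick consistency check for this rule might look like the sketch below. The directory layout and function are assumptions for illustration, not part of the test tooling.

```python
from pathlib import Path

def missing_support_files(test_dir, required):
    """Return the required file names absent from a test's folder.
    Hypothetical helper; the folder layout is an assumption."""
    folder = Path(test_dir)
    present = {p.name for p in folder.iterdir()} if folder.is_dir() else set()
    return [name for name in required if name not in present]
```

Running it over each BT-xxxx folder before an execution catches files that were reused but never duplicated into the test's own directory.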

8. Test Repetition Guidelines

  1. Same steps + different data = use the same test with a new dataset row, and broaden the scenario description so it covers the whole dataset

  2. Different steps (e.g., upload two runs, check results as you progress, then reanalyse the first one and check results) = New test

9. Execution

For full execution commands (environment variables, DB pool, parallel execution), see Behat Test Creation Guide.

CI Pipeline: Tests run automatically via GitHub Actions on push/PR. See .github/workflows/behat.yml.

Regression Testing Best Practice: Run the same final regression Behats multiple times in parallel prior to marking regression tests as complete. This isolates transient issues and avoids false passes.

10. Bug Reporting Requirements

  1. Bug reports must include standalone tests: Any application bug reported to Dev should come with the minimum set of Behat test Jira issue cards needed to reproduce the bug and confirm the feature works correctly once fixed. These cards should be standalone and should NOT require a particular Execution issue to recreate the problem.

  2. Report Behat system failures: If the issue cannot be recreated with a standalone test, but does recreate with the Execution, this indicates a failure with the Behat testing system and should be reported with the execution and standalone cards.

11. Parameter Reference

This section provides exhaustive reference tables for values used in step definition parameters. For domain terminology, see the Testing Data Dictionary.

11.1 Well Positions

Wells are identified by a letter (A-H) followed by a number (1-12), representing positions on a standard 96-well PCR plate.

A1  A2  A3  A4  A5  A6  A7  A8  A9  A10 A11 A12
B1  B2  B3  B4  B5  B6  B7  B8  B9  B10 B11 B12
C1  C2  C3  C4  C5  C6  C7  C8  C9  C10 C11 C12
D1  D2  D3  D4  D5  D6  D7  D8  D9  D10 D11 D12
E1  E2  E3  E4  E5  E6  E7  E8  E9  E10 E11 E12
F1  F2  F3  F4  F5  F6  F7  F8  F9  F10 F11 F12
G1  G2  G3  G4  G5  G6  G7  G8  G9  G10 G11 G12
H1  H2  H3  H4  H5  H6  H7  H8  H9  H10 H11 H12

Common Usage Patterns:

  • Columns 1-2: Often standards/controls (S2, S4, S6, NEC, POS)
  • Remaining columns: Patient samples
  • Rows G-H: Sometimes additional controls or problematic samples in test scenarios
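The full set of valid well identifiers can be generated row by row in one line of plain Python (no assumptions here; this follows directly from the A-H / 1-12 definition above):

```python
# Generate all 96 valid well identifiers, row by row: A1..A12, B1..B12, ...
WELLS = [f"{row}{col}" for row in "ABCDEFGH" for col in range(1, 13)]
```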

11.2 Resolution Types

Resolutions are applied to wells with errors to specify how to handle them.

Basic Resolutions

| Resolution Type | Description | Common Use Case |
| --- | --- | --- |
| Re-extract all samples | Re-extract and re-amplify all samples | Extraction control failure, contamination |
| Re-amplify all samples | Re-amplify without re-extraction | Amplification failure, control failures |
| Re-amplify positive samples | Re-amplify only positive detections | Standards failure with positive patient samples |
| Exclude this well from being exported | Mark well to skip export | Permanent exclusion, invalid data |

Westgard Rule Overrides

| Resolution Type | Description |
| --- | --- |
| Ignore 12S error | Override Westgard 1:2S rule failure |
| Ignore 13S error | Override Westgard 1:3S rule failure |
| Ignore 22S error | Override Westgard 2:2S rule failure |
| Ignore 22S13S error | Override combined Westgard 2:2S and 1:3S rule failure |

Advanced Resolutions

| Resolution Type | Parameters | Description |
| --- | --- | --- |
| Set individual curve results with Manual classification | Observation, Value (Pos/Neg) | Override classification for discrepancy |
| Set individual curve results with Preferred CT provider | Observation, Provider | Select CT source for discrepancy |

11.3 Outcome Values

Outcomes represent the final result/status of a well.

Success Outcomes

| Outcome | Meaning |
| --- | --- |
| Detected | Target detected |
| Not Detected | Target not detected (negative) |
| Not detected | Variant capitalization |
| Not detected (AUTO) | Auto-classified negative |
| Control Passed | Control well passed QC |
| Control Passed. | Variant with period |
| Crossover Passed | Crossover validation passed |
| Pass | Generic pass status |

Quantitative Outcomes

| Outcome | Meaning |
| --- | --- |
| Detected:<430 | Detected with quantity less than 430 |
| 96,700 | Quantity value |
| 49,200 | Quantity value |
| 9,999 | Formatted quantity |
| 999,999 | Formatted quantity |

Review/Action Required Outcomes

| Outcome | Meaning |
| --- | --- |
| RPT | Repeat/re-amplify required |
| RXT | Re-extract required |
| Fail | Well failed |
| Inconclusive | Result inconclusive |
| Inconclusive (AUTO) | Auto-classified inconclusive |
| Result is inconclusive | Variant phrasing |
| Result is inconclusive (AUTO) | Auto-classified variant |
| Well excluded | Well manually excluded |
| Flag as High CT - do not export without review | High CT warning |

Error Outcomes - Control Failures

| Outcome | Category |
| --- | --- |
| The positive control has failed. | Positive control failure |
| NEC failure. | Negative extraction control failure |
| A previous Westgard control has failed for this mix. | Historical Westgard failure |

Error Outcomes - Westgard Rules

| Outcome | Rule |
| --- | --- |
| This control has failed the Westgard 1:2S rule. | 1 control > 2 SD |
| This control has failed the Westgard 1:3S rule. | 1 control > 3 SD |
| This control has failed the Westgard 2:2S rule. | 2 consecutive controls > 2 SD |
| This control has failed the Westgard 1:3S 2:2S rule. | Combined 1:3S and 2:2S |
| This control has failed the Westgard 7T rule. | 7 consecutive on same side of mean |
| This control has failed the Westgard 7T and 1:3S rule. | Combined 7T and 1:3S |

Error Outcomes - Standards/Calibration

| Outcome | Meaning |
| --- | --- |
| Standards have failed. The error must be resolved before further evaluation of wells | Standard curve failed |
| In-run standards failed. | Standards in current run failed |
| BAD_EFFICIENCY | PCR efficiency outside acceptable range |
| BAD_GRADIENT | Standard curve gradient unacceptable |
| INSUFFICIENT_STANDARD_CONTROLS | Not enough valid standards |
| STANDARD_WITHOUT_QUANT | Standard missing quantification |

Error Outcomes - Inhibition

| Outcome | Meaning |
| --- | --- |
| The IC is inhibited | Internal control inhibited |
| Internal Control is inhibited. Report 'INHN'. | IC inhibition with report code |
| SYSTEMIC_INHIBITON_DETECTED | Systemic inhibition across run |
| RXT. Multiple internal control failures. Re-extract run. Possible systemic failure. | Multiple IC failures |
| Cannot run IC rule due to no negative controls in run | IC rule validation impossible |

Error Outcomes - Discrepancies

| Outcome | Meaning |
| --- | --- |
| There are one or more classification discrepancies. | Instrument vs pcr.ai mismatch |
| Review required. The instrument amplification classification of this patient does not match the pcr.ai classification. | Patient discrepancy |
| Review required. The instrument amplification classification of this control does not match the pcr.ai classification. | Control discrepancy |
| Review required. The amplification classification of this well does not match its previous classification. | Repeat discrepancy - classification |
| Review required. The quantity of this well is not within 0.5Log of its previous quantity. | Repeat discrepancy - quantity |

Error Outcomes - Associated Controls

| Outcome | Meaning |
| --- | --- |
| An associated extraction control has failed. That error must be resolved before this well can be exported. | Linked extraction control error |
| This well is missing the required associated extraction controls. Review required. | Missing extraction control link |
| An associated control has failed. That error must be resolved before this well can be exported. | Generic associated control failure |

Error Outcomes - Missing Mixes

| Outcome | Mix Referenced |
| --- | --- |
| Mixes missing or in error. Reanalyze this well after uploading NOR1 or resolving error | NOR1 |
| Mixes missing or in error. Reanalyze this well after uploading NOR2 or resolving error | NOR2 |
| Mixes missing or in error. Reanalyze this well after uploading GAPD or resolving error | GAPD |

Error Outcomes - Special Rules

| Outcome | Meaning |
| --- | --- |
| NOR rule failure. Re-extract sample | Norovirus-specific rule failure |
| 'The control outside of the expected range. ' | Control quantity out of range |
| 'Multiple QC errors on run. ' | Multiple QC failures |
| Well should be replated due to multiple qc errors. | Replating recommendation |

11.4 Mix Names

Mixes define the analyte/target assays tested in each well.

| Mix | Full Name | Category |
| --- | --- | --- |
| NOR1 | Norovirus Genogroup I | Viral |
| NOR2 | Norovirus Genogroup II | Viral |
| CMV | Cytomegalovirus | Viral |
| BKV | BK Virus | Viral |
| EBV | Epstein-Barr Virus | Viral |
| HDV | Hepatitis D Virus | Viral |
| HEV | Hepatitis E Virus | Viral |
| EBVQ | EBV Quantitative | Viral (Quant) |
| GAPD | GAPDH | Control/IC |
| ENT | Enterovirus | Viral |

Notes:

  • Most tests use one or two mixes per well
  • Norovirus tests commonly use NOR1+NOR2 combination
  • GAPD often serves as internal control
  • "Q" suffix typically indicates quantitative assay variant

11.5 Sample Roles

Sample roles classify the purpose of each well.

| Role | Full Name | Description |
| --- | --- | --- |
| Patient | Patient Sample | Clinical specimen |
| NEC | Negative Extraction Control | Extraction negative control |
| POS | Positive Control | Positive amplification control |
| HI POS | High Positive Control | High-level positive control |
| LO POS | Low Positive Control | Low-level positive control |
| S2 | Standard 2 | Calibration standard (level 2) |
| S4 | Standard 4 | Calibration standard (level 4) |
| S6 | Standard 6 | Calibration standard (level 6) |
| CC1 | Crossover Control 1 | Crossover validation control |
| CC2 | Crossover Control 2 | Crossover validation control |

Specialized Roles (EBV Tests):

| Role | Description |
| --- | --- |
| EBVPC | EBV Positive Control |
| EBVNC | EBV Negative Control |
| QEBVLPC | EBV Quantitative Low Positive Control |
| QEBVNC | EBV Quantitative Negative Control |

11.6 Run Status Values

Run-level status messages indicating overall run state.

Reanalysis Required Statuses

| Status | Trigger |
| --- | --- |
| Reanalysis required (Missing mixes uploaded) | Missing mix subsequently uploaded |
| Reanalysis required (Westgard) | Westgard rule resolution applied |
| Reanalysis required (Missing mixes) | Waiting for missing mix upload |

Archived-run statuses:

| Status | Export State |
| --- | --- |
| Results for wells in this run may be affected by recently archived runs. Reanalysis required. ALL_WELLS_READY_FOR_EXPORT | All wells ready |
| Results for wells in this run may be affected by recently archived runs. Reanalysis required. NO_EXPORT_ERRORS_TO_RESOLVE | No export blockers |
| Results for wells in this run may be affected by recently archived runs. Reanalysis required. SOME_WELLS_READY_FOR_EXPORT_WITH_ERRORS_TO_RESOLVE | Partial export ready |

Edited-well statuses:

| Status | Export State |
| --- | --- |
| Results for wells in this run may be affected by recently edited wells. Reanalysis required. All wells ready for export | All wells ready |
| Results for wells in this run may be affected by recently edited wells. Reanalysis required. No export - errors to resolve | Errors present |

Resolved-well statuses:

| Status | Export State |
| --- | --- |
| Results for wells in this run may be affected by recently resolved wells. Reanalysis required. All wells ready for export | All wells ready |
| Results for wells in this run may be affected by recently resolved wells. Reanalysis required. No export - errors to resolve | Errors present |

Error Statuses

| Status | Meaning |
| --- | --- |
| No Resolution No export - errors to resolve | Errors present, no resolution applied |

11.7 Common Observations/Targets

Observation names used in observation-level assertions.

| Observation | Description |
| --- | --- |
| NOR1 | Norovirus GI |
| NOR2 | Norovirus GII |
| CMV | Cytomegalovirus |
| BKV | BK Virus |
| IC | Internal Control |
| QIPC | Quantitative Internal Positive Control |
| QBK | Quantitative BK |
| QBKQ | BK Quantitative |
| QMPXV | Monkeypox Virus Quantitative |
| QOPXV | Orthopoxvirus Quantitative |
| GAPDH | GAPDH control |
| HDV | Hepatitis D Virus |
| HEV | Hepatitis E Virus |
| ZIKA | Zika Virus |

12. Workflow Patterns

Complete workflow examples demonstrating typical test patterns.

12.1 Basic Detection Test

Scenario: Upload a run file, verify patient sample detected, control passed

```gherkin
Scenario: Basic NOR1 Detection
  Given The configuration "Viracor v3.xlsx" is loaded
  When Upload the run file "NORO_101.json"
  And Open the run file "NORO_101.json"
  Then well "A1" should have "NOR1" mix
  And well "A1" should have "Patient" sample role
  And well "A1" observation "NOR1" should have "Pos" final cls
  And well "A1" observation "NOR1" should have "31" final ct
  And well "A1" should have "Detected" outcome
  And well "G1" should have "POS" sample role
  And well "G1" should have "Control Passed" outcome
```

Pattern Notes:

  • Configuration loaded first
  • Upload and open run file
  • Verify well properties (mix, role)
  • Check observation-level results (cls, ct)
  • Assert final outcome
  • Verify controls

12.2 Control Failure with Resolution

Scenario: Positive control fails, apply re-extract resolution, verify RXT outcome

```gherkin
Scenario: POS Control Failure Resolution
  Given The configuration "Viracor v3.xlsx" is loaded
  When Upload the run file "040122-0000234-NOR1NOR2.json"
  And Open the run file "040122-0000234-NOR1NOR2.json"
  Then well "G1" should have "NOR1" mix
  And well "G1" should have "The positive control has failed." outcome
  And well "H1" should have "An associated extraction control has failed. That error must be resolved before this well can be exported." outcome
  When Apply resolution to well "G1" with "Re-extract all samples"
  And Re analyse the run file
  Then well "G1" should have "RXT" outcome
  And well "H1" should have "RXT" outcome
```

Pattern Notes:

  • Initial state shows control failure
  • Associated wells show cascade error
  • Resolution applied to root cause well
  • Reanalysis triggered
  • All affected wells now show RXT (re-extract) outcome

12.3 Missing Mix Multi-Run Workflow

Scenario: Combined outcome test with missing mix, upload missing mix, reanalyze

```gherkin
Scenario: Missing Mix NOR2 Upload
  Given The configuration "Viracor v3.xlsx" is loaded
  When Upload the run file "051222-0000443-NOR1NOR2.json"
  And Open the run file "051222-0000443-NOR1NOR2.json"
  Then well "H1" should have "NOR1" mix
  And well "H1" should have "Mixes missing or in error. Reanalyze this well after uploading NOR2 or resolving error" outcome
  When Upload the run file "051222-0000443B-NOR2.json"
  Then The run file "051222-0000443-NOR1NOR2.json" should contains run status "Reanalysis required (Missing mixes uploaded)"
  When Open the run file "051222-0000443-NOR1NOR2.json"
  And Re analyse the run file
  Then well "H1" should have "Not Detected" outcome
```

Pattern Notes:

  • Initial upload incomplete (missing NOR2 mix)
  • Well shows missing mix error
  • Second run file uploaded with missing mix
  • First run file status changes to reanalysis required
  • Navigate back to first run
  • Reanalyze
  • Well now has final outcome

12.4 Westgard Rule Override

Scenario: Control fails Westgard 1:2S rule, override error

```gherkin
Scenario: Westgard 12S Override
  Given The configuration "Viracor 2.19.0 Test.xlsx" is loaded
  When Upload the run file "WESTGARD_12S.json"
  And Open the run file "WESTGARD_12S.json"
  Then well "A1" should have "CC1" sample role
  And well "A1" should have "This control has failed the Westgard 1:2S rule." outcome
  When Apply resolution to well "A1" with "Ignore 12S error"
  And Re analyse the run file
  Then well "A1" should have "Control Passed" outcome
```

Pattern Notes:

  • Westgard failure detected
  • Specific Westgard error message
  • Matching resolution type applied
  • Reanalysis clears error
  • Control now passes

Appendix A: Context Class Reference

| Class | Purpose |
|---|---|
| BaseFeatureContext | Base class with database refresh, authentication, file handling |
| FeatureContext | Domain-specific step definitions for run file testing |

Appendix B: Step Definition Quick Reference

| # | Step | Category |
|---|---|---|
| 1 | Given The configuration :configFileName is loaded | Configuration |
| 2 | When Upload the run file :fileName | Run File |
| 3 | When Open the run file :runName | Run File |
| 4 | When Edit well :wellNumber with property :property and value :value | Well Edit |
| 5 | When Apply resolution to well :wellNumber with :resolution | Resolution |
| 6 | When Apply resolution to well :wellNumber with :resolution and :type to observation :observation with :value | Resolution |
| 7 | When Re analyse the run file | Run File |
| 8 | Then The run file :fileName should contains run status :runStatus | Run File |
| 9 | Then well :wellNumber should have :outcomeMessage outcome | Well Assertion |
| 10 | Then well :wellNumber should have :mixName mix | Well Assertion |
| 11 | Then well :wellNumber should have :isCrossover is crossover | Well Assertion |
| 12 | Then well :wellNumber should have :crossoverRoleAlias is crossover role | Well Assertion |
| 13 | Then well :wellNumber should have :extractionDate extraction date | Well Assertion |
| 14 | Then well :wellNumber should have :sampleRole sample role | Well Assertion |
| 15 | Then well :wellNumber should have :sampleName sample name | Well Assertion |
| 16 | Then well :wellNumber should have :extractionInstrumentName extraction instrument | Well Assertion |
| 17 | Then well :wellNumber should have :batchNumber batch number | Well Assertion |
| 18 | Then well :wellNumber should have :specimenName specimen name | Well Assertion |
| 19 | Then well :wellNumber observation :targetName should have :finalCls final cls | Observation Assertion |
| 20 | Then well :wellNumber observation :targetName should have :finalCt final ct | Observation Assertion |
| 21 | Then well :wellNumber observation :targetName should have :quantity quantity | Observation Assertion |
| 22 | When Archive the :fileName | Run File |
| 23 | Then well :wellNumber should have :accession accession | Well Assertion |
| 24 | Then run target :targetName in mix :mixName should have :errorCode target error | Run Target Assertion |
| 25 | Given the client configuration :configName is set to :configValue | Configuration |

Appendix C: Regex Pattern Summary

Quick reference for all step definition regex patterns.

Configuration:
^The configuration "([^"]+)" is loaded$

Run File Actions:
^Upload the run file "([^"]+)"$
^Open the run file "([^"]+)"$
^Re analyse the run file$
^Archive the "([^"]+)"$

Well Resolutions:
^Apply resolution to well "([A-H][0-9]{1,2})" with "([^"]+)"$
^Apply resolution to well "([A-H][0-9]{1,2})" with "([^"]+)" and "([^"]+)" to observation "([^"]+)" with "([^"]+)"$

Well Editing:
^Edit well "([A-H][0-9]{1,2})" with property "([^"]+)" and value "([^"]+)"$

Well Assertions:
^well "([A-H][0-9]{1,2})" should have "([^"]+)" outcome$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" mix$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" sample role$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" accession$
^well "([A-H][0-9]{1,2})" should have "(true|false)" is crossover$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" sample name$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" extraction date$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" extraction instrument$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" batch number$
^well "([A-H][0-9]{1,2})" should have "([^"]+)" specimen name$

Observation Assertions:
^well "([A-H][0-9]{1,2})" observation "([^"]+)" should have "([^"]+)" final cls$
^well "([A-H][0-9]{1,2})" observation "([^"]+)" should have "([^"]+)" final ct$
^well "([A-H][0-9]{1,2})" observation "([^"]+)" should have "([^"]+)" quantity$

Run Target Assertions:
^run target "([^"]+)" in mix "([^"]+)" should have "([^"]+)" target error$

Run Status:
^The run file "([^"]+)" should contains run status "([^"]+)"$
^No well have "([^"]+)"$
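The PCRE patterns above are compatible with Python's `re` syntax, so they can be sanity-checked outside the test suite. This is an illustration only; the production patterns live in the PHP step definitions:

```python
import re

# Well-outcome assertion pattern from the list above.
# The well group accepts rows A-H and a one- or two-digit column.
WELL_OUTCOME = re.compile(r'^well "([A-H][0-9]{1,2})" should have "([^"]+)" outcome$')

step = 'well "G1" should have "Control Passed" outcome'
match = WELL_OUTCOME.match(step)
assert match is not None
well, outcome = match.groups()
print(well, outcome)  # G1 Control Passed
```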

Appendix D: File Naming Conventions

Run Files

Norovirus Tests:

  • Format: {MMDDYY}-{number}-{mix}.json
  • Example: 040122-0000234-NOR1NOR2.json
  • Suffix indicates mixes: -NOR1, -NOR2, -NOR1NOR2
  • Sequential uploads use suffix: 040122-0000234B-NOR1.json, 040122-0000234C-NOR1.json
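For illustration, the convention can be captured with a small regex. The pattern and field names below are an assumption for this example, not part of the test suite:

```python
import re

# Hypothetical parser for {MMDDYY}-{number}-{mix}.json run-file names,
# allowing the optional sequence letter used by sequential uploads
# (e.g. 040122-0000234B-NOR1.json).
RUN_FILE = re.compile(r'^(\d{6})-(\d+)([A-Z]?)-([A-Z0-9]+)\.json$')

date, number, seq, mixes = RUN_FILE.match("040122-0000234B-NOR1.json").groups()
print(date, number, seq, mixes)  # 040122 0000234 B NOR1
```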

Test-Specific Files:

  • Descriptive names: RQUANTASQUAL_2439_1.json, COVID_OUTCOMES_NEW.json
  • Version suffixes: _1, _2, _II
  • Test type prefixes: MPX_, PICQUAL_, LINEAR_REGRESSION_

Configuration Files

Lab-Specific:

  • Format: {Lab} {version}.xlsx or {Lab}_{version}.xlsx
  • Examples: Viracor v3.xlsx, Quest_EZ_PP_v30.xlsx

Environment:

  • Viracor_PRE_PROD.xlsx - Pre-production
  • Viracor_PROD.xlsx - Production
  • Viracor_PPP.xlsx - Production variant

Test-Specific:

  • Exclude_from_IC_delta_check_1.xlsx - Specific feature test
  • SYSTEMIC_INHIBITION.xlsx - Inhibition test config

Behat Optimizer (Parallel Config-Consolidated Runs)

The behat-optimizer.py script (tests/scripts/behat-optimizer.py) automates parallel Behat execution by consolidating feature files that share identical configuration xlsx files.

How It Works

  1. Scan -- Walks all .feature files under tests/exports/cucumber/, extracts config file references from Given The configuration ... is loaded steps.
  2. Hash -- Computes SHA-256 of each referenced xlsx config file. Features referencing the same config (identical hash) are grouped together.
  3. Consolidate -- Generates merged .feature files with @USE_SAME_CONFIG on line 1. Background sections are inlined into each scenario. Scenario Outlines with Examples tables are preserved. Placeholder configs (e.g., from Scenario Outline <config> columns) are resolved.
  4. Run -- Distributes consolidated files across DB pool workers using LPT (Longest Processing Time) bin-packing for load balancing. Each worker gets its own DB (pcrai_test_01 through pcrai_test_10) and log file.
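The scan-and-hash stages can be sketched as follows. This is a simplified illustration; the function name and directory layout are assumptions, not the script's actual internals:

```python
import hashlib
import re
from collections import defaultdict
from pathlib import Path

CONFIG_STEP = re.compile(r'The configuration "([^"]+)" is loaded')

def group_features_by_config(feature_dir: str, config_dir: str) -> dict:
    """Group .feature files by the SHA-256 of the config xlsx they load.

    Features whose configs hash identically land in the same group and
    can be consolidated into one file under @USE_SAME_CONFIG.
    """
    groups = defaultdict(list)
    for feature in Path(feature_dir).rglob("*.feature"):
        match = CONFIG_STEP.search(feature.read_text())
        if not match:
            continue  # no config step; feature cannot be grouped
        config = Path(config_dir) / match.group(1)
        digest = hashlib.sha256(config.read_bytes()).hexdigest()
        groups[digest].append(feature)
    return groups
```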

CLI Reference

| Subcommand | Description | Key Options |
|---|---|---|
| scan | Analyze config groupings and optimization potential | --show-groups for detailed per-group file listing |
| generate | Create consolidated .feature files | --dry-run preview, --permanent keep files |
| run | Generate + execute in parallel | --workers N (default 10), --pre-migrate, --skip-migrate-fresh, --skip-list PATH, --permanent, --dir PATH, --target-version X.Y.Z |
| skip-json | Generate skip-list JSON from @SUPERSEDED_BY_* tags | --init create initial file, --output PATH |

Default Behavior: migrate:fresh per File

The optimizer runs migrate:fresh before each consolidated file by default. This ensures a clean database state and avoids FK constraint cascade failures that occur when leftover data from a previous file's tests conflicts with the next file's migrations.

The --skip-migrate-fresh flag opts into the old (faster) behavior where migration is skipped between files. This is useful for quick smoke tests but produces a ~73% pass rate due to FK constraint errors.

Multi-Config File Detection

The optimizer automatically detects feature files where different scenarios reference different config xlsx files. These files are split into separate virtual entries, each with its own config hash and group assignment. Previously, multi-config files were assigned to a single group based on the first config found, causing silent wrong-config failures for scenarios referencing a different config.
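A minimal sketch of that detection, reusing the same config-step pattern (illustrative only; the optimizer's real implementation may differ):

```python
import re

CONFIG_STEP = re.compile(r'The configuration "([^"]+)" is loaded')

def configs_in_feature(feature_text: str) -> list:
    """Distinct config files referenced by a feature, in order of appearance.

    More than one entry means the file must be split into per-config
    virtual entries before hash-based grouping."""
    seen = []
    for name in CONFIG_STEP.findall(feature_text):
        if name not in seen:
            seen.append(name)
    return seen

print(configs_in_feature('Given The configuration "A.xlsx" is loaded'))  # ['A.xlsx']
```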

Commands

```bash
cd /shared/code/req_docs/code

# Full suite: V3 API + Legacy API + Browser (~30 min with pre-migrate)
python3 tests/scripts/behat-optimizer.py run --workers 10 --pre-migrate --rerun-failures \
  --suite all --browser-workers 6 \
  --traceability tests/catalogue/unified-traceability.json \
  --render-md docusaurus/docs/traceability/str-release-test-report.md

# V3 API only (excludes legacy, ~25 min)
python3 tests/scripts/behat-optimizer.py run --workers 10 --dir tests/exports/cucumber/v3

# Run tests for a specific deployed version (excludes other version's scenarios)
python3 tests/scripts/behat-optimizer.py run --workers 10 --target-version=3.0.0

# Scan: see config groupings and optimization potential
python3 tests/scripts/behat-optimizer.py scan
python3 tests/scripts/behat-optimizer.py scan --show-groups  # detailed view

# Generate consolidated files (dry-run first)
python3 tests/scripts/behat-optimizer.py generate --dry-run
python3 tests/scripts/behat-optimizer.py generate

# Generate skip-list from @SUPERSEDED_BY_* tags
python3 tests/scripts/behat-optimizer.py skip-json --init
python3 tests/scripts/behat-optimizer.py run --skip-list tests/scripts/behat-skip-list.json
```

Architecture Notes

  • Load balancing: LPT (Longest Processing Time first) bin-packing assigns the largest consolidated files to the least-loaded worker, minimizing total wall-clock time.
  • Per-worker logs: Each worker writes to /tmp/behat-logs/worker-NN.log for debugging.
  • JSON results: Run results are output as structured JSON for programmatic consumption.
  • Output directory: Consolidated files go to tests/exports/cucumber/_consolidated/ (gitignored). Use --permanent to keep them after a run; otherwise they are cleaned up automatically.
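LPT bin-packing itself is simple enough to sketch. This is an illustration; the file costs here stand in for whatever size metric the optimizer actually uses:

```python
import heapq

def lpt_assign(file_costs: dict, workers: int) -> list:
    """LPT bin-packing: take files in decreasing cost order and always
    hand the next one to the currently least-loaded worker."""
    heap = [(0.0, w) for w in range(workers)]  # (current load, worker id)
    heapq.heapify(heap)
    bins = [[] for _ in range(workers)]
    for name, cost in sorted(file_costs.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)   # least-loaded worker
        bins[w].append(name)
        heapq.heappush(heap, (load + cost, w))
    return bins

bins = lpt_assign({"big.feature": 40, "mid.feature": 25, "small.feature": 10}, 2)
print(bins)  # [['big.feature'], ['mid.feature', 'small.feature']]
```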

Skip-List JSON

The skip-list allows excluding specific feature files from a run. Generate the initial file from @SUPERSEDED_BY_* tags in the feature files:

```bash
python3 tests/scripts/behat-optimizer.py skip-json --init
```

This creates tests/scripts/behat-skip-list.json. Edit it manually to add or remove entries. Pass it to run:

```bash
python3 tests/scripts/behat-optimizer.py run --skip-list tests/scripts/behat-skip-list.json
```
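Conceptually, a skip-list is just a set of feature paths subtracted from the run. The JSON key below is a guess for illustration only; inspect the generated tests/scripts/behat-skip-list.json for the real schema:

```python
import json

# HYPOTHETICAL skip-list shape -- the real schema is whatever
# `skip-json --init` generates; check the generated file for actual keys.
skip_list_json = '{"skip": ["features/old_noro.feature"]}'

def filter_features(features: list, skip_list: dict) -> list:
    """Drop any feature path present in the skip-list."""
    skipped = set(skip_list.get("skip", []))
    return [f for f in features if f not in skipped]

kept = filter_features(
    ["features/old_noro.feature", "features/new_noro.feature"],
    json.loads(skip_list_json),
)
print(kept)  # ['features/new_noro.feature']
```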

Edge Cases Handled

  • Scenario Outlines with Examples: Examples tables are preserved intact during consolidation.
  • Background inlining: Background steps from source files are inlined into each scenario in the consolidated file (since the consolidated file has its own Background for config loading).
  • Placeholder config resolution: Scenario Outlines using <config> placeholders in the config step are resolved before hashing to ensure correct grouping.

Time Estimates

  • First scenario per config group: ~2 min (full config upload and migration).
  • Subsequent scenarios (same config, @USE_SAME_CONFIG): ~15 sec each.
  • Default mode (migrate:fresh per file): ~118 min wall-clock with 10 workers (768 scenarios, 99% pass rate). Slower due to MySQL DDL lock contention from 10 concurrent migrate:fresh operations (~7 min per file vs ~30s solo).
  • Fast mode (--skip-migrate-fresh): ~28 min wall-clock with 10 workers (73% pass rate). FK constraint cascade failures account for most failures.
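The per-scenario figures above imply a simple back-of-the-envelope estimate for one config group (an illustrative helper, not part of the optimizer):

```python
def estimate_group_seconds(scenarios: int, first_s: int = 120, rest_s: int = 15) -> int:
    """Estimated wall-clock for one config group: ~2 min for the first
    scenario (full config upload), ~15 s per @USE_SAME_CONFIG follow-up."""
    if scenarios <= 0:
        return 0
    return first_s + rest_s * (scenarios - 1)

print(estimate_group_seconds(20))  # 120 + 15*19 = 405 seconds (~6.8 min)
```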

Performance: MySQL DDL Lock Contention

When running 10 workers in default mode, each worker's migrate:fresh acquires MySQL metadata locks that block other workers' DDL operations. This serializes what should be parallel migrations, inflating per-file time from ~30s (solo) to ~7 min (10 concurrent).

Mitigation strategies:

  • Reduce workers (e.g., --workers 4) -- less contention, faster per-file, similar total wall-clock time.
  • Use --skip-migrate-fresh -- fastest, but ~27% of tests fail from stale data/FK cascades. Useful for targeted reruns where DB state is already clean.
  • Future: truncate-all approach -- replacing migrate:fresh with table truncation would avoid DDL entirely. Not yet implemented.