File Import (API) Requirements
Version: v3.0.0
Status: Normative (text), Illustrative (diagrams only)
Scope: Importing, validating, and processing PCR run files from thermocycler instruments
Domain: FILEIMPORT
Statement
The system shall automatically import, validate, and process PCR run files from thermocycler instruments when they appear in a monitored S3 folder. This includes parsing thermocycler data formats (.sds, .ixo), extracting well properties and observations, integrating with the DXAI Analyser for classification and CT determination, and validating all well data against configured rules.
The import pipeline maintains a defined folder structure for processing, error handling, and archival. It enforces duplicate detection, preserves file traceability for audit purposes, and persists both valid and invalid data so users can identify and correct problems. Special handling exists for crossover wells, wildcard target matching, and prepending fake cycles for analysis.
Behavior Overview (Illustrative)
[Diagram: high-level import workflow. Illustrative only; it does not specify UI layout, styling, or interaction details.]
Definitions
| Term | Definition |
|---|---|
| Runfile | A data file exported from a thermocycler containing well data, readings, and target information |
| DXAI Analyser | External API that performs classification (dxai_cls) and CT (dxai_ct) analysis on observation data |
| Mix | A configured combination of targets and dyes for an assay |
| Crossover Well | A well identified by the Y:XOVER tag in well label, used for cross-contamination detection |
| Prepend Cycles | Fake reading cycles added before actual readings to help identify premature amplification |
| Well Label Import Approach | Import method where well properties are extracted from structured label tags in the well name |
| Virtual Sample Label | Auto-generated label (batch-well_number) for patient wells with name "Unknown" |
| Calibration | Target-specific calibration data stored in S3 and referenced by URI |
Assumptions
- Thermocycler instruments produce files in supported formats (.sds, .ixo)
- S3 storage is configured with required folder structure (toPcrai, Processing, Problem_Files, etc.)
- DXAI Analyser API is available for analysis requests
- At least one mix is configured before import is attempted
- Calibration files are available in the configured S3 bucket
- File system triggers (S3 triggers) are configured for the toPcrai folder
- Auto_baseline property is only available in Well Label and Meta Data import approaches
- Crossover well identification is only supported for the Well Label import approach; the Meta Data and Run File Name approaches do not support it
Functional Requirements
File Ingestion (REQ-FILEIMPORT-001, REQ-FILEIMPORT-010, REQ-FILEIMPORT-011, REQ-FILEIMPORT-013)
FR-FILEIMPORT-001 Import Run Files from Monitored Folder
The system shall automatically import run files when they appear in the designated monitored folder.
Acceptance Criteria:
Trigger Behavior:
- The system shall trigger import automatically when a file appears in the monitored folder
- The system shall trigger import via S3 triggers on the toPcrai folder
File Validation:
- The system shall accept only files with the valid extensions .sds and .ixo
- The system shall reject files with any other extension
- The system shall enforce a configurable maximum file size (default 25 MB) and reject files that exceed it
- The system shall validate that files contain the expected data structure
Error Routing:
- The system shall route invalid files to the error folder with appropriate status
Error Handling:
- Invalid file extension: The system shall reject the file and move it to the error folder
- File exceeds size limit: The system shall reject the file with "File too large" error
- File lacks expected data: The system shall reject the file and move it to the error folder with status message
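The extension and size checks above can be sketched as a minimal gate function. This is an illustrative Python sketch, not the system's actual implementation (the components listed in the Implementation section are PHP/Laravel); the function and constant names are assumptions.

```python
import os

# Illustrative file-ingestion gate; names are assumptions, not the real API.
VALID_EXTENSIONS = {".sds", ".ixo"}        # valid_extensions option
MAX_FILE_SIZE_BYTES = 25 * 1024 * 1024     # max_file_size_mb default (25 MB)

def validate_run_file(filename: str, size_bytes: int) -> list:
    """Return rejection reasons; an empty list means the file is accepted."""
    errors = []
    _, ext = os.path.splitext(filename)
    if ext.lower() not in VALID_EXTENSIONS:
        errors.append("Invalid file extension")
    if size_bytes > MAX_FILE_SIZE_BYTES:
        errors.append("File too large")
    return errors
```

A file failing either check would be routed to the error folder with the corresponding status.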
Trace: Source: 3.0.0-File Import (API) (Rows 23, 24, 25, 26) | Jira: BT-637 | Tests: See scenarios
FR-FILEIMPORT-010 Manage Import Folder Structure
The system shall maintain a defined folder structure for file import, processing, error handling, and archival operations.
Acceptance Criteria:
Folder Structure:
- The system shall maintain the folder structure: User/Runs/toPcrai (upload), User/Runs/Processing (in-progress), User/Runs/Problem_Files (errors), User/LIMS_Reports (exports)
- The system shall store archive files in a separate S3 bucket not accessible to clients
File Routing:
- The system shall process files from the designated upload folder (toPcrai)
- The system shall move files to a processing folder during import
- The system shall archive successfully imported files to a separate storage location
- The system shall move failed imports to a problem files folder
- The system shall store LIMS export files in the designated reports folder
Trace: Source: 3.0.0-File Import (API) (Rows 17-22) | Jira: BT-637, BT-636 | Tests: See scenarios
FR-FILEIMPORT-011 Prevent Duplicate File Imports
The system shall detect and prevent the import of files that have already been imported.
Acceptance Criteria:
- The system shall reject import of a previously imported file
- The system shall provide an error indication when a duplicate is detected
- Duplicate detection shall not affect processing of subsequent different files
Error Handling:
- Duplicate file detected: The system shall reject the import and display a duplicate error message
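The requirement fixes the behavior (reject re-imports) but not the detection mechanism. A content hash is one plausible sketch; the in-memory set below stands in for whatever persistent store the system actually uses, and all names are assumptions.

```python
import hashlib

# Hypothetical duplicate guard; the spec does not mandate hashing.
_seen = set()  # stand-in for a persistent record of imported files

def is_duplicate(file_bytes: bytes) -> bool:
    """Return True if identical content was imported before."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in _seen:
        return True
    _seen.add(digest)
    return False
```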
Trace: Source: 3.0.0-File Import (API) (Row 16) | Jira: BT-1322 | Tests: See scenarios
FR-FILEIMPORT-013 Maintain File Traceability
The system shall maintain traceability by preserving the original file name throughout import processing.
Acceptance Criteria:
- All files produced during processing shall include the original file name
- Original file name shall be preserved for audit purposes
- User information shall be preserved along with file information
Trace: Source: 3.0.0-File Import (API) (Row 27) | Jira: [Missing - requires investigation] | Tests: See scenarios
Data Parsing (REQ-FILEIMPORT-002, REQ-FILEIMPORT-005, REQ-FILEIMPORT-006)
FR-FILEIMPORT-002 Parse Thermocycler Data to Database Variables
The system shall parse imported run file data and populate database variables from the thermocycler data fields.
Acceptance Criteria:
Data Extraction:
- The system shall extract Machine CT from the designated JSON path: observations[i].UserAnalysis.Cq
- The system shall convert thermocycler readings to JSON format
- The system shall use plain JSON format for data (BSON format deprecated)
Processing:
- The system shall determine extraction instrument from target label
- The system shall send data to the parser API for processing
Trace: Source: 3.0.0-File Import (API) (Rows 1, 13), 3.0.0-Api Fixes For Result Analysis | Jira: BT-103 | Tests: See scenarios | Related: External field mapping documentation
FR-FILEIMPORT-005 Parse Run File Names for Well Properties
The system shall parse run file names to extract well properties when using the Run File Name import approach.
Acceptance Criteria:
File Name Parsing:
- The system shall parse file names following the convention: mix_name.thermocycler_serial_number.protocol.batch.analyst_initials.file_extension
- The system shall extract mix name from file name
- The system shall extract thermocycler serial number from file name
- The system shall extract batch number from file name
Matching and Generation:
- The system shall match thermocycler to configuration by serial number
- The system shall generate virtual sample labels for patient wells with name "Unknown"
- The system shall generate virtual labels following the convention: batch-well_number (e.g., "466-A1")
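The file-name convention and virtual-label rule above can be sketched as follows. This is an illustrative Python sketch under the assumption that the mix name itself contains no dots (true of the documented examples); it is not the actual parser.

```python
def parse_run_file_name(filename: str) -> dict:
    """Split per the documented convention:
    mix_name.thermocycler_serial_number.protocol.batch.analyst_initials.extension
    Assumes the mix name contains no dots, as in the examples."""
    mix, serial, protocol, batch, initials, ext = filename.split(".")
    return {"mix_name": mix, "thermocycler_serial_number": serial,
            "protocol": protocol, "batch": batch,
            "analyst_initials": initials, "extension": ext}

def virtual_sample_label(batch: str, well_number: str, well_name: str) -> str:
    """Patient wells named "Unknown" get an auto-generated batch-well label."""
    return f"{batch}-{well_number}" if well_name == "Unknown" else well_name
```

For example, "Mix A.07.C049.466.MLS.eds" yields batch 466, so an "Unknown" well at A1 receives the virtual label 466-A1.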
Trace: Source: 3.0.0-Run File Parser (Rows 1, 2) | Jira: BT-3572 | Tests: BT-3625, See scenarios
FR-FILEIMPORT-006 Import Observation Properties from Run Files
The system shall extract and persist observation-level properties from run files during import.
Acceptance Criteria:
Baseline Properties:
- The system shall import baseline start and baseline end values when available
- The system shall extract baseline values from paths: wells[well].channels[channel].embedded.result.custom.baseline_start/baseline_end OR baseline_from/baseline_to
Auto-Baseline Property:
- The system shall read and persist the auto_baseline property
- Auto_baseline shall default to true if the property does not exist
- The system shall read auto_baseline from the path: runs[0]->mixes[each]->channels[each]->custom->auto_baseline
- The system shall interpret auto_baseline values as: "1" = true (automatic), "0" = false (manual)
Other Properties:
- The system shall extract concentration factor from run file notes
- The system shall perform concentration factor matching in a case-insensitive manner
- The system shall extract quantity from the designated JSON path: runs -> wells -> channels -> volume
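The auto_baseline interpretation and the baseline key fallback can be sketched as below. The dicts are stand-ins for the documented paths (runs[0]->mixes[each]->channels[each]->custom and wells[well].channels[channel].embedded.result.custom); this is illustrative, not the actual parser code.

```python
def read_auto_baseline(channel_custom: dict) -> bool:
    """Interpret auto_baseline: "1" means automatic, "0" means manual,
    and a missing property defaults to true."""
    value = channel_custom.get("auto_baseline")
    return True if value is None else value == "1"

def read_baseline(result_custom: dict):
    """Read baseline bounds, trying both documented key pairs:
    baseline_start/baseline_end, else baseline_from/baseline_to."""
    start = result_custom.get("baseline_start", result_custom.get("baseline_from"))
    end = result_custom.get("baseline_end", result_custom.get("baseline_to"))
    return start, end
```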
Trace: Source: 3.0.0-Import Baseline Start End From Run File (Row 1), 3.0.0-Run Import - Parsed Json (Row 1), 3.0.0-Machine File Parser - Read Quantity (Row 1), 3.0.0-Run Import - Read Automatic Baseline Check Property (Rows 1, 2) | Jira: BT-3601, BT-3899, BT-4074 | Tests: BT-3952, BT-4093, See scenarios
Validation (REQ-FILEIMPORT-004, REQ-FILEIMPORT-012)
FR-FILEIMPORT-004 Validate Well Data During Import
The system shall validate each well's data during import and generate appropriate error codes when validation fails.
Acceptance Criteria:
Date and Label Validation:
- The system shall validate that the extraction date is not later than the file creation date
- The system shall validate that sample labels are recognized
Required Field Validation:
- The system shall validate required fields for patient samples (accession, batch, specimen)
- The system shall validate test codes against configuration when applicable
- The system shall apply test code validation based on client-specific sample type configuration
Configuration Matching:
- The system shall validate mix presence and matching
- The system shall validate thermocycler matches configuration
- The system shall validate extraction instrument for patient samples
Data Format Validation:
- The system shall validate passive dye readings contain no zeros
- The system shall validate tissue weight, when present, as an optional decimal value with at most 6 decimal digits, in the range 0 to 100000000 inclusive
Crossover Validation:
- The system shall validate crossover label structure (Y:XOVER requires D, E, T, R tags)
Error Display:
- Wells with validation errors shall display error outcome in run report
Error Handling:
- Extraction date > file creation: The system shall assign INVALID_EXTRACTION_DATE error
- Unrecognized sample label: The system shall assign SAMPLE_LABEL_IS_BAD error
- Missing accession on patient sample: The system shall assign ACCESSION_MISSING error
- Missing batch on patient sample: The system shall assign EXTRACTION_BATCH_MISSING error
- Missing specimen on patient sample: The system shall assign SPECIMEN_MISSING error
- Missing test code when required: The system shall assign TESTCODE_MISSING error
- Unknown test code: The system shall assign UNKNOWN_TESTCODE error
- Test code does not match mix: The system shall assign MIX_DIDNT_MATCH_TC error
- Missing mix: The system shall assign MIX_MISSING error
- Unknown thermocycler: The system shall assign THERMOCYCLER_UNKNOWN error
- Missing extraction instrument: The system shall assign EXTRACTION_INSTRUMENT_MISSING error
- Zero in passive dye readings: The system shall assign INVALID_PASSIVE_READINGS error
- Invalid tissue weight: The system shall assign INVALID_TISSUE_WEIGHT error
- Invalid crossover label: The system shall assign CROSSOVER_LABEL_ERROR error
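The tissue-weight rule above (optional decimal, at most 6 decimal digits, range 0 to 100000000 inclusive) can be sketched as a single check. The regex is an assumption consistent with the acceptance-test values (W: tag contents); this is not the actual validation code.

```python
import re

# Optional decimal, up to 6 decimal digits; no sign or exponent allowed.
_TISSUE_WEIGHT = re.compile(r"^\d+(\.\d{1,6})?$")

def tissue_weight_error(raw: str):
    """Return INVALID_TISSUE_WEIGHT for a bad value, else None."""
    if _TISSUE_WEIGHT.match(raw) and 0 <= float(raw) <= 100_000_000:
        return None
    return "INVALID_TISSUE_WEIGHT"
```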
Trace: Source: 3.0.0-File Import (API) (Rows 3-15), 3.0.0-Run Import - Tissue Weight Validation (Row 1), 3.0.0-Import Wells Identification - Crossover Wells (Row 3) | Jira: BT-4153, BT-4312 | Tests: BT-4008, See scenarios
FR-FILEIMPORT-012 Persist Invalid Well Data for Visibility
The system shall persist invalid well data values so users can view and correct problematic data.
Acceptance Criteria:
- The system shall store invalid values even when they fail validation
- Invalid values shall be retrievable for display
- Users shall be able to identify what needs correction
Trace: Source: 3.0.0-Persist Invalid Well Data for Visibility (Row 1) | Jira: BT-5525 | Tests: See scenarios
Analysis Integration (REQ-FILEIMPORT-003, REQ-FILEIMPORT-009)
FR-FILEIMPORT-003 Analyze Run Data Using DXAI Analyser
The system shall send run data to the DXAI analyser API to determine classification and CT values for each observation.
Acceptance Criteria:
Analysis Output:
- The system shall obtain classification (dxai_cls) from analyser output
- The system shall obtain CT value (dxai_ct) from analyser output
Data Preparation:
- The system shall specify calibration via S3 URI from kit configuration
- The system shall divide readings by passive dye when target requires it
- The system shall send readings in JSON format
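The passive-dye step in the data preparation above can be sketched as an element-wise division. This assumes equal-length series and relies on REQ-FILEIMPORT-004's guarantee that passive readings contain no zeros; names are illustrative.

```python
def prepare_readings(readings, passive, divide_by_passive: bool):
    """Divide each reading by the corresponding passive dye reading when the
    target requires it; otherwise pass readings through unchanged.
    Passive readings are non-zero per the INVALID_PASSIVE_READINGS check."""
    if not divide_by_passive:
        return list(readings)
    return [r / p for r, p in zip(readings, passive)]
```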
Error Handling:
- Analyser API unavailable: The system shall fail the import and display an error indicating the analysis service is unavailable
- Invalid calibration URI: The system shall fail the import and display an error indicating invalid or missing calibration data
Open Questions (Resolved):
| ID | Question | Resolution |
|---|---|---|
| OQ-01 | What should happen when the DXAI analyser API is unavailable? | Fail import |
| OQ-02 | What should happen when a calibration URI is invalid or calibration file is missing? | Fail import |
Trace: Source: 3.0.0-File Import (API) (Row 2), 3.0.0-Api Fixes For Result Analysis | Jira: BT-103 | Tests: See scenarios | Related: External DXAI documentation
FR-FILEIMPORT-009 Support Prepending Fake Cycles for Analysis
The system shall support prepending fake reading cycles when calling the DXAI analyser to enable detection of premature amplification.
Acceptance Criteria:
- The system shall prepend the configured number of fake cycles before the actual readings
- Prepended cycles shall be included in analysis request to DXAI
- Prepending cycles may affect classification results
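The prepend step can be sketched as a simple list operation. The spec fixes only the count (the prepend_cycles option, default 0); the fill value used for the fake cycles is an assumption of this sketch.

```python
def prepend_fake_cycles(readings, count: int, fill_value: float = 0.0):
    """Prepend `count` fake cycles before the actual readings sent to the
    DXAI analyser. The fill value is a placeholder assumption."""
    return [fill_value] * count + list(readings)
```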
Trace: Source: 3.0.0-Fake Reading Cycles for DXAI Analyser (Row 1) | Jira: BT-4829 | Tests: See scenarios
Well Identification (REQ-FILEIMPORT-007, REQ-FILEIMPORT-008)
FR-FILEIMPORT-007 Identify Crossover Wells During Import
The system shall identify crossover wells based on structured label tags when using the Well Label import approach.
Acceptance Criteria:
Tag Recognition:
- The system shall recognize Y:XOVER tag as indicator of potential crossover well
- Valid crossover labels shall contain D, E, T, and R tags
Role Determination:
- Wells with crossover label containing accession (A tag) shall be identified as Sample wells
- Wells with crossover label without accession shall be identified as Crossover role
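The tag recognition and role rules above can be sketched as follows, using the pipe-delimited label format from the acceptance tests. The delimiter handling is an assumption; this is illustrative, not the actual identification code.

```python
def parse_label_tags(label: str) -> dict:
    """Parse a structured well label such as
    |A:12345|T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|
    into a tag-to-value dict."""
    tags = {}
    for part in label.strip("|").split("|"):
        if ":" in part:
            key, value = part.split(":", 1)
            tags[key] = value
    return tags

def crossover_role(label: str):
    """Apply the Y:XOVER rules: D, E, T, R tags are required, and the
    A (accession) tag decides Sample vs Crossover. Returns None for wells
    without the Y:XOVER tag."""
    tags = parse_label_tags(label)
    if tags.get("Y") != "XOVER":
        return None
    if not all(t in tags for t in ("D", "E", "T", "R")):
        return "CROSSOVER_LABEL_ERROR"
    return "Sample" if "A" in tags else "Crossover"
```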
Trace: Source: 3.0.0-Import Wells Identification - Crossover Wells (Rows 1-5) | Jira: BT-4312 | Epic: BT-4307 | Tests: See scenarios
FR-FILEIMPORT-008 Identify Targets Using Wildcard Matching
The system shall support wildcard characters when matching run file targets to configured targets.
Acceptance Criteria:
- The system shall recognize the asterisk (*) as a wildcard character
- The wildcard shall match any sequence of characters in the target name
- The matched target shall be recorded under the configured target name (including the wildcard)
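The wildcard rule can be sketched by translating the configured name to a regular expression. Full-string anchoring is an assumption consistent with the acceptance tests; this is illustrative, not the actual matcher.

```python
import re

def matches_wildcard(configured: str, runfile_target: str) -> bool:
    """Match a run-file target name against a configured name in which '*'
    matches any sequence of characters (including an empty one)."""
    pattern = ".*".join(re.escape(piece) for piece in configured.split("*"))
    return re.fullmatch(pattern, runfile_target) is not None
```

For example, the configured name "Target*Name" matches the run-file target "Target - Name", and the well is then recorded under "Target*Name".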
Trace: Source: 3.0.0-Identify WildCard Target Names (Row 1) | Jira: BT-5283 | Epic: BT-5282 | Tests: See scenarios
Configuration Options
| Option | Default | Description | Affects |
|---|---|---|---|
| max_file_size_mb | 25 | Maximum file size for import | REQ-FILEIMPORT-001 |
| valid_extensions | .sds, .ixo | Accepted file extensions | REQ-FILEIMPORT-001 |
| prepend_cycles | 0 | Number of fake cycles to prepend per target | REQ-FILEIMPORT-009 |
| calibration_s3_bucket | chill-rabbit-calibrations | S3 bucket for calibration files | REQ-FILEIMPORT-003 |
| well_import_approach | Well Label | Import approach (Well Label, Meta Data, Run File Name) | REQ-FILEIMPORT-005, REQ-FILEIMPORT-006, REQ-FILEIMPORT-007 |
UI Notes (Illustrative)
FR-FILEIMPORT-001 UI Specifications
- Files are uploaded to the toPcrai folder via S3 interface or application UI
- Invalid files are moved to Problem_Files/error folder
FR-FILEIMPORT-004 UI Specifications
- Error codes are displayed in the well outcome column of the run report
- Users can fix some errors by editing the well; others require re-import
FR-FILEIMPORT-011 UI Specifications
- Duplicate error message is displayed in the Status column of the Upload Runs screen table
FR-FILEIMPORT-012 UI Specifications
- Invalid values are displayed in their respective UI locations:
- accession
- extraction date
- extraction instrument
- tissue_weight
- crossover_role_alias
- batch_number
- testcode_name
- quantity_multiplier
Implementation (Illustrative)
| Component | Type | Path | Requirements |
|---|---|---|---|
| RunDataParseJob | Job | App\Jobs\RunDataParseJob | REQ-FILEIMPORT-001, REQ-FILEIMPORT-002, REQ-FILEIMPORT-011 |
| RunStoreJob | Job | App\Jobs\RunStoreJob | REQ-FILEIMPORT-001, REQ-FILEIMPORT-010 |
| RunDataParseAction | Action | App\Actions\RunDataParseAction | REQ-FILEIMPORT-002, REQ-FILEIMPORT-011 |
| StoreRunFilesAction | Action | App\Actions\RunFiles\StoreRunFilesAction | REQ-FILEIMPORT-001, REQ-FILEIMPORT-010 |
| DispatchNextFileToParsingAction | Action | App\Actions\RunFiles\DispatchNextFileToParsingAction | REQ-FILEIMPORT-001 |
| RetryFailedRunfilesAction | Action | App\Actions\RunFiles\RetryFailedRunfilesAction | REQ-FILEIMPORT-001 |
| CreateRunAction | Action | App\Actions\Runs\CreateRunAction | REQ-FILEIMPORT-002, REQ-FILEIMPORT-003, REQ-FILEIMPORT-004 |
| ParsedJson | Service | App\RunFileConverter\ParsedJson | REQ-FILEIMPORT-002, REQ-FILEIMPORT-003, REQ-FILEIMPORT-005, REQ-FILEIMPORT-006 |
| WildCardTargetNameMatcher | Service | App\RunFileConverter\Support\WildCardTargetNameMatcher | REQ-FILEIMPORT-008 |
| GetAutomaticBaselineFromMixChannel | Service | App\RunFileConverter\GetAutomaticBaselineFromMixChannel | REQ-FILEIMPORT-006 |
| RunFileImported | Event | App\Events\RunFileImported | REQ-FILEIMPORT-013 |
| RunFileImportProgressBroadcast | Event | App\Events\RunFileImportProgressBroadcast | REQ-FILEIMPORT-001 |
Traceability Matrix
| Requirement | Title | Verification | Implementation | Test Cases | Status |
|---|---|---|---|---|---|
| REQ-FILEIMPORT-001 | Import Run Files from Monitored Folder | Test | RunDataParseJob, StoreRunFilesAction | [Pending] | Draft |
| REQ-FILEIMPORT-002 | Parse Thermocycler Data to Database Variables | Test | RunDataParseAction, ParsedJson | [Pending] | Draft |
| REQ-FILEIMPORT-003 | Analyze Run Data Using DXAI Analyser | Test | CreateRunAction, ParsedJson | [Pending] | Draft |
| REQ-FILEIMPORT-004 | Validate Well Data During Import | Test | CreateRunAction | BT-4008 | Draft |
| REQ-FILEIMPORT-005 | Parse Run File Names for Well Properties | Test | ParsedJson | BT-3625 | Draft |
| REQ-FILEIMPORT-006 | Import Observation Properties from Run Files | Test | ParsedJson, GetAutomaticBaselineFromMixChannel | BT-3952, BT-4093 | Draft |
| REQ-FILEIMPORT-007 | Identify Crossover Wells During Import | Test | ParsedJson | [Pending] | Draft |
| REQ-FILEIMPORT-008 | Identify Targets Using Wildcard Matching | Test | WildCardTargetNameMatcher | [Pending] | Draft |
| REQ-FILEIMPORT-009 | Support Prepending Fake Cycles for Analysis | Test | ParsedJson | [Pending] | Draft |
| REQ-FILEIMPORT-010 | Manage Import Folder Structure | Test | StoreRunFilesAction, RunStoreJob | [Pending] | Draft |
| REQ-FILEIMPORT-011 | Prevent Duplicate File Imports | Test | RunDataParseAction | [Pending] | Draft |
| REQ-FILEIMPORT-012 | Persist Invalid Well Data for Visibility | Test | CreateRunAction | [Pending] | Draft |
| REQ-FILEIMPORT-013 | Maintain File Traceability | Test | RunFileImported | [Pending] | Draft |
Notes
- Machine CT is populated from observations[i].UserAnalysis.Cq
- External field mapping documentation: https://docs.google.com/document/d/1PXD826Ba-Kb2qvYR9IWpZSat_w9bQJqJvkOIYgJ6pL4/
- External DXAI documentation: https://docs.google.com/document/d/1sT94jf82al3HZis1PsDDJlssodoTPrTeCUs8aqLtwB0/
- Calibrations stored in "chill-rabbit-calibrations" S3 bucket
- BSON format has been replaced with plain JSON for readings
- Legacy file name parsing example: HPV06.07.C049.466.MLS.eds parses to mix_name: HPV06, thermocycler_serial_number: 07, protocol: C049, batch: 466, analyst_initials: MLS
Open Questions
| ID | Question | Source | Owner | Date Raised |
|---|---|---|---|---|
| OQ-001 | Jira reference missing for REQ-FILEIMPORT-013 (File Traceability) - requires investigation | Traceability | @SME-TBD | TBD |
Acceptance Tests
Test: REQ-FILEIMPORT-001
Test: Valid file import
Given: A valid .sds or .ixo file is placed in the toPcrai folder
When: The system detects the file
Then: Import processing begins automatically
And: File is moved to Processing folder during import
Test: Invalid extension rejected
Given: A file with invalid extension (e.g., .jpg, .csv) is placed in toPcrai
When: The system detects the file
Then: File is rejected and moved to error folder
And: Error status is displayed
Test: File size limit
Given: A file larger than 25 MB is placed in toPcrai
When: The system attempts to import
Then: File is rejected with appropriate error message
Test: REQ-FILEIMPORT-002
Test: Data mapping
Given: A run file with thermocycler data
When: File is imported
Then: Database variables are populated from JSON fields
And: Machine CT is extracted from observations[i].UserAnalysis.Cq
Test: REQ-FILEIMPORT-003
Test: Classification and CT
Given: Run data is sent to analyser API
When: Analysis completes
Then: dxai_cls and dxai_ct are populated from analyser output
Test: Calibration URI
Given: Target has S3 URI configured for calibration
When: Analysis is requested
Then: Calibration is loaded from the configured S3 URI
Test: REQ-FILEIMPORT-004
Test: Extraction date validation
Given: Well with extraction date later than file creation date
When: Imported
Then: Well shows INVALID_EXTRACTION_DATE error
Test: Sample label validation
Given: Well with unrecognized sample label
When: Imported
Then: Well shows SAMPLE_LABEL_IS_BAD error
Test: Tissue weight validation - invalid
Given: Well Label import approach
And: well_label contains W:null or W:hahahaha or W:0.0000001 or W:100000001 or W:-1.999999
When: Import the well
Then: Well has INVALID_TISSUE_WEIGHT error code
Test: Tissue weight validation - valid
Given: Well Label import approach
And: well_label contains W:0, W:0.000001, W:0.1, W:1, W:100000000
When: Import the well
Then: Well does not have tissue weight error
Test: Crossover label validation
Given: Well Label import approach
And: Label contains Y:XOVER but missing T, E, D, or R tag
When: Import the well
Then: Well has CROSSOVER_LABEL_ERROR
Test: REQ-FILEIMPORT-005
Test: File name parsing
Given: Configuration has thermocycler with serial_number: 07
And: Target with mix: Mix A
And: Run file name: Mix A.07.C049.466.MLS.eds
When: Import the runfile
Then: Run is created with thermocycler serial_number: 07
And: Wells have mix: Mix A
Test: Virtual sample label
Given: Run file name: Mix A.07.C049.466.MLS.eds
And: Wells with name "Unknown" at A1, A2
And: Well with name "PEC" at A3
When: Import the run file
Then: Well A1 has sample_label: 466-A1
And: Well A2 has sample_label: 466-A2
And: Well A3 has sample_label: PEC
Test: REQ-FILEIMPORT-006
Test: Baseline import
Given: Run file with baseline_start and baseline_end values
When: Import the file
Then: Each observation contains baseline start and baseline end
Test: Auto baseline property - true
Given: Run file with auto_baseline: "1" for Target A
When: Import the runfile
Then: Run target Target A has auto_baseline: true
Test: Auto baseline property - false
Given: Run file with auto_baseline: "0" for Target A
When: Import the runfile
Then: Run target Target A has auto_baseline: false
Test: Auto baseline property - default
Given: Run file without auto_baseline property for Target A
When: Import the runfile
Then: Run target Target A has auto_baseline: true (default)
Test: REQ-FILEIMPORT-007
Test: Crossover in Well Label approach
Given: Client config Well Properties Import Approach: Label
And: Well label: |T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|
When: Import well
Then: The well is considered as a 'crossover' well
Test: Crossover not supported in other approaches
Given: Client config Well Properties Import Approach: Meta Data or Run File Name
And: Control label mappings with role_a
When: Import well with label: role_a
Then: The well is not considered as 'crossover'
And: The well is considered as Role A well
Test: Crossover with accession
Given: Well label: |A:12345|T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|
When: Import well
Then: Well is identified as a Sample well (not Crossover role)
Test: Crossover without accession
Given: Well label without A tag: |T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|
When: Import well
Then: Well is identified as a Crossover well
Test: REQ-FILEIMPORT-008
Test: Wildcard matching - suffix
Given: Configuration target: Target*
And: Run file target: Target A
When: Import
Then: Runfile target is identified as: Target*
Test: Wildcard matching - infix
Given: Configuration target: Target*Name
And: Run file target: Target - Name
When: Import
Then: Runfile target is identified as: Target*Name
Test: REQ-FILEIMPORT-009
Test: Fake cycles affect classification - without prepend
Given: Well with observation target: Target A, readings_count: 50
And: Target A prepend cycles: 0
When: Analysis runs
Then: Analysed observation dxai_cls: Neg
Test: Fake cycles affect classification - with prepend
Given: Well with observation target: Target A, readings_count: 50
And: Target A prepend cycles: 6
When: Analysis runs
Then: Analysed observation dxai_cls: Pos
Test: REQ-FILEIMPORT-010
Test: Folder routing
Given: Standard folder structure is configured
When: File is placed in User/Runs/toPcrai
Then: File is processed
And: Successfully imported files are archived to Archive bucket
And: Failed files are moved to Problem_Files folder
Test: REQ-FILEIMPORT-011
Test: Duplicate file rejected
Given: Run file A has already been imported
When: User attempts to import Run file A again
Then: Import is rejected
And: Error message displayed in Upload Runs Status column
Test: REQ-FILEIMPORT-012
Test: Invalid values displayed
Given: Well with invalid values for:
- accession
- extraction date
- extraction instrument
- tissue_weight
- crossover_role_alias
- batch_number
- testcode_name
- quantity_multiplier
When: Import this well
Then: Invalid values are shown in the UI in their respective places
Test: REQ-FILEIMPORT-013
Test: Original filename preserved
Given: Run file "MyRun.sds" is imported
When: Processing creates intermediate files
Then: Each created file contains "MyRun" in the filename
And: Original filename is available for audit purposes
Related Design Documents
| Design Document | Relevant Sections |
|---|---|
| SDD Algorithms | Run Import Process, Parse Run File, Run Analysis |
Appendix: Process Artifacts
Completion Checklist
- All requirements are capability-level (describe behavior, not UI)
- Requirement variants consolidated (no requirement explosion)
- UI details are fully demoted to Illustrative section
- Configuration options are not encoded as requirements
- Acceptance criteria describe behavior, not UI mechanics
- Every requirement has acceptance criteria and source traceability
- Error handling addressed for I/O, validation, and external system requirements
- Open questions documented with owners assigned
- Consolidations documented in Reviewer Notes with reversibility info
- Module can survive a full UI redesign unchanged
- Refinements folded into acceptance criteria
- Traceability matrix is complete
Reviewer Notes
Source Format Assessment:
The source file used Format A (simple anchor with {#REQ-XXX-NNN} syntax). All 13 functional requirements have been converted.
Consolidation Assessment: No consolidation was performed. The 13 functional requirements represent distinct capabilities in the file import pipeline:
| Requirement | Capability | Assessment |
|---|---|---|
| REQ-FILEIMPORT-001 | Monitored folder import | Distinct - file ingestion trigger |
| REQ-FILEIMPORT-002 | Data parsing | Distinct - data transformation |
| REQ-FILEIMPORT-003 | DXAI analysis | Distinct - external API integration |
| REQ-FILEIMPORT-004 | Well validation | Distinct - validation engine |
| REQ-FILEIMPORT-005 | File name parsing | Distinct - alternative import approach |
| REQ-FILEIMPORT-006 | Observation properties | Distinct - data extraction |
| REQ-FILEIMPORT-007 | Crossover identification | Distinct - specialized well type |
| REQ-FILEIMPORT-008 | Wildcard matching | Distinct - target matching logic |
| REQ-FILEIMPORT-009 | Prepend cycles | Distinct - analysis modification |
| REQ-FILEIMPORT-010 | Folder structure | Distinct - file routing |
| REQ-FILEIMPORT-011 | Duplicate prevention | Distinct - import guard |
| REQ-FILEIMPORT-012 | Invalid data visibility | Distinct - error display |
| REQ-FILEIMPORT-013 | File traceability | Distinct - audit trail |
Rationale: These requirements represent separate system responsibilities in the import pipeline, not variants of a single capability. Each addresses a distinct concern (ingestion, parsing, validation, analysis, identification, etc.).
Non-Functional Requirements: NFR-FILEIMPORT-001 (Performance) and NFR-FILEIMPORT-002 (Reliability) from the source have been noted but not converted to formal requirements as they lacked specific testable criteria. These should be reviewed and converted if quantifiable metrics can be established.
Reversibility: To restore original structure, reference:
- Source: output/srs/fileimport.md
- Confluence: 3.0.0-File Import (API)