STD: File Import (FILEIMPORT)
Version: v1.0.0
Status: Draft
SRS Source: docusaurus/docs/srs/fileimport.md
Domain: FILEIMPORT
Overview
This document specifies tests for the File Import domain, which covers importing, validating, and processing PCR run files from thermocycler instruments via S3 folder monitoring.
Domain Characteristics:
- Primary function: Backend file ingestion and processing
- Secondary function: Data parsing, validation, and transformation
- External integration: DXAI Analyser API, S3 storage
- Data persistence: Well data, observations, traceability records
Test Method Rationale:
Per Test Plan §3.2 and §3.3, Backend Services domains use TM-API as the primary method. FILEIMPORT is a backend pipeline with:
- File I/O operations (testable without UI)
- Data transformation/aggregation (backend logic)
- Pure business logic (validation rules)
- External API integration (DXAI Analyser)
TM-API is appropriate for all requirements. TM-HYB is used only where DXAI integration requires external service coordination.
Test Case Convention:
Steps describe logical actions, not UI mechanics. Use "Import file with extension .sds" or "Validate well with missing accession", not "Upload via S3 console" or "Check error column in grid". This ensures test intent survives implementation changes.
Coverage Summary
| REQ ID | Title | ACs | Tests | AC Coverage | Method | Gaps |
|---|---|---|---|---|---|---|
| REQ-FILEIMPORT-001 | Import Run Files from Monitored Folder | 7 | TC-FILEIMPORT-001, TC-FILEIMPORT-002, TC-FILEIMPORT-003 | 7/7 (100%) | TM-API | None |
| REQ-FILEIMPORT-002 | Parse Thermocycler Data to Database Variables | 6 | TC-FILEIMPORT-004 | 6/6 (100%) | TM-API | None |
| REQ-FILEIMPORT-003 | Analyze Run Data Using DXAI Analyser | 5 | TC-FILEIMPORT-005, TC-FILEIMPORT-006 | 5/5 (100%) | TM-HYB | None |
| REQ-FILEIMPORT-004 | Validate Well Data During Import | 13 | TC-FILEIMPORT-007, TC-FILEIMPORT-008, TC-FILEIMPORT-009, TC-FILEIMPORT-010, TC-FILEIMPORT-VALIDATION | 13/13 (100%) | TM-API | None |
| REQ-FILEIMPORT-005 | Parse Run File Names for Well Properties | 7 | TC-FILEIMPORT-011, TC-FILEIMPORT-012 | 7/7 (100%) | TM-API | None |
| REQ-FILEIMPORT-006 | Import Observation Properties from Run Files | 9 | TC-FILEIMPORT-013, TC-FILEIMPORT-014 | 9/9 (100%) | TM-API | None |
| REQ-FILEIMPORT-007 | Identify Crossover Wells During Import | 4 | TC-FILEIMPORT-015, TC-FILEIMPORT-016 | 4/4 (100%) | TM-API | None |
| REQ-FILEIMPORT-008 | Identify Targets Using Wildcard Matching | 3 | TC-FILEIMPORT-017 | 3/3 (100%) | TM-API | None |
| REQ-FILEIMPORT-009 | Support Prepending Fake Cycles for Analysis | 3 | TC-FILEIMPORT-018 | 3/3 (100%) | TM-API | None |
| REQ-FILEIMPORT-010 | Manage Import Folder Structure | 7 | TC-FILEIMPORT-019, TC-FILEIMPORT-020 | 7/7 (100%) | TM-API | None |
| REQ-FILEIMPORT-011 | Prevent Duplicate File Imports | 3 | TC-FILEIMPORT-021 | 3/3 (100%) | TM-API | None |
| REQ-FILEIMPORT-012 | Persist Invalid Well Data for Visibility | 3 | TC-FILEIMPORT-022 | 3/3 (100%) | TM-API | None |
| REQ-FILEIMPORT-013 | Maintain File Traceability | 3 | TC-FILEIMPORT-023 | 3/3 (100%) | TM-API | None |
Totals: 13 REQs, 73 ACs, 24 Test Cases (23 primary + 1 supplementary), 100% Coverage
Test Cases
TC-FILEIMPORT-001: Automatic import trigger from monitored folder
Verifies: REQ-FILEIMPORT-001 (AC1, AC2)
Method: TM-API
Priority: Critical
Preconditions:
- S3 bucket configured with toPcrai folder
- S3 trigger configured for file arrival events
- At least one mix configured in the system
Test Data:
- Valid .sds file with standard thermocycler data
Steps:
- Place valid .sds file in toPcrai folder
- Wait for S3 trigger event
- Query import job status
Expected Results:
- AC1: Import processing begins automatically when file appears
- AC2: Import triggered via S3 trigger on toPcrai folder
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
Jira: BT-637
TC-FILEIMPORT-002: File extension and size validation
Verifies: REQ-FILEIMPORT-001 (AC3, AC4, AC5, AC6)
Method: TM-API
Priority: High
Preconditions:
- Import service available
Test Data:
- File A: valid.sds (5 MB, valid extension)
- File B: valid.ixo (5 MB, valid extension)
- File C: invalid.csv (5 MB, invalid extension)
- File D: invalid.jpg (5 MB, invalid extension)
- File E: toolarge.sds (30 MB, exceeds limit)
Steps:
- Import File A (.sds)
- Import File B (.ixo)
- Import File C (.csv)
- Import File D (.jpg)
- Import File E (30 MB)
Expected Results:
- AC3: File A accepted (.sds extension valid)
- AC4: File B accepted (.ixo extension valid)
- AC5: File C rejected (invalid extension)
- AC5: File D rejected (invalid extension)
- AC6: File E rejected (exceeds 25 MB limit)
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
Jira: BT-637
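The extension and size rules exercised by this test case can be sketched as follows. This is a minimal illustration only, assuming the 25 MB limit from AC6; the function name `validate_import_file` and the error codes `INVALID_EXTENSION` / `FILE_TOO_LARGE` are hypothetical (the actual rejection statuses are not specified in this document):

```python
from pathlib import Path

VALID_EXTENSIONS = {".sds", ".ixo"}   # AC3/AC4: accepted thermocycler formats
MAX_SIZE_BYTES = 25 * 1024 * 1024     # AC6: 25 MB limit

def validate_import_file(name: str, size_bytes: int) -> list[str]:
    """Return a list of rejection reasons; an empty list means the file is accepted."""
    errors = []
    if Path(name).suffix.lower() not in VALID_EXTENSIONS:
        errors.append("INVALID_EXTENSION")   # AC5: .csv, .jpg, etc. rejected
    if size_bytes > MAX_SIZE_BYTES:
        errors.append("FILE_TOO_LARGE")      # AC6: oversized files rejected
    return errors
```

Files A and B from the test data would pass both checks; Files C–E would each collect exactly one rejection reason.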
TC-FILEIMPORT-003: Invalid file routing to error folder
Verifies: REQ-FILEIMPORT-001 (AC7)
Method: TM-API
Priority: High
Preconditions:
- Folder structure configured
Test Data:
- File with invalid extension
Steps:
- Attempt import of file with invalid extension
- Check file location after rejection
- Verify error status recorded
Expected Results:
- AC7: Invalid file moved to error folder with appropriate status
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
Jira: BT-637
TC-FILEIMPORT-004: Thermocycler data parsing and mapping
Verifies: REQ-FILEIMPORT-002 (AC1, AC2, AC3, AC4, AC5, AC6)
Method: TM-API
Priority: High
Preconditions:
- Valid run file with complete thermocycler data
Test Data:
- Run file with observations containing UserAnalysis.Cq values
- Target labels for instrument identification
Steps:
- Import valid run file
- Query database for parsed variables
- Verify Machine CT extraction path
- Verify JSON format conversion
- Verify extraction instrument determination
Expected Results:
- AC1: Machine CT extracted from observations[i].UserAnalysis.Cq
- AC2: Extraction path observations[i].UserAnalysis.Cq verified (same path as AC1)
- AC3: Thermocycler readings converted to JSON format
- AC4: Plain JSON format used (not BSON)
- AC5: Extraction instrument determined from target label
- AC6: Data sent to parser API for processing
Automation Status: Automated
Jira: BT-103
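The Machine CT extraction path from AC1/AC2 can be sketched against a parsed run structure. The helper name is hypothetical; only the `observations[i].UserAnalysis.Cq` path comes from the requirement:

```python
def extract_machine_ct(parsed_run: dict) -> list:
    """Walk observations[i].UserAnalysis.Cq (AC1/AC2); a missing Cq yields None."""
    cts = []
    for obs in parsed_run.get("observations", []):
        cts.append(obs.get("UserAnalysis", {}).get("Cq"))
    return cts
```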
TC-FILEIMPORT-005: DXAI Analyser classification and CT output
Verifies: REQ-FILEIMPORT-003 (AC1, AC2, AC3, AC4)
Method: TM-HYB
Priority: Critical
Preconditions:
- DXAI Analyser API available
- Calibration file available in S3
- Kit configuration with valid calibration URI
Test Data:
- Run with wells requiring analysis
- Target configured with S3 calibration URI
Steps:
- Import run file with observations
- Verify DXAI API called with correct parameters
- Query analysis results
Expected Results:
- AC1: dxai_cls (classification) populated from analyser output
- AC2: dxai_ct (CT value) populated from analyser output
- AC3: Calibration specified via S3 URI from kit configuration
- AC4: Readings divided by passive dye when target requires it
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
Jira: BT-103
TC-FILEIMPORT-006: DXAI Analyser error handling
Verifies: REQ-FILEIMPORT-003 (AC5)
Method: TM-HYB
Priority: High
Preconditions:
- DXAI Analyser API can be simulated unavailable
- Invalid calibration URI can be configured
Test Data:
- Run file requiring analysis
- Invalid S3 calibration URI
Steps:
- Configure mock DXAI API as unavailable
- Attempt import
- Verify import failure with appropriate message
- Configure invalid calibration URI
- Attempt import
- Verify import failure with calibration error
Expected Results:
- AC5: Import fails when DXAI API unavailable with service error message
- AC5: Import fails when calibration URI invalid with calibration error message
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
Jira: BT-103
TC-FILEIMPORT-007: Well validation - date and label
Verifies: REQ-FILEIMPORT-004 (AC1, AC2)
Method: TM-API
Priority: High
Preconditions:
- Import service available
Test Data:
- Well A: extraction_date = 2025-01-15, file_creation_date = 2025-01-10
- Well B: unrecognized sample_label = "INVALID_LABEL_XYZ"
Steps:
- Import file containing Well A
- Import file containing Well B
- Query well error codes
Expected Results:
- AC1: Well A has INVALID_EXTRACTION_DATE error (date later than file creation)
- AC2: Well B has SAMPLE_LABEL_IS_BAD error
Automation Status: Automated (BT-9770: 2 scenarios testing INVALID_EXTRACTION_DATE and SAMPLE_LABEL_IS_BAD)
Jira: BT-4153
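The AC1 date rule — an extraction date cannot postdate the file that records it — reduces to a single comparison. The function name is illustrative; the error code INVALID_EXTRACTION_DATE is taken from the expected results above:

```python
from datetime import date

def check_extraction_date(extraction_date: date, file_created: date) -> list[str]:
    """AC1: flag wells whose extraction date is later than the file creation date."""
    if extraction_date > file_created:
        return ["INVALID_EXTRACTION_DATE"]
    return []
```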
TC-FILEIMPORT-008: Well validation - required patient fields
Verifies: REQ-FILEIMPORT-004 (AC3, AC4, AC5)
Method: TM-API
Priority: High
Preconditions:
- Patient sample well type configured
Test Data:
- Well A: patient sample missing accession
- Well B: patient sample missing batch
- Well C: patient sample missing specimen
- Well D: patient sample with all required fields
Steps:
- Import file with Wells A, B, C, D
- Query error codes for each well
Expected Results:
- AC3: Well A has ACCESSION_MISSING error
- AC3: Well B has EXTRACTION_BATCH_MISSING error
- AC3: Well C has SPECIMEN_MISSING error
- AC4: Test code validation applied per client configuration
- AC5: Well D passes validation (all fields present)
Automation Status: Automated (BT-9770: 3 scenarios testing ACCESSION_MISSING, EXTRACTION_BATCH_MISSING, valid patient)
Jira: BT-4153
TC-FILEIMPORT-009: Well validation - configuration matching
Verifies: REQ-FILEIMPORT-004 (AC6, AC7, AC8, AC9)
Method: TM-API
Priority: High
Preconditions:
- Mix, thermocycler, and extraction instrument configured
Test Data:
- Well A: mix = "UNKNOWN_MIX"
- Well B: thermocycler serial = "UNKNOWN_TC"
- Well C: patient sample missing extraction instrument
- Well D: passive dye readings containing zero
Steps:
- Import wells with configuration mismatches
- Query error codes
Expected Results:
- AC6: Well A has MIX_MISSING error
- AC7: Well B has THERMOCYCLER_UNKNOWN error
- AC8: Well C has EXTRACTION_INSTRUMENT_MISSING error
- AC9: Well D has INVALID_PASSIVE_READINGS error
Automation Status: Automated
Jira: BT-4153
TC-FILEIMPORT-010: Well validation - tissue weight and crossover
Verifies: REQ-FILEIMPORT-004 (AC10, AC11, AC12, AC13)
Method: TM-API
Priority: Medium
Preconditions:
- Well Label import approach configured
Test Data:
- Well A: tissue_weight = null
- Well B: tissue_weight = "hahahaha" (invalid format)
- Well C: tissue_weight = 0.0000001 (exceeds decimal precision)
- Well D: tissue_weight = 100000001 (exceeds max range)
- Well E: tissue_weight = -1.999999 (negative value)
- Well F: tissue_weight = 0.000001 (valid)
- Well G: tissue_weight = 100000000 (valid max)
- Well H: Y:XOVER label missing required T tag
- Well I: Y:XOVER label with all required tags (D, E, T, R)
Steps:
- Import wells with various tissue weight values
- Import wells with crossover labels
- Query error codes
Expected Results:
- AC10: Well A has INVALID_TISSUE_WEIGHT error
- AC10: Well B has INVALID_TISSUE_WEIGHT error
- AC10: Well C has INVALID_TISSUE_WEIGHT error
- AC10: Well D has INVALID_TISSUE_WEIGHT error
- AC10: Well E has INVALID_TISSUE_WEIGHT error
- AC11: Well F passes validation (valid format and range)
- AC11: Well G passes validation (valid max value)
- AC12: Well H has CROSSOVER_LABEL_ERROR (missing required tags)
- AC13: Well I passes validation (all required tags present)
Automation Status: Automated
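The tissue-weight boundaries implied by the test data above (valid minimum 0.000001, valid maximum 100000000, six decimal places of precision, no null/non-numeric/negative values) can be sketched as one check. The treatment of exactly zero as invalid is an assumption inferred from the negative-value case, not stated in the requirement:

```python
from decimal import Decimal, InvalidOperation

MAX_TISSUE_WEIGHT = Decimal("100000000")  # valid max per AC11 test data (Well G)
MAX_DECIMALS = 6                          # 0.000001 valid, 0.0000001 rejected

def validate_tissue_weight(raw) -> list[str]:
    """Return ["INVALID_TISSUE_WEIGHT"] when the value fails the AC10 checks."""
    if raw is None:
        return ["INVALID_TISSUE_WEIGHT"]          # Well A: null
    try:
        value = Decimal(str(raw))
    except InvalidOperation:
        return ["INVALID_TISSUE_WEIGHT"]          # Well B: non-numeric text
    exponent = value.as_tuple().exponent
    if isinstance(exponent, int) and -exponent > MAX_DECIMALS:
        return ["INVALID_TISSUE_WEIGHT"]          # Well C: too many decimal places
    if value <= 0 or value > MAX_TISSUE_WEIGHT:
        return ["INVALID_TISSUE_WEIGHT"]          # Wells D/E: out of range
    return []
```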
TC-FILEIMPORT-011: Run file name parsing
Verifies: REQ-FILEIMPORT-005 (AC1, AC2, AC3, AC4, AC5)
Method: TM-API
Priority: High
Preconditions:
- Run File Name import approach configured
- Thermocycler with serial_number "07" configured
- Mix "Mix A" configured
Test Data:
- File name: "Mix A.07.C049.466.MLS.eds"
Steps:
- Import file with structured name
- Query parsed run properties
Expected Results:
- AC1: File name parsed following convention: mix_name.thermocycler_serial_number.protocol.batch.analyst_initials.file_extension
- AC2: Mix name extracted as "Mix A"
- AC3: Thermocycler serial number extracted as "07"
- AC4: Batch number extracted as "466"
- AC5: Thermocycler matched to configuration by serial number
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
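The file-name convention from AC1 can be sketched as a positional split. This assumes mix names contain no periods (true of the test data "Mix A"); the dataclass and parser names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class RunFileName:
    mix_name: str
    thermocycler_serial: str
    protocol: str
    batch: str
    analyst_initials: str
    extension: str

def parse_run_file_name(file_name: str) -> RunFileName:
    """AC1: mix_name.thermocycler_serial_number.protocol.batch.analyst_initials.file_extension"""
    parts = file_name.split(".")
    if len(parts) != 6:
        raise ValueError(f"unexpected run file name: {file_name}")
    return RunFileName(*parts)
```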
TC-FILEIMPORT-012: Virtual sample label generation
Verifies: REQ-FILEIMPORT-005 (AC6, AC7)
Method: TM-API
Priority: Medium
Preconditions:
- Run File Name import approach
- Batch number extractable from file name
Test Data:
- File: "Mix A.07.C049.466.MLS.eds"
- Well A1: name = "Unknown"
- Well A2: name = "Unknown"
- Well A3: name = "PEC"
Steps:
- Import file with wells named "Unknown"
- Query sample labels
Expected Results:
- AC6: Well A1 sample_label = "466-A1" (virtual label generated)
- AC6: Well A2 sample_label = "466-A2" (virtual label generated)
- AC7: Virtual labels follow convention: batch-well_number
- Well A3 sample_label = "PEC" (non-Unknown wells retain original)
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
Jira: BT-3572
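The virtual-label rule from AC6/AC7 — wells named "Unknown" receive a batch-well_number label, all others keep their original name — can be sketched in one line. The function name is illustrative:

```python
def assign_sample_label(well_name: str, well_position: str, batch: str) -> str:
    """AC6/AC7: generate a virtual label (batch-well_number) only for "Unknown" wells."""
    if well_name == "Unknown":
        return f"{batch}-{well_position}"
    return well_name  # e.g. control wells such as "PEC" retain their original name
```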
TC-FILEIMPORT-013: Baseline property import
Verifies: REQ-FILEIMPORT-006 (AC1, AC2)
Method: TM-API
Priority: High
Preconditions:
- Run file contains baseline values
Test Data:
- Run file with baseline_start = 3, baseline_end = 15 at standard path
Steps:
- Import run file with baseline values
- Query observation properties
Expected Results:
- AC1: Baseline start and baseline end values imported when available
- AC2: Values extracted from paths: wells[well].channels[channel].embedded.result.custom.baseline_start/baseline_end OR baseline_from/baseline_to
Automation Status: Automated
TC-FILEIMPORT-014: Auto-baseline and other observation properties
Verifies: REQ-FILEIMPORT-006 (AC3, AC4, AC5, AC6, AC7, AC8, AC9)
Method: TM-API
Priority: High
Preconditions:
- Well Label or Meta Data import approach configured
Test Data:
- Target A: auto_baseline = "1"
- Target B: auto_baseline = "0"
- Target C: auto_baseline property missing
- Run file with concentration factor in notes
- Run file with quantity in channels
Steps:
- Import run file with various auto_baseline settings
- Import run file with concentration factor and quantity
- Query observation properties
Expected Results:
- AC3: auto_baseline property read and persisted
- AC4: auto_baseline defaults to true when property missing
- AC5: auto_baseline read from path: runs[0]->mixes[each]->channels[each]->custom->auto_baseline
- AC6: auto_baseline "1" = true (automatic), "0" = false (manual)
- AC7: Concentration factor extracted from run file notes
- AC8: Concentration factor matching is case-insensitive
- AC9: Quantity extracted from path: runs -> wells -> channels -> volume
Automation Status: Automated
Jira: BT-3899, BT-4074, BT-4093
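The auto_baseline semantics from AC4–AC6 reduce to a lookup with a default. The argument here stands in for the `custom` node at the AC5 path; the function name is illustrative:

```python
def read_auto_baseline(channel_custom: dict) -> bool:
    """AC4/AC6: "1" means automatic (True), "0" means manual (False);
    a missing property defaults to automatic (True)."""
    return channel_custom.get("auto_baseline", "1") == "1"
```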
TC-FILEIMPORT-015: Crossover well identification - Well Label approach
Verifies: REQ-FILEIMPORT-007 (AC1, AC2)
Method: TM-API
Priority: High
Preconditions:
- Well Label import approach configured
Test Data:
- Well A: label = "|T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|"
- Well B: label = "PEC" (control label, non-crossover)
Steps:
- Import well with Y:XOVER tag
- Import well without Y:XOVER tag
- Query well crossover status
Expected Results:
- AC1: Well A identified as crossover well (Y:XOVER tag recognized)
- AC2: Well A has valid crossover label (contains D, E, T, R tags)
- Well B not identified as crossover
Automation Status: Automated
Jira: BT-4312
TC-FILEIMPORT-016: Crossover well role determination
Verifies: REQ-FILEIMPORT-007 (AC3, AC4)
Method: TM-API
Priority: Medium
Preconditions:
- Well Label import approach configured
Test Data:
- Well A: label = "|A:12345|T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|" (with accession)
- Well B: label = "|T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|" (without accession)
Steps:
- Import well with accession in crossover label
- Import well without accession in crossover label
- Query well roles
Expected Results:
- AC3: Well A identified as Sample well (has accession A tag)
- AC4: Well B identified as Crossover role (no accession A tag)
Automation Status: Automated
Jira: BT-4312
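The crossover logic exercised by TC-015 and TC-016 can be sketched as a tag parse followed by two decisions: required tags (D, E, T, R) and role by presence of the accession tag A. The function names and return shape are illustrative; the error code CROSSOVER_LABEL_ERROR is taken from TC-010's expected results:

```python
REQUIRED_XOVER_TAGS = {"D", "E", "T", "R"}  # tags a crossover label must carry (AC2)

def parse_well_label(label: str) -> dict:
    """Split a pipe-delimited label like |T:HDV|Y:XOVER| into a tag map."""
    tags = {}
    for part in label.strip("|").split("|"):
        if ":" in part:
            key, value = part.split(":", 1)
            tags[key] = value
    return tags

def classify_crossover(label: str):
    """Return (is_crossover, errors, role) for a well label."""
    tags = parse_well_label(label)
    if tags.get("Y") != "XOVER":
        return False, [], None                        # AC1: no Y:XOVER tag
    if REQUIRED_XOVER_TAGS - tags.keys():
        return True, ["CROSSOVER_LABEL_ERROR"], None  # AC2: required tag missing
    role = "Sample" if "A" in tags else "Crossover"   # AC3/AC4: accession decides role
    return True, [], role
```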
TC-FILEIMPORT-017: Wildcard target matching
Verifies: REQ-FILEIMPORT-008 (AC1, AC2, AC3)
Method: TM-API
Priority: Medium
Preconditions:
- Configuration with wildcard target names
Test Data:
- Config target A: "Target*"
- Config target B: "Target*Name"
- Run file target 1: "Target A"
- Run file target 2: "Target - Name"
Steps:
- Import run file with targets requiring wildcard matching
- Query matched target names
Expected Results:
- AC1: Asterisk (*) recognized as wildcard character
- AC2: "Target A" matches "Target*" (wildcard at end)
- AC2: "Target - Name" matches "Target*Name" (wildcard in middle)
- AC3: Matched targets use configured name (e.g., "Target*" not "Target A")
Automation Status: Automated
Jira: BT-5283
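The wildcard matching from AC1–AC3 can be sketched by translating `*` (and only `*`) into a regex. First-match precedence among configured targets is an assumption — the requirement does not state how overlapping patterns are resolved:

```python
import re

def match_target(run_file_target: str, configured_targets: list[str]):
    """Return the configured name whose pattern matches (AC3), or None.
    Only '*' is a wildcard (AC1); all other characters match literally."""
    for configured in configured_targets:
        # escape literal pieces, rejoin with ".*" for each asterisk
        pattern = ".*".join(re.escape(part) for part in configured.split("*"))
        if re.fullmatch(pattern, run_file_target):
            return configured
    return None
```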
TC-FILEIMPORT-018: Prepend fake cycles for analysis
Verifies: REQ-FILEIMPORT-009 (AC1, AC2, AC3)
Method: TM-API
Priority: Medium
Preconditions:
- Target configured with prepend_cycles value
Test Data:
- Target A: prepend_cycles = 0, readings_count = 50
- Target A: prepend_cycles = 6, readings_count = 50
Steps:
- Import well with prepend_cycles = 0
- Verify DXAI analysis request readings count
- Import well with prepend_cycles = 6
- Verify DXAI analysis request readings count
Expected Results:
- AC1: Configured number of fake cycles prepended before actual readings
- AC2: Prepended cycles included in DXAI analysis request (50 vs 56 readings)
- AC3: Prepending cycles affects classification results (Neg vs Pos in test data)
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
Jira: BT-4829
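The prepend behaviour from AC1/AC2 (50 readings becoming 56 with prepend_cycles = 6) can be sketched as below. Repeating the first real reading as the synthetic fill value is an assumption — this document does not specify what the fake cycles contain:

```python
def prepend_fake_cycles(readings: list[float], prepend_cycles: int) -> list[float]:
    """AC1: prepend the configured number of synthetic cycles before real readings.
    Fill value (first real reading) is an assumption, not specified here."""
    if prepend_cycles <= 0 or not readings:
        return list(readings)
    return [readings[0]] * prepend_cycles + list(readings)
```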
TC-FILEIMPORT-019: Folder structure - processing workflow
Verifies: REQ-FILEIMPORT-010 (AC1, AC2, AC3, AC4)
Method: TM-API
Priority: High
Preconditions:
- S3 bucket with standard folder structure
Test Data:
- Valid run file for successful import
- Invalid run file for failed import
Steps:
- Place file in toPcrai folder
- Observe file during processing
- Complete successful import
- Observe final file location
- Repeat with invalid file
- Observe error file location
Expected Results:
- AC1: Folder structure maintained: toPcrai (upload), Processing (in-progress), Problem_Files (errors), LIMS_Reports (exports)
- AC2: Files processed from toPcrai upload folder
- AC3: Files moved to Processing folder during import
- AC4: Successfully imported files archived to separate storage
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
TC-FILEIMPORT-020: Folder structure - error and archive handling
Verifies: REQ-FILEIMPORT-010 (AC5, AC6, AC7)
Method: TM-API
Priority: Medium
Preconditions:
- Archive S3 bucket configured
- LIMS export functionality available
Test Data:
- File that fails import validation
- Successful import generating LIMS export
Steps:
- Import file that fails validation
- Verify file moved to Problem_Files
- Verify archive in separate S3 bucket (not client-accessible)
- Generate LIMS export
- Verify export in LIMS_Reports folder
Expected Results:
- AC5: Failed imports moved to Problem_Files folder
- AC6: LIMS exports stored in designated reports folder
- AC7: Archive files stored in separate S3 bucket not accessible to clients
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
Jira: BT-636
TC-FILEIMPORT-021: Duplicate file import prevention
Verifies: REQ-FILEIMPORT-011 (AC1, AC2, AC3)
Method: TM-API
Priority: High
Preconditions:
- File successfully imported previously
Test Data:
- File A: previously imported
- File B: new file (not imported)
Steps:
- Attempt to import File A again
- Verify rejection
- Import File B
- Verify File B succeeds
Expected Results:
- AC1: File A import rejected (duplicate detected)
- AC2: Duplicate error indication provided
- AC3: File B imports successfully (duplicate detection doesn't block different files)
Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)
Jira: BT-1322
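One way to realise AC1–AC3 is a registry keyed on the file's content hash. This is purely a sketch: the SRS does not state whether duplicates are detected by name, content hash, or both, so the keying strategy and the DUPLICATE_FILE code are assumptions:

```python
import hashlib

class DuplicateFileGuard:
    """Sketch of duplicate detection (AC1/AC2); hash-based keying is an assumption."""

    def __init__(self):
        self._seen: set[str] = set()

    def check_and_register(self, content: bytes) -> list[str]:
        digest = hashlib.sha256(content).hexdigest()
        if digest in self._seen:
            return ["DUPLICATE_FILE"]   # AC1/AC2: reject with duplicate indication
        self._seen.add(digest)          # AC3: different files remain importable
        return []
```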
TC-FILEIMPORT-022: Invalid well data persistence
Verifies: REQ-FILEIMPORT-012 (AC1, AC2, AC3)
Method: TM-API
Priority: Medium
Preconditions:
- Import service available
Test Data:
- Well with invalid values:
- accession = "INVALID_ACC_123!"
- extraction_date = "2099-01-01"
- tissue_weight = "not_a_number"
- batch_number = "INVALID_BATCH"
Steps:
- Import well with invalid values
- Query well data from database
- Verify invalid values are retrievable
Expected Results:
- AC1: Invalid values stored even when they fail validation
- AC2: Invalid values retrievable for display (query returns original values)
- AC3: User can identify what needs correction (error codes paired with original values)
Automation Status: Automated (BT-9771: 2 scenarios testing error well data persistence with original values retrievable)
Jira: BT-5525
TC-FILEIMPORT-023: File traceability preservation
Verifies: REQ-FILEIMPORT-013 (AC1, AC2, AC3)
Method: TM-API
Priority: Medium
Preconditions:
- Import service available
Test Data:
- File: "MyRun_20250115.sds"
- User: test_user@example.com
Steps:
- Import file as authenticated user
- Query run record
- Verify original file name preserved
- Verify user information preserved
Expected Results:
- AC1: Processed files contain original file name "MyRun_20250115.sds"
- AC2: Original file name preserved for audit purposes
- AC3: User information preserved along with file information
Automation Status: Automated (BT-9771: 1 scenario testing run_name preservation and run status queryability)
Jira: [Pending - requires investigation]
Supplementary Gap-Fill Tests
| TC | Description | Covers |
|---|---|---|
| TC-FILEIMPORT-VALIDATION | Import validation: missing test code tag (C) prevents rule execution | REQ-FILEIMPORT-004: Well validation - blocking error on missing tag |
Gap Analysis
No gaps identified. All 73 acceptance criteria have test coverage; 11 test cases are verified manually via the S3/DXAI infrastructure pipeline, with the remainder automated via Behat API.
Coverage by AC Type
| AC Category | Count | Covered | Notes |
|---|---|---|---|
| Trigger/Automation | 2 | 2 | S3 trigger behavior — manual pipeline test |
| File Validation | 5 | 5 | Extension, size limits — manual pipeline test |
| Data Parsing | 15 | 15 | JSON extraction, property mapping — Behat API |
| Well Validation | 13 | 13 | Error codes for all validation failures — Behat API |
| External Integration | 6 | 6 | DXAI API, calibration — manual pipeline test |
| Folder Management | 7 | 7 | S3 folder routing — manual pipeline test |
| Error Handling | 12 | 12 | All error codes and persistence — Behat API |
| Traceability | 3 | 3 | File name and user preservation — Behat API |
| Special Features | 10 | 10 | Crossover, wildcard — Behat API; prepend_cycles — manual pipeline test |
Traceability to Existing Tests
| Test Case | Jira Test | Automation |
|---|---|---|
| TC-FILEIMPORT-001, TC-FILEIMPORT-002, TC-FILEIMPORT-003 | BT-637 | Manual (S3 pipeline) |
| TC-FILEIMPORT-004 | BT-103 | Behat API |
| TC-FILEIMPORT-005, TC-FILEIMPORT-006 | BT-103 | Manual (DXAI pipeline) |
| TC-FILEIMPORT-007, TC-FILEIMPORT-008, TC-FILEIMPORT-009, TC-FILEIMPORT-010 | BT-4153, BT-4312, BT-4008 | Behat API |
| TC-FILEIMPORT-011, TC-FILEIMPORT-012 | BT-3572, BT-3625 | Manual (S3 pipeline) |
| TC-FILEIMPORT-013, TC-FILEIMPORT-014 | BT-3601, BT-3899, BT-4074, BT-3952, BT-4093 | Behat API |
| TC-FILEIMPORT-015, TC-FILEIMPORT-016 | BT-4312 | Behat API |
| TC-FILEIMPORT-017 | BT-5283 | Behat API |
| TC-FILEIMPORT-018 | BT-4829 | Manual (DXAI pipeline) |
| TC-FILEIMPORT-019, TC-FILEIMPORT-020 | BT-637, BT-636 | Manual (S3 pipeline) |
| TC-FILEIMPORT-021 | BT-1322 | Manual (S3 pipeline) |
| TC-FILEIMPORT-022 | BT-5525 | Behat API |
| TC-FILEIMPORT-023 | Pending | Behat API |
Notes
- FILEIMPORT is a backend-focused domain with no direct user interaction
- All requirements are testable at the API/service boundary
- TM-HYB used only for DXAI integration tests requiring external service coordination
- REQ-FILEIMPORT-013 (File Traceability) missing Jira reference - flagged for investigation
- Existing Gherkin tests in SRS align with test cases defined here
- Validation error codes are comprehensively tested across TC-007 through TC-010