Version: 3.0.1

STD: File Import (FILEIMPORT)

Version: v1.0.0 Status: Draft SRS Source: docusaurus/docs/srs/fileimport.md Domain: FILEIMPORT


Overview

This document specifies tests for the File Import domain, which covers importing, validating, and processing PCR run files from thermocycler instruments via S3 folder monitoring.

Domain Characteristics:

  • Primary function: Backend file ingestion and processing
  • Secondary function: Data parsing, validation, and transformation
  • External integration: DXAI Analyser API, S3 storage
  • Data persistence: Well data, observations, traceability records

Test Method Rationale:

Per Test Plan §3.2 and §3.3, Backend Services domains use TM-API as the primary method. FILEIMPORT is a backend pipeline with:

  • File I/O operations (testable without UI)
  • Data transformation/aggregation (backend logic)
  • Pure business logic (validation rules)
  • External API integration (DXAI Analyser)

TM-API is appropriate for all requirements. TM-HYB is used only where DXAI integration requires external service coordination.

Test Case Convention:

Steps describe logical actions, not UI mechanics. Use "Import file with extension .sds" or "Validate well with missing accession", not "Upload via S3 console" or "Check error column in grid". This ensures test intent survives implementation changes.


Coverage Summary

| REQ ID | Title | ACs | Tests | AC Coverage | Method | Gaps |
| --- | --- | --- | --- | --- | --- | --- |
| REQ-FILEIMPORT-001 | Import Run Files from Monitored Folder | 7 | TC-FILEIMPORT-001, TC-FILEIMPORT-002, TC-FILEIMPORT-003 | 7/7 (100%) | TM-API | None |
| REQ-FILEIMPORT-002 | Parse Thermocycler Data to Database Variables | 6 | TC-FILEIMPORT-004 | 6/6 (100%) | TM-API | None |
| REQ-FILEIMPORT-003 | Analyze Run Data Using DXAI Analyser | 5 | TC-FILEIMPORT-005, TC-FILEIMPORT-006 | 5/5 (100%) | TM-HYB | None |
| REQ-FILEIMPORT-004 | Validate Well Data During Import | 13 | TC-FILEIMPORT-007, TC-FILEIMPORT-008, TC-FILEIMPORT-009, TC-FILEIMPORT-010, TC-FILEIMPORT-VALIDATION | 13/13 (100%) | TM-API | None |
| REQ-FILEIMPORT-005 | Parse Run File Names for Well Properties | 7 | TC-FILEIMPORT-011, TC-FILEIMPORT-012 | 7/7 (100%) | TM-API | None |
| REQ-FILEIMPORT-006 | Import Observation Properties from Run Files | 9 | TC-FILEIMPORT-013, TC-FILEIMPORT-014 | 9/9 (100%) | TM-API | None |
| REQ-FILEIMPORT-007 | Identify Crossover Wells During Import | 4 | TC-FILEIMPORT-015, TC-FILEIMPORT-016 | 4/4 (100%) | TM-API | None |
| REQ-FILEIMPORT-008 | Identify Targets Using Wildcard Matching | 3 | TC-FILEIMPORT-017 | 3/3 (100%) | TM-API | None |
| REQ-FILEIMPORT-009 | Support Prepending Fake Cycles for Analysis | 3 | TC-FILEIMPORT-018 | 3/3 (100%) | TM-API | None |
| REQ-FILEIMPORT-010 | Manage Import Folder Structure | 7 | TC-FILEIMPORT-019, TC-FILEIMPORT-020 | 7/7 (100%) | TM-API | None |
| REQ-FILEIMPORT-011 | Prevent Duplicate File Imports | 3 | TC-FILEIMPORT-021 | 3/3 (100%) | TM-API | None |
| REQ-FILEIMPORT-012 | Persist Invalid Well Data for Visibility | 3 | TC-FILEIMPORT-022 | 3/3 (100%) | TM-API | None |
| REQ-FILEIMPORT-013 | Maintain File Traceability | 3 | TC-FILEIMPORT-023 | 3/3 (100%) | TM-API | None |

Totals: 13 REQs, 73 ACs, 23 Test Cases (plus 1 supplementary gap-fill test), 100% Coverage


Test Cases

TC-FILEIMPORT-001: Automatic import trigger from monitored folder

Verifies: REQ-FILEIMPORT-001 (AC1, AC2)

Method: TM-API

Priority: Critical

Preconditions:

  • S3 bucket configured with toPcrai folder
  • S3 trigger configured for file arrival events
  • At least one mix configured in the system

Test Data:

  • Valid .sds file with standard thermocycler data

Steps:

  1. Place valid .sds file in toPcrai folder
  2. Wait for S3 trigger event
  3. Query import job status

Expected Results:

  • AC1: Import processing begins automatically when file appears
  • AC2: Import triggered via S3 trigger on toPcrai folder

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-637


TC-FILEIMPORT-002: File extension and size validation

Verifies: REQ-FILEIMPORT-001 (AC3, AC4, AC5, AC6)

Method: TM-API

Priority: High

Preconditions:

  • Import service available

Test Data:

  • File A: valid.sds (5 MB, valid extension)
  • File B: valid.ixo (5 MB, valid extension)
  • File C: invalid.csv (5 MB, invalid extension)
  • File D: invalid.jpg (5 MB, invalid extension)
  • File E: toolarge.sds (30 MB, exceeds limit)

Steps:

  1. Import File A (.sds)
  2. Import File B (.ixo)
  3. Import File C (.csv)
  4. Import File D (.jpg)
  5. Import File E (30 MB)

Expected Results:

  • AC3: File A accepted (.sds extension valid)
  • AC4: File B accepted (.ixo extension valid)
  • AC5: File C rejected (invalid extension)
  • AC5: File D rejected (invalid extension)
  • AC6: File E rejected (exceeds 25 MB limit)

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-637
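
The extension and size rules above can be sketched as follows. This is an illustrative model only; `validate_import_file`, `VALID_EXTENSIONS`, and `MAX_SIZE_BYTES` are hypothetical names, not the actual service API.

```python
# Sketch of AC3-AC6: accept .sds/.ixo files up to 25 MB.
VALID_EXTENSIONS = {".sds", ".ixo"}
MAX_SIZE_BYTES = 25 * 1024 * 1024  # AC6: 25 MB limit

def validate_import_file(file_name: str, size_bytes: int) -> bool:
    """Return True when the file passes extension and size checks."""
    dot = file_name.rfind(".")
    extension = file_name[dot:].lower() if dot != -1 else ""
    return extension in VALID_EXTENSIONS and size_bytes <= MAX_SIZE_BYTES
```

With the test data above, Files A and B pass while Files C, D, and E are rejected.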


TC-FILEIMPORT-003: Invalid file routing to error folder

Verifies: REQ-FILEIMPORT-001 (AC7)

Method: TM-API

Priority: High

Preconditions:

  • Folder structure configured

Test Data:

  • File with invalid extension

Steps:

  1. Attempt import of file with invalid extension
  2. Check file location after rejection
  3. Verify error status recorded

Expected Results:

  • AC7: Invalid file moved to error folder with appropriate status

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-637


TC-FILEIMPORT-004: Thermocycler data parsing and mapping

Verifies: REQ-FILEIMPORT-002 (AC1, AC2, AC3, AC4, AC5, AC6)

Method: TM-API

Priority: High

Preconditions:

  • Valid run file with complete thermocycler data

Test Data:

  • Run file with observations containing UserAnalysis.Cq values
  • Target labels for instrument identification

Steps:

  1. Import valid run file
  2. Query database for parsed variables
  3. Verify Machine CT extraction path
  4. Verify JSON format conversion
  5. Verify extraction instrument determination

Expected Results:

  • AC1: Machine CT extracted from observations[i].UserAnalysis.Cq
  • AC2: Same as AC1 (path verification)
  • AC3: Thermocycler readings converted to JSON format
  • AC4: Plain JSON format used (not BSON)
  • AC5: Extraction instrument determined from target label
  • AC6: Data sent to parser API for processing

Automation Status: Automated

Jira: BT-103
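
The Machine CT extraction path in AC1/AC2 can be sketched as a simple traversal. `machine_ct_values` is a hypothetical helper, not the parser API; only the `observations[i].UserAnalysis.Cq` path comes from the requirement.

```python
def machine_ct_values(run: dict) -> list:
    """Sketch of AC1/AC2: read Machine CT from observations[i].UserAnalysis.Cq."""
    return [obs["UserAnalysis"]["Cq"] for obs in run.get("observations", [])]
```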


TC-FILEIMPORT-005: DXAI Analyser classification and CT output

Verifies: REQ-FILEIMPORT-003 (AC1, AC2, AC3, AC4)

Method: TM-HYB

Priority: Critical

Preconditions:

  • DXAI Analyser API available
  • Calibration file available in S3
  • Kit configuration with valid calibration URI

Test Data:

  • Run with wells requiring analysis
  • Target configured with S3 calibration URI

Steps:

  1. Import run file with observations
  2. Verify DXAI API called with correct parameters
  3. Query analysis results

Expected Results:

  • AC1: dxai_cls (classification) populated from analyser output
  • AC2: dxai_ct (CT value) populated from analyser output
  • AC3: Calibration specified via S3 URI from kit configuration
  • AC4: Readings divided by passive dye when target requires it

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-103


TC-FILEIMPORT-006: DXAI Analyser error handling

Verifies: REQ-FILEIMPORT-003 (AC5)

Method: TM-HYB

Priority: High

Preconditions:

  • DXAI Analyser API can be simulated unavailable
  • Invalid calibration URI can be configured

Test Data:

  • Run file requiring analysis
  • Invalid S3 calibration URI

Steps:

  1. Configure mock DXAI API as unavailable
  2. Attempt import
  3. Verify import failure with appropriate message
  4. Configure invalid calibration URI
  5. Attempt import
  6. Verify import failure with calibration error

Expected Results:

  • AC5: Import fails when DXAI API unavailable with service error message
  • AC5: Import fails when calibration URI invalid with calibration error message

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-103


TC-FILEIMPORT-007: Well validation - date and label

Verifies: REQ-FILEIMPORT-004 (AC1, AC2)

Method: TM-API

Priority: High

Preconditions:

  • Import service available

Test Data:

  • Well A: extraction_date = 2025-01-15, file_creation_date = 2025-01-10
  • Well B: unrecognized sample_label = "INVALID_LABEL_XYZ"

Steps:

  1. Import file containing Well A
  2. Import file containing Well B
  3. Query well error codes

Expected Results:

  • AC1: Well A has INVALID_EXTRACTION_DATE error (date later than file creation)
  • AC2: Well B has SAMPLE_LABEL_IS_BAD error

Automation Status: Automated (BT-9770: 2 scenarios testing INVALID_EXTRACTION_DATE and SAMPLE_LABEL_IS_BAD)

Jira: BT-4153
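
The AC1 date rule can be sketched as below; `extraction_date_error` is an illustrative name, and only the error code and the later-than-creation condition come from the expected results.

```python
from datetime import date

def extraction_date_error(extraction_date: date, file_creation_date: date):
    """Sketch of AC1: extraction date later than file creation date is invalid."""
    if extraction_date > file_creation_date:
        return "INVALID_EXTRACTION_DATE"
    return None
```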


TC-FILEIMPORT-008: Well validation - required patient fields

Verifies: REQ-FILEIMPORT-004 (AC3, AC4, AC5)

Method: TM-API

Priority: High

Preconditions:

  • Patient sample well type configured

Test Data:

  • Well A: patient sample missing accession
  • Well B: patient sample missing batch
  • Well C: patient sample missing specimen
  • Well D: patient sample with all required fields

Steps:

  1. Import file with Wells A, B, C, D
  2. Query error codes for each well

Expected Results:

  • AC3: Well A has ACCESSION_MISSING error
  • AC3: Well B has EXTRACTION_BATCH_MISSING error
  • AC3: Well C has SPECIMEN_MISSING error
  • AC4: Test code validation applied per client configuration
  • AC5: Well D passes validation (all fields present)

Automation Status: Automated (BT-9770: 3 scenarios testing ACCESSION_MISSING, EXTRACTION_BATCH_MISSING, valid patient)

Jira: BT-4153
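
The AC3 field checks can be sketched as a field-to-error-code mapping. The mapping is taken from the expected results; `patient_well_errors` and the well dictionary shape are hypothetical.

```python
# Sketch of AC3: each missing required patient field yields its error code.
REQUIRED_PATIENT_FIELDS = {
    "accession": "ACCESSION_MISSING",
    "batch": "EXTRACTION_BATCH_MISSING",
    "specimen": "SPECIMEN_MISSING",
}

def patient_well_errors(well: dict) -> list:
    """Return the error codes for any missing required patient fields."""
    return [code for field, code in REQUIRED_PATIENT_FIELDS.items()
            if not well.get(field)]
```

Well D (all fields present) yields an empty error list, matching AC5.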


TC-FILEIMPORT-009: Well validation - configuration matching

Verifies: REQ-FILEIMPORT-004 (AC6, AC7, AC8, AC9)

Method: TM-API

Priority: High

Preconditions:

  • Mix, thermocycler, and extraction instrument configured

Test Data:

  • Well A: mix = "UNKNOWN_MIX"
  • Well B: thermocycler serial = "UNKNOWN_TC"
  • Well C: patient sample missing extraction instrument
  • Well D: passive dye readings containing zero

Steps:

  1. Import wells with configuration mismatches
  2. Query error codes

Expected Results:

  • AC6: Well A has MIX_MISSING error
  • AC7: Well B has THERMOCYCLER_UNKNOWN error
  • AC8: Well C has EXTRACTION_INSTRUMENT_MISSING error
  • AC9: Well D has INVALID_PASSIVE_READINGS error

Automation Status: Automated

Jira: BT-4153


TC-FILEIMPORT-010: Well validation - tissue weight and crossover

Verifies: REQ-FILEIMPORT-004 (AC10, AC11, AC12, AC13)

Method: TM-API

Priority: Medium

Preconditions:

  • Well Label import approach configured

Test Data:

  • Well A: tissue_weight = null
  • Well B: tissue_weight = "hahahaha" (invalid format)
  • Well C: tissue_weight = 0.0000001 (exceeds decimal precision)
  • Well D: tissue_weight = 100000001 (exceeds max range)
  • Well E: tissue_weight = -1.999999 (negative value)
  • Well F: tissue_weight = 0.000001 (valid)
  • Well G: tissue_weight = 100000000 (valid max)
  • Well H: Y:XOVER label missing required T tag
  • Well I: Y:XOVER label with all required tags (D, E, T, R)

Steps:

  1. Import wells with various tissue weight values
  2. Import wells with crossover labels
  3. Query error codes

Expected Results:

  • AC10: Well A has INVALID_TISSUE_WEIGHT error
  • AC10: Well B has INVALID_TISSUE_WEIGHT error
  • AC10: Well C has INVALID_TISSUE_WEIGHT error
  • AC10: Well D has INVALID_TISSUE_WEIGHT error
  • AC10: Well E has INVALID_TISSUE_WEIGHT error
  • AC11: Well F passes validation (valid format and range)
  • AC11: Well G passes validation (valid max value)
  • AC12: Well H has CROSSOVER_LABEL_ERROR (missing required tags)
  • AC13: Well I passes validation (all required tags present)

Automation Status: Automated

Jira: BT-4312, BT-4008
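
The tissue weight rules implied by Wells A through G can be sketched as a single predicate. The exact bounds and six-decimal precision are inferred from the test data, not stated as the implementation.

```python
from decimal import Decimal, InvalidOperation

def tissue_weight_valid(value) -> bool:
    """Sketch of AC10/AC11: numeric, within 0.000001-100000000,
    at most six decimal places (rules inferred from the test data)."""
    if value is None:
        return False
    try:
        weight = Decimal(str(value))
    except InvalidOperation:
        return False
    if weight.is_nan() or weight < Decimal("0.000001") or weight > Decimal("100000000"):
        return False
    # More than six decimal places exceeds the precision limit.
    return -weight.as_tuple().exponent <= 6
```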


TC-FILEIMPORT-011: Run file name parsing

Verifies: REQ-FILEIMPORT-005 (AC1, AC2, AC3, AC4, AC5)

Method: TM-API

Priority: High

Preconditions:

  • Run File Name import approach configured
  • Thermocycler with serial_number "07" configured
  • Mix "Mix A" configured

Test Data:

  • File name: "Mix A.07.C049.466.MLS.eds"

Steps:

  1. Import file with structured name
  2. Query parsed run properties

Expected Results:

  • AC1: File name parsed following convention: mix_name.thermocycler_serial_number.protocol.batch.analyst_initials.file_extension
  • AC2: Mix name extracted as "Mix A"
  • AC3: Thermocycler serial number extracted as "07"
  • AC4: Batch number extracted as "466"
  • AC5: Thermocycler matched to configuration by serial number

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-3572, BT-3625
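
The AC1 naming convention can be sketched as a dot-split; this assumes the mix name itself contains no dots, a simplification not guaranteed by the requirement.

```python
# Sketch of AC1: mix_name.thermocycler_serial_number.protocol.batch.analyst_initials.file_extension
RUN_FILE_NAME_FIELDS = [
    "mix_name", "thermocycler_serial_number", "protocol",
    "batch", "analyst_initials", "file_extension",
]

def parse_run_file_name(file_name: str) -> dict:
    """Split a run file name into its conventional fields."""
    parts = file_name.split(".")
    if len(parts) != len(RUN_FILE_NAME_FIELDS):
        raise ValueError("file name does not match convention")
    return dict(zip(RUN_FILE_NAME_FIELDS, parts))
```

For "Mix A.07.C049.466.MLS.eds" this yields mix "Mix A", serial "07", and batch "466", as in AC2 to AC4.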


TC-FILEIMPORT-012: Virtual sample label generation

Verifies: REQ-FILEIMPORT-005 (AC6, AC7)

Method: TM-API

Priority: Medium

Preconditions:

  • Run File Name import approach
  • Batch number extractable from file name

Test Data:

  • File: "Mix A.07.C049.466.MLS.eds"
  • Well A1: name = "Unknown"
  • Well A2: name = "Unknown"
  • Well A3: name = "PEC"

Steps:

  1. Import file with wells named "Unknown"
  2. Query sample labels

Expected Results:

  • AC6: Well A1 sample_label = "466-A1" (virtual label generated)
  • AC6: Well A2 sample_label = "466-A2" (virtual label generated)
  • AC7: Virtual labels follow convention: batch-well_number
  • Well A3 sample_label = "PEC" (non-Unknown wells retain original)

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-3572
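
The AC6/AC7 labeling rule can be sketched directly; `resolve_sample_label` and its parameters are illustrative names.

```python
def resolve_sample_label(well_name: str, well_position: str, batch: str) -> str:
    """Sketch of AC6/AC7: wells named "Unknown" receive a virtual label in
    the batch-well_number form; other wells keep their original name."""
    if well_name == "Unknown":
        return f"{batch}-{well_position}"
    return well_name
```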


TC-FILEIMPORT-013: Baseline property import

Verifies: REQ-FILEIMPORT-006 (AC1, AC2)

Method: TM-API

Priority: High

Preconditions:

  • Run file contains baseline values

Test Data:

  • Run file with baseline_start = 3, baseline_end = 15 at standard path

Steps:

  1. Import run file with baseline values
  2. Query observation properties

Expected Results:

  • AC1: Baseline start and baseline end values imported when available
  • AC2: Values extracted from paths: wells[well].channels[channel].embedded.result.custom.baseline_start/baseline_end OR baseline_from/baseline_to

Automation Status: Automated

Jira: BT-3601, BT-3952
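
The AC2 fallback between the two key pairs can be sketched over the `custom` node; only the key names come from the requirement, the helper is hypothetical.

```python
def baseline_values(custom: dict):
    """Sketch of AC2: prefer baseline_start/baseline_end, falling back to
    the alternate baseline_from/baseline_to keys."""
    start = custom.get("baseline_start", custom.get("baseline_from"))
    end = custom.get("baseline_end", custom.get("baseline_to"))
    return start, end
```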


TC-FILEIMPORT-014: Auto-baseline and other observation properties

Verifies: REQ-FILEIMPORT-006 (AC3, AC4, AC5, AC6, AC7, AC8, AC9)

Method: TM-API

Priority: High

Preconditions:

  • Well Label or Meta Data import approach configured

Test Data:

  • Target A: auto_baseline = "1"
  • Target B: auto_baseline = "0"
  • Target C: auto_baseline property missing
  • Run file with concentration factor in notes
  • Run file with quantity in channels

Steps:

  1. Import run file with various auto_baseline settings
  2. Import run file with concentration factor and quantity
  3. Query observation properties

Expected Results:

  • AC3: auto_baseline property read and persisted
  • AC4: auto_baseline defaults to true when property missing
  • AC5: auto_baseline read from path: runs[0]->mixes[each]->channels[each]->custom->auto_baseline
  • AC6: auto_baseline "1" = true (automatic), "0" = false (manual)
  • AC7: Concentration factor extracted from run file notes
  • AC8: Concentration factor matching is case-insensitive
  • AC9: Quantity extracted from path: runs -> wells -> channels -> volume

Automation Status: Automated

Jira: BT-3899, BT-4074, BT-4093
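
The AC4/AC6 auto_baseline semantics can be sketched as below; `channel_custom` stands in for the `custom` node at the path named in AC5.

```python
def auto_baseline(channel_custom: dict) -> bool:
    """Sketch of AC4/AC6: missing property defaults to automatic (True);
    "1" means automatic, "0" means manual."""
    value = channel_custom.get("auto_baseline")
    if value is None:
        return True  # AC4: default when property missing
    return value == "1"
```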


TC-FILEIMPORT-015: Crossover well identification - Well Label approach

Verifies: REQ-FILEIMPORT-007 (AC1, AC2)

Method: TM-API

Priority: High

Preconditions:

  • Well Label import approach configured

Test Data:

  • Well A: label = "|T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|"
  • Well B: label = "PEC" (control label, non-crossover)

Steps:

  1. Import well with Y:XOVER tag
  2. Import well without Y:XOVER tag
  3. Query well crossover status

Expected Results:

  • AC1: Well A identified as crossover well (Y:XOVER tag recognized)
  • AC2: Well A has valid crossover label (contains D, E, T, R tags)
  • Well B not identified as crossover

Automation Status: Automated

Jira: BT-4312


TC-FILEIMPORT-016: Crossover well role determination

Verifies: REQ-FILEIMPORT-007 (AC3, AC4)

Method: TM-API

Priority: Medium

Preconditions:

  • Well Label import approach configured

Test Data:

  • Well A: label = "|A:12345|T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|" (with accession)
  • Well B: label = "|T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901|" (without accession)

Steps:

  1. Import well with accession in crossover label
  2. Import well without accession in crossover label
  3. Query well roles

Expected Results:

  • AC3: Well A identified as Sample well (has accession A tag)
  • AC4: Well B identified as Crossover role (no accession A tag)

Automation Status: Automated

Jira: BT-4312
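
The crossover rules from TC-015 and TC-016 can be sketched as a tag parse plus role check. The error string and role names follow the expected results; the helpers themselves are hypothetical.

```python
def parse_label(label: str) -> dict:
    """Parse a pipe-delimited well label such as
    |T:HDV|E:E07-06|D:032021|Y:XOVER|R:LO POS|C:3901| into a tag dict."""
    tags = {}
    for part in label.strip("|").split("|"):
        if ":" in part:
            key, value = part.split(":", 1)
            tags[key] = value
    return tags

REQUIRED_XOVER_TAGS = {"D", "E", "T", "R"}

def crossover_info(label: str):
    """Return None for non-crossover labels, an error code for invalid
    crossover labels, or the role (Sample/Crossover) otherwise."""
    tags = parse_label(label)
    if tags.get("Y") != "XOVER":
        return None
    if not REQUIRED_XOVER_TAGS <= tags.keys():
        return "CROSSOVER_LABEL_ERROR"
    # AC3/AC4: an accession (A tag) makes it a Sample well.
    return "Sample" if "A" in tags else "Crossover"
```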


TC-FILEIMPORT-017: Wildcard target matching

Verifies: REQ-FILEIMPORT-008 (AC1, AC2, AC3)

Method: TM-API

Priority: Medium

Preconditions:

  • Configuration with wildcard target names

Test Data:

  • Config target A: "Target*"
  • Config target B: "Target*Name"
  • Run file target 1: "Target A"
  • Run file target 2: "Target - Name"

Steps:

  1. Import run file with targets requiring wildcard matching
  2. Query matched target names

Expected Results:

  • AC1: Asterisk (*) recognized as wildcard character
  • AC2: "Target A" matches "Target*" (wildcard at end)
  • AC2: "Target - Name" matches "Target*Name" (wildcard in middle)
  • AC3: Matched targets use configured name (e.g., "Target*" not "Target A")

Automation Status: Automated

Jira: BT-5283
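
The AC1 to AC3 behavior can be sketched with glob-style matching. Using `fnmatch` is a convenience here (it also honors `?` and `[]`, which the requirement does not mention); `match_target` is a hypothetical helper.

```python
import fnmatch

def match_target(run_target: str, configured_targets: list):
    """Sketch of AC1-AC3: '*' is the wildcard, and a match returns the
    configured name rather than the run-file name."""
    for configured in configured_targets:
        if fnmatch.fnmatchcase(run_target, configured):
            return configured
    return None
```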


TC-FILEIMPORT-018: Prepend fake cycles for analysis

Verifies: REQ-FILEIMPORT-009 (AC1, AC2, AC3)

Method: TM-API

Priority: Medium

Preconditions:

  • Target configured with prepend_cycles value

Test Data:

  • Target A: prepend_cycles = 0, readings_count = 50
  • Target A: prepend_cycles = 6, readings_count = 50

Steps:

  1. Import well with prepend_cycles = 0
  2. Verify DXAI analysis request readings count
  3. Import well with prepend_cycles = 6
  4. Verify DXAI analysis request readings count

Expected Results:

  • AC1: Configured number of fake cycles prepended before actual readings
  • AC2: Prepended cycles included in DXAI analysis request (50 vs 56 readings)
  • AC3: Prepending cycles affects classification results (Neg vs Pos in test data)

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-4829
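
The AC1/AC2 arithmetic (50 readings plus 6 fake cycles gives 56) can be sketched as below. Repeating the first reading as the fake-cycle value is an assumption; the requirement does not specify what the fake cycles contain.

```python
def readings_for_analysis(readings: list, prepend_cycles: int) -> list:
    """Sketch of AC1/AC2: prepend the configured number of fake cycles
    before the real readings sent for analysis."""
    fake = readings[0] if readings else 0.0  # value choice is hypothetical
    return [fake] * prepend_cycles + readings
```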


TC-FILEIMPORT-019: Folder structure - processing workflow

Verifies: REQ-FILEIMPORT-010 (AC1, AC2, AC3, AC4)

Method: TM-API

Priority: High

Preconditions:

  • S3 bucket with standard folder structure

Test Data:

  • Valid run file for successful import
  • Invalid run file for failed import

Steps:

  1. Place file in toPcrai folder
  2. Observe file during processing
  3. Complete successful import
  4. Observe final file location
  5. Repeat with invalid file
  6. Observe error file location

Expected Results:

  • AC1: Folder structure maintained: toPcrai (upload), Processing (in-progress), Problem_Files (errors), LIMS_Reports (exports)
  • AC2: Files processed from toPcrai upload folder
  • AC3: Files moved to Processing folder during import
  • AC4: Successfully imported files archived to separate storage

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-637, BT-636


TC-FILEIMPORT-020: Folder structure - error and archive handling

Verifies: REQ-FILEIMPORT-010 (AC5, AC6, AC7)

Method: TM-API

Priority: Medium

Preconditions:

  • Archive S3 bucket configured
  • LIMS export functionality available

Test Data:

  • File that fails import validation
  • Successful import generating LIMS export

Steps:

  1. Import file that fails validation
  2. Verify file moved to Problem_Files
  3. Verify archive in separate S3 bucket (not client-accessible)
  4. Generate LIMS export
  5. Verify export in LIMS_Reports folder

Expected Results:

  • AC5: Failed imports moved to Problem_Files folder
  • AC6: LIMS exports stored in designated reports folder
  • AC7: Archive files stored in separate S3 bucket not accessible to clients

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-636


TC-FILEIMPORT-021: Duplicate file import prevention

Verifies: REQ-FILEIMPORT-011 (AC1, AC2, AC3)

Method: TM-API

Priority: High

Preconditions:

  • File successfully imported previously

Test Data:

  • File A: previously imported
  • File B: new file (not imported)

Steps:

  1. Attempt to import File A again
  2. Verify rejection
  3. Import File B
  4. Verify File B succeeds

Expected Results:

  • AC1: File A import rejected (duplicate detected)
  • AC2: Duplicate error indication provided
  • AC3: File B imports successfully (duplicate detection doesn't block different files)

Automation Status: Manual (S3/DXAI infrastructure pipeline — not exercised by Behat API)

Jira: BT-1322
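
The AC1 to AC3 behavior can be sketched as a guard around import attempts. Keying on file name alone is an assumption; the real service may use a content hash or import record.

```python
class DuplicateImportGuard:
    """Sketch of AC1-AC3: reject previously imported files while
    allowing new files through."""

    def __init__(self):
        self._imported = set()

    def try_import(self, file_name: str) -> bool:
        if file_name in self._imported:
            return False  # AC1/AC2: duplicate detected and rejected
        self._imported.add(file_name)
        return True
```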


TC-FILEIMPORT-022: Invalid well data persistence

Verifies: REQ-FILEIMPORT-012 (AC1, AC2, AC3)

Method: TM-API

Priority: Medium

Preconditions:

  • Import service available

Test Data:

  • Well with invalid values:
    • accession = "INVALID_ACC_123!"
    • extraction_date = "2099-01-01"
    • tissue_weight = "not_a_number"
    • batch_number = "INVALID_BATCH"

Steps:

  1. Import well with invalid values
  2. Query well data from database
  3. Verify invalid values are retrievable

Expected Results:

  • AC1: Invalid values stored even when they fail validation
  • AC2: Invalid values retrievable for display (query returns original values)
  • AC3: User can identify what needs correction (error codes paired with original values)

Automation Status: Automated (BT-9771: 2 scenarios testing error well data persistence with original values retrievable)

Jira: BT-5525


TC-FILEIMPORT-023: File traceability preservation

Verifies: REQ-FILEIMPORT-013 (AC1, AC2, AC3)

Method: TM-API

Priority: Medium

Preconditions:

  • Import service available

Test Data:

  • File: "MyRun_20250115.sds" (original name referenced in expected results), imported by an authenticated user

Steps:

  1. Import file as authenticated user
  2. Query run record
  3. Verify original file name preserved
  4. Verify user information preserved

Expected Results:

  • AC1: Processed files contain original file name "MyRun_20250115.sds"
  • AC2: Original file name preserved for audit purposes
  • AC3: User information preserved along with file information

Automation Status: Automated (BT-9771: 1 scenario testing run_name preservation and run status queryability)

Jira: [Pending - requires investigation]


Supplementary Gap-Fill Tests

| TC | Description | Covers |
| --- | --- | --- |
| TC-FILEIMPORT-VALIDATION | Import validation: missing test code tag (C) prevents rule execution | REQ-FILEIMPORT-004: Well validation - blocking error on missing tag |

Gap Analysis

No gaps identified. All 73 acceptance criteria have test coverage; 11 of the 23 test cases are exercised manually via the S3/DXAI pipeline rather than through the Behat API suite.

Coverage by AC Type

| AC Category | Count | Covered | Notes |
| --- | --- | --- | --- |
| Trigger/Automation | 2 | 2 | S3 trigger behavior — manual pipeline test |
| File Validation | 5 | 5 | Extension, size limits — manual pipeline test |
| Data Parsing | 15 | 15 | JSON extraction, property mapping — Behat API |
| Well Validation | 13 | 13 | Error codes for all validation failures — Behat API |
| External Integration | 6 | 6 | DXAI API, calibration — manual pipeline test |
| Folder Management | 7 | 7 | S3 folder routing — manual pipeline test |
| Error Handling | 12 | 12 | All error codes and persistence — Behat API |
| Traceability | 3 | 3 | File name and user preservation — Behat API |
| Special Features | 10 | 10 | Crossover, wildcard — Behat API; prepend_cycles — manual pipeline test |

Traceability to Existing Tests

| Test Case | Jira Test | Automation |
| --- | --- | --- |
| TC-FILEIMPORT-001, TC-FILEIMPORT-002, TC-FILEIMPORT-003 | BT-637 | Manual (S3 pipeline) |
| TC-FILEIMPORT-004 | BT-103 | Behat API |
| TC-FILEIMPORT-005, TC-FILEIMPORT-006 | BT-103 | Manual (DXAI pipeline) |
| TC-FILEIMPORT-007, TC-FILEIMPORT-008, TC-FILEIMPORT-009, TC-FILEIMPORT-010 | BT-4153, BT-4312, BT-4008 | Behat API |
| TC-FILEIMPORT-011, TC-FILEIMPORT-012 | BT-3572, BT-3625 | Manual (S3 pipeline) |
| TC-FILEIMPORT-013, TC-FILEIMPORT-014 | BT-3601, BT-3899, BT-4074, BT-3952, BT-4093 | Behat API |
| TC-FILEIMPORT-015, TC-FILEIMPORT-016 | BT-4312 | Behat API |
| TC-FILEIMPORT-017 | BT-5283 | Behat API |
| TC-FILEIMPORT-018 | BT-4829 | Manual (DXAI pipeline) |
| TC-FILEIMPORT-019, TC-FILEIMPORT-020 | BT-637, BT-636 | Manual (S3 pipeline) |
| TC-FILEIMPORT-021 | BT-1322 | Manual (S3 pipeline) |
| TC-FILEIMPORT-022 | BT-5525 | Behat API |
| TC-FILEIMPORT-023 | Pending | Behat API |

Notes

  • FILEIMPORT is a backend-focused domain with no direct user interaction
  • All requirements are testable at the API/service boundary
  • TM-HYB used only for DXAI integration tests requiring external service coordination
  • REQ-FILEIMPORT-013 (File Traceability) lacks a Jira reference and is flagged for investigation
  • Existing Gherkin tests in SRS align with test cases defined here
  • Validation error codes are comprehensively tested across TC-007 through TC-010