Version: 3.0.1

SDD Algorithms

Overview

This document contains algorithm descriptions, process flows, and rule execution logic extracted from the System Design Specifications.

Running Rules Architecture

Running rules

Currently, each rule calls SQL directly to determine its applicability and to collect the metadata needed for its execution. This potentially causes bottlenecks, and an alternative approach is proposed below.

image9

image copy of diagram

image10

Run Import Process

Run Import
The process of importing a run file into PCR.AI.

A run file typically contains data generated from laboratory instruments during a specific run. Importing this file into a PCR.AI system allows for efficient storage, organization, and management of the data, so that scientists can easily find and analyze it, connect it to specific samples, and generate reports from the information.

image13

In the diagram above, the "Run File" represents the file containing data from a lab experiment that needs to be imported into the PCR.AI system. The arrows show the process of importing the run file into the system, where the data will be stored and managed.

High Level Use Case Diagram

image14

Use Case Action Explanations

Verify format (.sds, .pcrd, etc.)

Validate Run file

Run file validation will be handled by the Client Application

Allowed user types

Any user except CLIENT_ADMIN and MANAGER users can upload run files.

Allowed File types
  • .sds
  • .ixo
  • .pcrd
  • .eds

Only the types mentioned above are allowed.

Mechanism behind the validation

The validation mechanism checks each file to determine if it belongs to the allowed file types. It then separates the files into two categories: valid files and invalid files.
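As a sketch, this split can be expressed as follows (the function name and shapes are illustrative, not the Client Application's actual API):

```javascript
// Allowed run-file extensions per this SDD.
const ALLOWED_EXTENSIONS = ['.sds', '.ixo', '.pcrd', '.eds'];

// Hypothetical helper: split a list of file names into valid and
// invalid groups based purely on their extension.
function splitRunFiles(fileNames) {
  const valid = [];
  const invalid = [];
  for (const name of fileNames) {
    const dot = name.lastIndexOf('.');
    const ext = dot >= 0 ? name.slice(dot).toLowerCase() : '';
    (ALLOWED_EXTENSIONS.includes(ext) ? valid : invalid).push(name);
  }
  return { valid, invalid };
}
```

The two resulting groups drive the error/warning notifications described below.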

Error Handling
  • If uploaded by a CLIENT_ADMIN or MANAGER user:
  1. Notify the user with an error message
  2. Stop executing further
  • If there are no valid files: notify the user with an error message
  • If some files are invalid and some are valid: notify the user with a warning message

Proceed after validation

The valid files will be passed to another module in the client application for uploading via Vapor.

Parse Run File using Parser API

The server application's module will take care of this task, which involves several steps.

Steps
1. Gathering all files
  • If the 'use multiple site' feature is disabled, the system moves all files from the root directory into the toPcrai folder of the default site. If the 'use multiple site' feature is enabled, the system uses the toPcrai folder of the related site.
  • It then takes all files within the toPcrai folder of every site.
  • The system assigns a unique name to each file, moves it into the parsing folder, and changes the import progress state to CONVERTING.
  • Finally, the system calls the module responsible for handing each file over to the Parser.
2. Module that hands files over to the parser
  • This module searches for the first converting file. If no file is found, it stops execution.
  • That file is handed over to the Parser module.
  • After the Parser completes its task with the given file, this module invokes itself recursively.
3. Parse runfile
  • First, the parser verifies whether the given file has already been imported. If it has, the file is moved to the *problemfiles* folder, the import state is changed to DUPLICATE, and execution is halted.

  • If no duplication is detected, the file is sent to an external API, which responds with an object containing the raw data of the given runfile. If the external API encounters an error with the runfile, the file is moved to the *problemfiles* folder, the import state is changed to PARSE_ERROR, and execution is halted.

  • If everything proceeds smoothly without interruption, the Parser converts the response object into a serialized .json file.

image15

Curve Alignment Algorithm

Overview

Curve alignment is a baseline normalization technique for real-time PCR (qPCR) fluorescence curves. It standardizes curves for comparison across multiple samples or controls.

Scientific Purpose

  1. Eliminates Background Noise: Early PCR cycles (3-5) represent background fluorescence from instrument baseline, reagent fluorescence, non-specific binding, and environmental factors.
  2. Standardizes Comparison: Aligned curves enable meaningful comparison of amplification efficiency, CT values, and curve shapes.
  3. Enables Quality Analysis: Baseline normalization is crucial for accurate CT determination in comparative analysis.

Baseline Calculation Formula

The system calculates normalized fluorescence using a 3-cycle baseline average:

baseline_average = (reading[3] + reading[4] + reading[5]) / 3
normalized_reading[i] = reading[i] - baseline_average

Implementation:

getAverageSeries(readings) {
  let average = (readings[3] + readings[4] + readings[5]) / 3
  return readings.map(reading => reading - average)
}

Alignment Activation Rules

Alignment is automatically disabled when only one series is shown:

| Condition | Alignment State | Y-Axis Label |
| --- | --- | --- |
| Single curve displayed | Disabled (grayed out) | "Fluorescence" |
| Multiple curves displayed | Available | "Normalized fluorescence" (when enabled) |
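The SDD later references a shouldDisableAlignCurves() check; a minimal sketch of the behavior above, assuming the check simply counts the visible series (the label helper is hypothetical):

```javascript
// Alignment is only meaningful when more than one curve is displayed,
// so the toggle is disabled (grayed out) for a single series.
function shouldDisableAlignCurves(visibleSeriesCount) {
  return visibleSeriesCount <= 1;
}

// Hypothetical helper: the Y-axis label follows the alignment state.
function yAxisLabel(alignEnabled, visibleSeriesCount) {
  if (shouldDisableAlignCurves(visibleSeriesCount) || !alignEnabled) {
    return 'Fluorescence';
  }
  return 'Normalized fluorescence';
}
```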

Component Implementation

| Component | Alignment Behavior |
| --- | --- |
| ObservationModal.vue | Multiple curve options: curveCurrent (disabled), curveControls, curvesWell, curvesTarget |
| RunPlateMapObservationModal.vue | Enabled when multiple wells selected |
| WellObservationsModal.vue | Enabled when multiple target observations exist |
| Print views | Always disabled - shows raw fluorescence for lab records |

Data Flow

User toggles checkbox
        ↓
shouldDisableAlignCurves() check
        ↓
[If enabled] getAverageSeries() transformation
        ↓
Chart renders with normalized data
Y-axis label updated to "Normalized fluorescence"

Design Notes

  • Baseline cycles (3, 4, 5) are currently hardcoded
  • Single curve analysis uses raw fluorescence (no benefit from normalization)
  • Print views always show raw fluorescence for consistency with lab records
  • User receives tooltip feedback when alignment is unavailable

Run Analysis

Run Analysis

In the PCR.AI system, "run file analysis" refers to the process of analyzing the data generated from a specific run. It involves processing and interpreting the raw data obtained from instruments in the laboratory. This analysis step extracts meaningful insights from the collected data, such as identifying patterns, calculating metrics, generating reports, or comparing results against established standards and rules. The results of the analysis are often used for quality control, decision-making, and reporting purposes in a laboratory.

High Level Use Case Diagram

image16

Configuration Upload Algorithm

Configuration Upload

The configuration import process in a PCR.AI system allows for the seamless integration of data from an Excel file. This process plays a crucial role in ensuring that the PCR.AI system aligns with the specific requirements of the client.

When initiating the configuration import, the PCR.AI system accesses the Excel file, which serves as a repository for the client's configuration data. This data includes various parameters, settings, and rules that dictate how the PCR.AI system functions for the client's specific needs.

How Kit configuration upload works
Validate Configuration File

While uploading, the system validates the Excel file against the following conditions:

  • The system first checks the file format to ensure compatibility and prevent processing errors. It verifies that the file adheres to the expected formats (.xls, .xlsx).
  • Once the file format is validated, the system verifies the sheet name(s) within the file. This step ensures that the required sheet(s) are present and correctly named, preventing any discrepancies during data processing.
  • The system examines the data types within each sheet, verifying that they match the expected format. This validation prevents data inconsistencies or errors caused by incorrect data types, such as using text instead of numbers or vice versa. Data must match the requirements.
  • During the import process, each row or group of rows is validated. If validation fails, the system marks them with an "Ignored" status. If validation succeeds, the system assigns an "Imported" state to the Excel file.
  • As a result of this process, the database is updated.
High Level Work Flow Diagram

image17

Configuration Export Algorithm

Configuration Export

The configuration export process in a PCR.AI system allows for the extraction of configuration data from the system into an Excel file. This process plays a crucial role in capturing and documenting the current configuration settings and parameters of the PCR.AI system.

When initiating the configuration export, the PCR.AI system collects the relevant configuration data, which includes various components and settings that have been customized within the system.

How Kit configuration export works
Collecting Configuration Selections
  • The system will gather the configurations the user has selected for export, along with predefined configurations (Dyes, Roles, Rules, WestgardEvents, and Specimens).
  • This ensures that essential configurations are included in the export, regardless of user selections.
Generating the Configuration File
  • The system will send a request to the API to generate the configuration file.
  • The API will retrieve the required data for each configuration category from the database.
  • The raw data will be processed and converted into a readable format for inclusion in the export file.
Setting the File Name and Format
  • The generated file will be named 'kit-configurations-<logged_in_site_name>.xlsx', where "<logged_in_site_name>" represents the name of the logged-in site.
  • The file format will be .xlsx (Microsoft Excel format).
Selecting the File Save Location
  • A file browsing dialog appears, allowing the user to choose where to save the exported file.
  • The user browses their computer's files and folders to select the desired location.
  • Once the location is chosen, the user clicks the "Save" button to save the file.
High Level Flow Chart Diagram

image18

Combined Outcomes Algorithm

Combined Outcomes Configurations
Usage of Combined Outcomes

The user can determine the outcome based on the combination of mixes and how well these combinations align with the targets. Depending on the results of these combinations, the user can assign different outcomes.

How Combined Outcomes work

When the run file is uploaded, the system will execute the COMBINED_OUTCOME rule to compare well results with each Combined Outcome and determine the appropriate Outcome for the well.
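A minimal sketch of how such a comparison could work (the configuration shape, target names, and outcome labels are illustrative, not the actual COMBINED_OUTCOME implementation):

```javascript
// Hypothetical configuration: a Combined Outcome maps a set of
// per-target CLS results to a single well outcome.
const combinedOutcomes = [
  { combination: { TargetA: 'POS', TargetB: 'NEG' }, outcome: 'DETECTED' },
  { combination: { TargetA: 'NEG', TargetB: 'NEG' }, outcome: 'NOT_DETECTED' },
];

// Return the outcome of the first configured combination whose
// per-target CLS values all match the well's results.
function resolveOutcome(wellResults, outcomes) {
  for (const { combination, outcome } of outcomes) {
    const matches = Object.entries(combination)
      .every(([target, cls]) => wellResults[target] === cls);
    if (matches) return outcome;
  }
  return null; // no configured combination matched
}
```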

Combined Outcomes create Outcomes from CLS results

The different configured targets' CLS combinations should give specific outcomes.

image21

Import, open the run file with different CLS combinations

image22

The COMBINED OUTCOMES rule is triggered. The outcomes are based on the configured CLS combination.

Error Code Generation System

The error code system categorizes and assigns error codes during run analysis to enable auditing, debugging, and quality control workflows.

Error Code Architecture

Error codes are configurable database entities managed through Kit Configuration (REQ-KITCFG-005). The system distinguishes between:

| Category | Description | Examples |
| --- | --- | --- |
| Seed Codes | Must exist in configuration - rule implementations are hardcoded to output them | WG12S_HIGH_WELL, FAILED_POS_TARGET, UNKNOWN_MIX |
| Custom Codes | Added by clients for Combined Outcomes and Resolutions | Client-specific codes for custom workflows |

Error Code Entity Properties (per REQ-KITCFG-005):

  • code: Unique identifier (e.g., "WG12S_HIGH_WELL")
  • message: Human-readable description
  • type: Categorization for filtering/reporting
  • affects: Assignment level (well or target)
  • prevents_analyse: Whether error blocks further processing
  • lims_export: Whether included in LIMS export
  • control_error, westgard_error, is_inhibited: Classification flags

Error Code Taxonomy

Error codes are categorized by three dimensions:

| Dimension | Values | Description |
| --- | --- | --- |
| Scope Level | WELL, TARGET, MIX | Granularity at which the error applies |
| Category | Analytical, Blocking | Whether error is QC violation or prevents processing |
| Direction | HIGH, LOW, (none) | For directional violations (e.g., CT too high) |

Naming Convention: {CONDITION}_{DIRECTION}_{LEVEL}

  • Example: WG12S_HIGH_WELL = Westgard 1:2s rule, high direction, well level
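A sketch of parsing this convention (the helper is hypothetical; it assumes only the level and direction tokens listed in the taxonomy above):

```javascript
// Tokens from the error-code taxonomy in this document.
const LEVELS = ['WELL', 'TARGET', 'MIX'];
const DIRECTIONS = ['HIGH', 'LOW'];

// Hypothetical parser for {CONDITION}_{DIRECTION}_{LEVEL} codes.
// Direction is optional, so a two-part code is condition + level.
function parseErrorCode(code) {
  const parts = code.split('_');
  const level = LEVELS.includes(parts[parts.length - 1]) ? parts.pop() : null;
  const direction = DIRECTIONS.includes(parts[parts.length - 1]) ? parts.pop() : null;
  return { condition: parts.join('_'), direction, level };
}
```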

Blocking vs Analytical Errors

Blocking Errors halt processing at specific stages:

| Stage | Example Errors | Behavior |
| --- | --- | --- |
| Parsing | INVALID_PASSIVE_READINGS, THERMOCYCLER_UNKNOWN, INVALID_FILENAME | File rejected, no wells created |
| Analysis | SAMPLE_LABEL_IS_BAD, UNKNOWN_MIX, INVALID_EXTRACTION_DATE | Well marked as errored, no further analysis |

Analytical Errors flag QC violations but allow processing to continue:

  • Westgard rule violations (WG12S, WG13S, WG22S, WG7T)
  • Control validation failures (BCC, BNC, BPEC)
  • Inhibition detection (INH, IC_FAILED)

Error Code Generation by Rule Category

| Rule Category | Generated Codes | Trigger Condition |
| --- | --- | --- |
| WG12S (1:2s) | WG12S_HIGH_WELL, WG12S_HIGH_TARGET, WG12S_LOW_WELL, WG12S_LOW_TARGET | CT > 2 SD from mean |
| WG13S (1:3s) | WG13S_HIGH_*, WG13S_LOW_* | CT > 3 SD from mean |
| WG22S (2:2s) | WG22S_HIGH_*, WG22S_LOW_* | 2 consecutive CTs > 2 SD same direction |
| WG7T (7T) | WG7T_HIGH_*, WG7T_LOW_* | 7 consecutive CTs trending same direction |
| BCC | FAILED_POS_WELL, FAILED_POS_TARGET, CONTROL_OUT_OF_RANGE_* | Control CT outside configured limits |
| INH | INH_WELL, IC_FAILED | Internal control indicates inhibition |

Sample Label Validation

Sample labels must match pattern: |{tag}:{value}|

| Valid Tags | Description |
| --- | --- |
| A, C, D, E, T, R, N, X, Y, W | Application-defined tag codes |

Invalid labels generate SAMPLE_LABEL_IS_BAD blocking error.
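A hedged sketch of validating a single segment (the regex and helper are illustrative; the actual pattern semantics may differ):

```javascript
// Hypothetical check for a single |{tag}:{value}| segment, where the
// tag must be one of the application-defined codes listed above.
const SAMPLE_LABEL = /^\|[ACDETRNXYW]:[^|]+\|$/;

function validateSampleLabel(label) {
  return SAMPLE_LABEL.test(label)
    ? { ok: true, error: null }
    : { ok: false, error: 'SAMPLE_LABEL_IS_BAD' };
}
```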

Error Propagation

Errors propagate based on scope:

  1. Well-level errors: Affect entire well
  2. Target-level errors: Affect specific target within well
  3. Mix-level errors: Affect specific mix within target

Error codes are stored in the errors table and linked to their respective scope (well_id, target_id, or mix_id).

Error Resolution Workflow

Error codes integrate with the Error Resolution system (REQ-KITCFG-006) to enable user-driven resolution of QC errors:

1. ERROR GENERATED
   Rule outputs error code → assigned to well/target/mix
        ↓
2. ERROR DISPLAYED
   Run File Report shows error code and message
   User reviews affected wells
        ↓
3. RESOLUTION APPLIED (optional)
   User selects configured resolution action
   System applies: outcome change, rule skips, affected wells
        ↓
4. RE-ANALYSIS (if configured)
   System re-runs rules, skipping those specified in resolution
   Example: "RPT" outcome skips most rules since repeat testing required

Resolution Configuration (per REQ-KITCFG-006):

  • resolution_message: Text describing the action taken
  • affected_wells: Which wells receive resolution (by LIMS status, error code, or "All")
  • rules_skipped: Rules to exclude during re-analysis
  • outcome_on_resolution: LIMS status to apply after resolution
  • resolution_level: Scope - Well, All Observations, or Discrepant Observations only

Processing Control Flow

The prevents_analyse flag on each error code determines analysis flow (REQ-RULES-ENGINE-003):

Rule executes → outputs error code
  • prevents_analyse = true → Analysis HALTS; return error code
  • prevents_analyse = false → Continue to next rule; accumulate warnings

Wells with blocking errors cannot:

  1. Have subsequent rules executed
  2. Be included in Westgard history calculations
  3. Be exported to LIMS
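The flow above can be sketched as follows (the rule and error-code shapes are illustrative):

```javascript
// Run rules in order against one well. The first error code flagged
// prevents_analyse halts analysis; analytical errors accumulate as
// warnings and processing continues.
function analyseWell(well, rules) {
  const warnings = [];
  for (const rule of rules) {
    const error = rule.execute(well); // null, or an error-code object
    if (!error) continue;
    if (error.prevents_analyse) {
      return { halted: true, error, warnings };
    }
    warnings.push(error); // analytical error: keep going
  }
  return { halted: false, error: null, warnings };
}
```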

Related SRS Requirements:

  • REQ-ERRORCODES-001 through REQ-ERRORCODES-011 (error code generation)
  • REQ-KITCFG-005 (error code configuration)
  • REQ-KITCFG-006 (error resolutions)
  • REQ-RULES-ENGINE-003 (error/warning flow control)

See: sdd-configuration.md for error code and resolution configuration schemas.

Run Status Calculation

Run Status is the aggregate readiness state displayed on the Runfile List, calculated from the collective state of all patient wells within a run.

Status Determination Logic

INPUT: All wells in run where role = PATIENT

Count wells by state:
  • total_patient_wells
  • wells_with_errors
  • wells_exportable (no errors)
  • wells_exported

Determine the base status:
  • If all patient wells are exported → "All wells exported"
  • Else if no wells have errors → "All wells ready for export"
  • Else if any wells are exportable → "Some wells ready for export with errors to resolve"
  • Else → "No export - errors to resolve"

Then check Westgard:
  • If an unresolved Westgard error applies → "Reanalysis required"
  • Otherwise → use the status from above

Status Values

| Status | Condition | Export Allowed |
| --- | --- | --- |
| All wells ready for export | All patient wells have no errors | Yes (all) |
| Some wells ready for export with errors to resolve | ≥1 error AND ≥1 exportable | Yes (partial) |
| No export - errors to resolve | ≥1 error AND 0 exportable | No |
| All wells exported | All patient wells have been exported | N/A (complete) |
| Reanalysis required | Unresolved Westgard error affects run | No |

Calculation Rules

  1. Scope: Only patient wells are counted; control wells are excluded from status calculation
  2. Error definition: Well has ≥1 error code where prevents_analyse = true OR lims_export = false
  3. Exportable definition: Well has no blocking errors AND has not been exported
  4. Westgard override: Unresolved Westgard errors on controls propagate to run status regardless of patient well states
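Combining the table and rules above, the calculation can be sketched as (field names are illustrative):

```javascript
// Status decision over patient wells only; the Westgard override is
// applied first since it trumps the per-well states.
function runStatus(patientWells, hasUnresolvedWestgard) {
  if (hasUnresolvedWestgard) return 'Reanalysis required';
  const errored = patientWells.filter(w => w.hasError);
  const exported = patientWells.filter(w => w.exported);
  if (exported.length === patientWells.length) return 'All wells exported';
  if (errored.length === 0) return 'All wells ready for export';
  const exportable = patientWells.filter(w => !w.hasError && !w.exported);
  return exportable.length > 0
    ? 'Some wells ready for export with errors to resolve'
    : 'No export - errors to resolve';
}
```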

Status Transitions

[New Run Imported]
        ↓
"All wells ready for export"
  • [error detected] → "Some/No export..." → [resolve error] → back to "All wells ready for export"
  • [export wells] → "Some wells ready..." → [export remaining] → "All wells exported"
  • [export all] → "All wells exported"

Implementation Notes

  • Status is recalculated on: run import, error resolution, well export, reanalysis
  • Status is cached in runs.status column for list display performance
  • Westgard status check queries control wells for the same mix/target within configurable history window

Related SRS Requirements:

  • REQ-RUNFILE-008 (Display Run Status)

See: Error Code Generation System (above) for error code definitions affecting status.

Complex Functionality - Rules Engine

Complex functionality
Running Rules
Rules Introduction

Rules in PCR.AI are typically defined by laboratory personnel or system administrators. These rules establish specific conditions, constraints, or requirements that the data must meet. Rules can be created based on regulatory guidelines, internal quality control procedures, or specific protocols.

Rule application

Once defined, the rules are applied to the relevant data within the PCR.AI system. This can include data imported from instruments (from run files). The PCR.AI system evaluates the data against the defined rules to determine whether it meets the specified criteria.

Data validation

The application of rules involves data validation. The PCR.AI system checks whether the data satisfies the defined rules or if any exceptions or violations are present.

Error detection and notification

If any data fails to meet the defined rules, the PCR.AI system identifies the errors or inconsistencies. Notifications or alerts can be generated to inform laboratory personnel about the issues. These notifications will be sent to the responsible users for data review or correction.

Action and resolution

Upon receiving the error notifications, laboratory personnel can take appropriate actions to address the identified issues. This may involve actions such as "Re-extract this well", "Re-amplify this sample", "Mark as TNP", or "Exclude this well from being exported".

Reporting and compliance

Rules in PCR.AI help ensure compliance with regulatory requirements and internal quality control standards. By enforcing rules, the PCR.AI system generates accurate and reliable data, which can be utilized for reporting purposes, audits, or other compliance-related activities.

Flow Chart Diagram

image23

Westgard Rules

Rule Example : Westgard Rules
Simple Westgard rules

These rules look at the CT (or quantity) of the control being measured and check that it is within certain ranges. The following rules are available in PCR.AI:

1:2S rule

This is triggered when a control is more than two SDs from the mean. For example, say you have a mean of 30 and an SD of 1. Any control having a CT of less than 28 (M - 2SD = 30 - (2 × 1) = 28) or more than 32 (M + 2SD = 30 + (2 × 1) = 32) will trigger this rule.

image24

(all images: Wikipedia)

1:3S rule

This is triggered when a control is more than three SDs from the mean. For example, say you have a mean of 30 and an SD of 1. Any control having a CT of less than 27 or more than 33 will trigger this rule.

image25
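The 1:2S and 1:3S checks differ only in the SD multiplier, so both can be sketched with one helper (an illustration, not the production rule class):

```javascript
// Westgard 1:nS check: flag a control whose CT is more than n SDs
// from the mean, recording the direction of the violation.
function checkSdRule(ct, mean, sd, n) {
  if (ct > mean + n * sd) return { triggered: true, direction: 'HIGH' };
  if (ct < mean - n * sd) return { triggered: true, direction: 'LOW' };
  return { triggered: false, direction: null };
}
```

With mean 30 and SD 1 as in the examples above, a CT of 32.5 triggers 1:2S (HIGH) while 31.5 does not.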

Westgard rules looking at history

These rules look not only at the current control's CT (quantity) but also at recent controls' results to check that there is not an indication of something unexpected occurring.

2:2S rule

This rule is triggered if both the current control has triggered 1:2S and the previous control has triggered 1:2S or 1:3S etc (either both having high CT/quantity or both having low).

Example: Mean is 30, SD is 1.

Current control CT is 32.1 and the previous was 33.2 (i.e. both were more than +2SD from the mean): triggers

image26

  • Current control CT is 27.1 and the previous was 26 (i.e. both were more than -2SD from the mean): triggers
  • Current control CT is 32.1 and the previous was 26: does not trigger
7T rule (from westgard.com)

This rule is triggered when seven consecutive control measurements trend in the same direction, i.e., get progressively higher or progressively lower.

image27
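A minimal sketch of the 7T check (assuming a strict monotonic trend over the last seven values; the real rule may treat ties differently):

```javascript
// 7T check: true when the last seven control values trend strictly
// in one direction (all rising or all falling).
function check7T(recentValues) {
  if (recentValues.length < 7) return false;
  const last7 = recentValues.slice(-7);
  const rising = last7.every((v, i) => i === 0 || v > last7[i - 1]);
  const falling = last7.every((v, i) => i === 0 || v < last7[i - 1]);
  return rising || falling;
}
```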

Westgard in error

If a Westgard error has been triggered (note that some labs do not trigger errors for all of the above; e.g. most will give a warning on 1:2S, or perhaps not report it at all), it means that the testing process itself needs a check by the lab. Therefore, any uploaded run having the same mix and control type as a prior control that received an error (and that has not been 'resolved' by the user using manage results) will 'inherit' the error, and its results will not export.

This is important to avoid user error and especially in cases where the lab is automatically importing runs to PCR.AI and automatically receiving export reports.

In some cases, the laboratory may opt for this to be a warning and not an error.

Rule Mapping Architecture

Understanding How Codes Work and Architecture
The Magic Behind Rule Mapping Execution
  1. Client Configuration - Rule Mapping
  2. Decide Whether an Observation [Should / Should Not] Execute Through a Rule
  3. Map Configured Rule Name with a Php Class
1. Client Configuration for Rule Mapping

Rule Mapping is exposed to the user to configure through the application. The user can define which Rule should trigger for which Target + Role + Specimen combination of Observations.

Properties of Rule:

  • Programmatic Rule Name
  • Precedence Order
  • Type
  • Is Allowed Error Wells

Properties of Rule Mappings:

  • Rule
  • Role
  • Target
  • Specimen (optional)
2. Decide Whether an Observation [Should / Should Not] Execute Through a Rule

An Observation has the following properties, which are used to map it against a rule:

  • Role
  • Target
  • Specimen

Main Condition:

The most basic condition used to trigger the execution of a rule against an observation is matching the above-mentioned properties of the Observation and the Rule.

Example 1

Given

Observation A
Target: Target A
Role: Role A
Specimen: Specimen A

Observation B
Target: Target B
Role: Role B
Specimen: Specimen B

Rule A
Target: Target A
Role: Role A
Specimen: Specimen A

Rule B
Target: Target B
Role: Role B
Specimen: Specimen B

Then

Observation A will trigger Rule A but not Rule B
Observation B will trigger Rule B but not Rule A

Additional Conditions:

  • A Well which has errors does not trigger any rules, unless
    • the Error does not prevent analysis, or
    • the Rule accepts error wells
  • A Well which has the 'SKIP' resolution code does not trigger any rules, unless
    • the Rule type is 'Reanalysis'
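The mapping decision and the additional conditions can be sketched together (property names are illustrative):

```javascript
// An observation triggers a rule when role and target match and,
// if the mapping names a specimen, the specimen matches too.
// Error and SKIP handling follow the conditions above.
function shouldExecuteRule(observation, mapping, well) {
  if (mapping.role !== observation.role) return false;
  if (mapping.target !== observation.target) return false;
  if (mapping.specimen && mapping.specimen !== observation.specimen) return false;
  if (well.hasPreventingError && !mapping.rule.allowsErrorWells) return false;
  if (well.resolutionCode === 'SKIP' && mapping.rule.type !== 'Reanalysis') return false;
  return true;
}
```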
3. Map Configured Rule Name with a Php Class

Once the decision is made that the observation should be executed through a particular rule, the application finds the related PHP class that represents the rule. The observation and the other required properties are then passed to the rule class and executed.

How the related PHP class is identified for a particular Rule:

  • A Rule configured through the Kit Configuration has a property called programmatic_rule_name.
  • A list of so-called Programmatic Rules is written as PHP classes under the namespace App\Analyzer\Rules (inside the folder structure app/Analyzer/Rules/).

The analyzer converts the rule's programmatic_rule_name to upper camel case, prefixes it with the rule namespace, suffixes it with 'Rule', and tries to find a matching class in the Programmatic Rules list for the fully qualified class name.

$analyzerRuleName = 'App\Analyzer\Rules\'.Str::studly(Str::title($this->programmatic_rule_name)).'Rule';

Conversion Example

| Configuration Rule - Programmatic Rule Name | Fully Qualified Php Class Name |
| --- | --- |
| THRESHOLD | App\Analyzer\Rules\ThresholdRule |
| WFINALCLS | App\Analyzer\Rules\WfinalclsRule |
| WG14S | App\Analyzer\Rules\Wg14SRule |
| PICQUAL_SERUM | App\Analyzer\Rules\PicqualSerumRule |
| AMB | App\Analyzer\Rules\AmbRule |

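For illustration, the same conversion can be reimplemented in JavaScript (this mimics Laravel's Str::title followed by Str::studly; it is a sketch, not the production code):

```javascript
// Convert a programmatic rule name to its fully qualified class name:
// title-case (a letter after any non-letter starts a new word), then
// studly-case on '_'/'-' separators, then add namespace and suffix.
function toAnalyzerRuleClass(programmaticRuleName) {
  // Title-case: uppercase any letter that follows a non-letter
  // (or starts the string), lowercase everything else.
  const titled = programmaticRuleName.toLowerCase()
    .replace(/(^|[^a-z])([a-z])/g, (m, pre, ch) => pre + ch.toUpperCase());
  // Studly-case: capitalize after separators and drop them.
  const studly = titled
    .split(/[-_\s]+/)
    .map(w => w.charAt(0).toUpperCase() + w.slice(1))
    .join('');
  return 'App\\Analyzer\\Rules\\' + studly + 'Rule';
}
```

Applied to the table above, 'WG14S' yields App\Analyzer\Rules\Wg14SRule because the 'S' after the digits starts a new title-cased word.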
Convert to standard JSON format

After the external Parser API returns the raw data object, the system converts it to a standardized JSON structure for internal processing.

Target JSON Schema:

{
  "run_metadata": {
    "filename": "string",
    "import_id": "uuid",
    "thermocycler_model": "string",
    "thermocycler_instrument": "string",
    "run_date": "datetime",
    "operator": "string",
    "protocol": "string"
  },
  "wells": [
    {
      "position": "A1",
      "sample_label": "string",
      "sample_id": "string",
      "observations": [
        {
          "target": "string",
          "dye": "string",
          "ct": "number|null",
          "quantity": "number|null",
          "fluorescence_curve": [/* array of RFU values per cycle */],
          "baseline_start": "number",
          "baseline_end": "number",
          "threshold": "number"
        }
      ]
    }
  ]
}

Conversion Steps:

  1. Map Parser API response fields to internal schema
  2. Normalize well positions to standard format (A1-H12 for 96-well)
  3. Parse fluorescence curve data into numeric arrays
  4. Extract thermocycler metadata from file headers
  5. Generate unique import ID for tracking
  6. Serialize to JSON file in processing folder

Related Requirement: REQ-FILEIMPORT-002


Analyse Run File

After JSON conversion, the system performs analysis through the DXAI Calibrator API and internal rules engine.

Analysis Pipeline:

  1. Mix Matching

    • Match well targets/dyes to configured mixes
    • Create "Unknown" mix entries for unmatched combinations
    • Assign role based on Control Labels configuration
  2. DXAI Calibrator Integration

    • Send observation fluorescence curves to DXAI Calibrator API
    • Receive classification parameters (DF, RFU analysis)
    • Store calibrator response for each observation
  3. Rules Execution

    • Execute rules in precedence order by type
    • For each observation, evaluate rule mappings (role + target + mix + specimen)
    • Skip wells with preventing errors unless rule allows error wells
    • Accumulate errors, warnings, and outcomes
  4. Outcome Determination

    • Apply Combined Outcomes rules based on CLS combinations
    • Determine final well status (Valid, Error, Warning)
    • Calculate quantities using QIR settings where applicable
  5. Westgard Evaluation

    • For control wells, evaluate against Westgard limits
    • Check historical controls for trend rules (2:2S, 7T)
    • Propagate Westgard errors to subsequent runs if unresolved

State Transitions:

  • CONVERTING → ANALYSING → COMPLETE (success)
  • CONVERTING → ANALYSING → ANALYSIS_ERROR (failure)

Related Requirements:


Store Run File

After successful analysis, run data is persisted to the database and file storage.

Database Storage (Aurora MySQL):

| Entity | Storage Location | Key Data |
| --- | --- | --- |
| Run | runs table | Metadata, status, timestamps, thermocycler info |
| Well | wells table | Position, sample info, role, final status |
| Observation | observations table | Target, CT, quantity, CLS, errors |
| Curve Data | fluorescence_curves table | Raw RFU values per cycle |
| Audit Log | audit_logs table | Import events, state changes |

File Storage (S3):

| Content | S3 Path | Purpose |
| --- | --- | --- |
| Original runfile | {site}/archived/{run_id}/{original_filename} | Audit trail, re-import |
| Parsed JSON | {site}/processed/{run_id}/parsed.json | Debugging, reprocessing |
| Problem files | {site}/problem_files/(unknown) | Failed imports for review |

Storage Workflow:

  1. Begin database transaction
  2. Insert run record with STORING status
  3. Insert wells and observations
  4. Store fluorescence curve data
  5. Upload original file to S3 archived folder
  6. Update run status to COMPLETE
  7. Commit transaction
  8. Remove file from processing folder

Error Handling:

  • Transaction rollback on any storage failure
  • File moved to problem_files with STORAGE_ERROR status
  • Notification sent to configured alert recipients

Related Requirement: REQ-FILEIMPORT-010


API functionality

The system exposes and consumes several APIs for internal processing and external integrations.

Internal APIs (Laravel/Vapor):

| Endpoint Category | Purpose | Auth |
| --- | --- | --- |
| /api/runs | Run CRUD, status queries, reanalysis triggers | Cognito JWT |
| /api/wells | Well data, observations, error management | Cognito JWT |
| /api/kit-config/* | Configuration CRUD (mixes, rules, controls, etc.) | Cognito JWT + Super Admin |
| /api/reports | Report generation, export triggers | Cognito JWT |
| /api/import | Manual upload handling, import status | Cognito JWT |
| /api/export | LIMS export, configuration export | Cognito JWT |

External API Integrations:

| Service | Purpose | Protocol |
| --- | --- | --- |
| Parser API | Convert thermocycler files (.sds, .pcrd, .eds, .ixo) to raw data | REST, API key auth |
| DXAI Calibrator API | Curve classification, DF calculation, observation parameters | REST, API key auth |
| Pusher | Real-time notifications (import progress, analysis status) | WebSocket |
| SendGrid | Email notifications (alerts, reports) | REST, API key |
| Sentry | Error tracking and session replay | SDK integration |

API Authentication:

  • User-facing APIs: AWS Cognito JWT tokens (regular + Super Admin pools)
  • External service APIs: API keys stored in Vapor secrets
  • Inter-service: IAM roles for AWS service-to-service calls

Related Requirements:


Dependencies

Runtime Dependencies:

| Category | Technology | Version | Purpose |
| --- | --- | --- | --- |
| Framework | Laravel | 9.x+ | Application framework |
| Deployment | Laravel Vapor | Latest | Serverless deployment to AWS Lambda |
| Database | Aurora MySQL | 8.0 | Primary data store |
| Cache | DynamoDB | N/A | Session storage |
| Storage | S3 | N/A | File storage (runfiles, exports, media) |
| Auth | AWS Cognito | N/A | User authentication and SSO |
| Queue | SQS | N/A | Async job processing |
| Frontend | Vue.js | 3.x | Client-side application |

External Service Dependencies:

| Service | Criticality | Fallback |
| --- | --- | --- |
| Parser API | Critical | Import fails, files queued in problem_files |
| DXAI Calibrator | Critical | Analysis fails, manual classification required |
| Pusher | Non-critical | Polling fallback for status updates |
| SendGrid | Non-critical | Notifications delayed, logged for retry |
| Sentry | Non-critical | Errors logged locally |

Infrastructure Dependencies:

| AWS Service | Purpose | Failure Impact |
| --- | --- | --- |
| Lambda | Compute | Application unavailable |
| Aurora | Database | Full outage |
| S3 | Storage | Import/export unavailable |
| Cognito | Auth | Login unavailable |
| CloudFront | CDN | Degraded performance |
| Route 53 | DNS | Domain resolution failure |
| SQS | Queuing | Async jobs delayed |

Related Architecture: See sdd-architecture.md for detailed AWS service rationale.


Requirements Traceability

The following SRS requirements are implemented by the design described in this document:

Rules Engine Framework

Requirement | Domain | Description | Relevance
REQ-RULES-ENGINE-001 | Rules Engine | Conditional Rule Execution Based on Mapping | Implements the rule mapping architecture section - determines which rules execute based on Dye, Target Name, Mix, and Sample Type
REQ-RULES-ENGINE-002 | Rules Engine | Sequential Rule Execution Order | Implements the rule precedence execution described in Running Rules Architecture
REQ-RULES-ENGINE-003 | Rules Engine | Error/Warning Flow Control | Implements error detection, notification, and "Does prevent Analyse" logic described in Rule Application section

Run Import Process

Requirement | Domain | Description | Relevance
REQ-UPLOAD-001 | Upload Runs | Accept Run File Uploads | Implements the run file upload initiation shown in Run Import use case diagram
REQ-UPLOAD-002 | Upload Runs | Validate Uploaded File Types | Implements file format validation (.sds, .ixo, .pcrd, .eds) described in Allowed File Types section
REQ-UPLOAD-005 | Upload Runs | Display Upload Errors | Implements error handling for duplicate files and parse errors described in Run Import
REQ-UPLOAD-007 | Upload Runs | Handle Bulk Uploads | Implements the valid/invalid file separation mechanism described in Run Import
REQ-FILEIMPORT-001 | File Import | Import Run Files from Monitored Folder | Implements the monitored folder processing (toPcrai) described in Run Import
REQ-FILEIMPORT-002 | File Import | Parse Thermocycler Data to Database Variables | Implements the Parser API integration and JSON conversion described in Parse Run File section
REQ-FILEIMPORT-003 | File Import | Analyze Run Data Using DXAI Analyser | Implements the Run Analysis integration with external analyser described in Run Analysis section
REQ-FILEIMPORT-010 | File Import | Manage Import Folder Structure | Implements toPcrai, Processing, Problem_Files folder structure described in Run Import
REQ-FILEIMPORT-011 | File Import | Prevent Duplicate File Imports | Implements duplicate detection at parsing stage described in Parse Run File section

Configuration Import/Export

Requirement | Domain | Description | Relevance
REQ-CONFIGIO-001 | Config I/O | Generate Import Status Reports | Implements the import result status (Imported/Ignored) described in Configuration Upload
REQ-CONFIGIO-003 | Config I/O | Import and Export Mixes and Targets Configuration | Implements mix/target configuration import described in Configuration Upload
REQ-CONFIGIO-009 | Config I/O | Import and Export Rules Configuration | Implements rules precedence normalization described in Rule Mapping Architecture
REQ-CONFIGIO-011 | Config I/O | Validate Westgard Limits on Import | Implements SD validation described in Westgard Rules section

Combined Outcomes

Requirement | Domain | Description | Relevance
REQ-RULES-COMBOUT-001 | Combined Outcomes | Evaluate Well Results Against Combined Outcome Configurations | Directly implements the Combined Outcomes Algorithm section - CLS combination matching and outcome determination
REQ-RULES-COMBOUT-002 | Combined Outcomes | Support Multi Mix Combined Outcomes | Implements cross-mix outcome evaluation described in Combined Outcomes

Westgard Quality Control

Requirement | Domain | Description | Relevance
REQ-RULES-WG-004 | Westgard | Set INVALID_SD Error for Invalid Standard Deviation | Implements SD validation logic described in Westgard section
REQ-RULES-WG-006 | Westgard | Trigger Westgard 1:2s Rule | Directly implements 1:2S rule algorithm with formula: delta = ABS(control.ct - mean); threshold = 2 * SD
REQ-RULES-WG-007 | Westgard | Trigger Westgard 1:3s Rule | Directly implements 1:3S rule algorithm with formula: threshold = 3 * SD
REQ-RULES-WG-009 | Westgard | Trigger Westgard 2:2s Rule | Directly implements 2:2S consecutive control rule described in Westgard Rules section
REQ-RULES-WG-011 | Westgard | Trigger Westgard 7T Rule | Directly implements 7T trend detection rule described in Westgard Rules section
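The 1:2S and 1:3S formulas cited above differ only in the multiplier. A minimal sketch of that shared shape (Python illustration; function and parameter names are ours, not the production implementation):

```python
def westgard_1ks(control_ct: float, mean: float, sd: float, k: int) -> bool:
    """Generic 1:kS check from the formulas above:
    delta = ABS(control.ct - mean); threshold = k * SD.
    Returns True when the rule is violated."""
    delta = abs(control_ct - mean)
    return delta > k * sd
```

With a control Ct of 26.5 against a mean of 24.0 and SD of 1.0, the 1:2S rule fires (delta 2.5 exceeds 2 SD) while the 1:3S rule does not.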

Kit Configuration

Requirement | Domain | Description | Relevance
REQ-KITCFG-024 | Kit Configuration | Manage Rule Mappings | Implements the Rule Mapping configuration that feeds into Rule Mapping Architecture section
REQ-KITCFG-025 | Kit Configuration | Manage Rule Settings | Implements "Is Allow Error Wells" and precedence configuration described in Rule Mapping

Data Validation and Analysis

Requirement | Domain | Description | Relevance
REQ-ANALYZER-001 | Analyzer | Configure Accession Validation Enforcement | Relates to the data validation step in Run Analysis
REQ-REANALYZE-003 | Reanalyze | Exclude Resolved Controls from Westgard Calculations | Implements "Westgard in error" resolution logic described in Westgard section
REQ-REANALYZE-008 | Reanalyze | Update Reanalysis Status for Westgard Series Changes | Implements reanalysis trigger logic related to Westgard control history

Error Codes

Requirement | Domain | Description | Relevance
REQ-ERRORCODES-001 | Error Codes | Generate Westgard Rule Error Codes | Implements error code generation for Westgard QC violations (WG12S, WG13S, WG22S, WG7T)
REQ-ERRORCODES-002 | Error Codes | Generate Control Check Error Codes | Implements BCC, BNC, BPEC error code generation for control validation
REQ-ERRORCODES-003 | Error Codes | Generate Inhibition Error Codes | Implements INH, IC_FAILED error code generation for inhibition detection
REQ-ERRORCODES-010 | Error Codes | Generate Parsing Validation Blocking Error Codes | Implements blocking errors at parsing stage (INVALID_PASSIVE_READINGS, THERMOCYCLER_UNKNOWN)
REQ-ERRORCODES-011 | Error Codes | Generate Analysis Validation Blocking Error Codes | Implements blocking errors at analysis stage (SAMPLE_LABEL_IS_BAD, UNKNOWN_MIX)

Data Flow Diagrams

System Data Flow Overview

The PCR.AI system processes data through a unified pipeline from instrument files to LIMS export.

Level 0 - Context Diagram:

Level 1 - Major Processes:

Data Entry Points

Entry Point | Data Type | Processing Destination
S3 toPcrai folder | Run files (.sds, .ixo, .pcrd, .eds) | Parse → Analyze pipeline
Browser upload | Run files | Uploaded to S3 → same pipeline
Configuration upload | Excel (.xlsx) | Kit Configuration import
API (optional) | S3 API / AWS Transfer | Direct to toPcrai folder

Design Note: All run file data enters through the S3 toPcrai folder regardless of upload method. This unified entry point simplifies processing logic and audit tracking.

Data Exit Points

Exit Point | Data Format | Trigger
LIMS Export folder | Client-configured format | Manual export or auto-export on run complete
Browser download | Excel/CSV | User-initiated export
Email notifications | HTML email with links | Alert thresholds, report ready
Auto reports | PDF/Excel | Scheduled trends reports

Export Destination Configuration: Client environment setting determines whether LIMS exports go to S3 folder (for client sync) or browser download.

Critical Data Transformation Points

Stage | Input | Output | Component
Parse | Thermocycler file | Standardized JSON | Parser API (external)
Mix Match | JSON wells | Well records with mix assignment | Laravel job
Calibrate | Fluorescence curves | Classification parameters | DXAI Calibrator API
Rules | Observations | Outcomes, errors, warnings | Rules Engine
Westgard | Control observations | QC violations | Westgard rules
Status | Well states | Run status | Status calculator

Control Flow Diagram

Lambda/Queue Orchestration

PCR.AI uses Laravel Vapor's standard patterns for serverless deployment. There is no custom Lambda orchestration; all asynchronous processing uses Laravel's queue system.

Invocation Patterns:

Pattern | Mechanism | Use Cases
Synchronous | API Gateway → Web Lambda | User requests, form submissions, data queries
Asynchronous | SQS → Queue Lambda | File parsing, analysis, exports, notifications
Scheduled | CloudWatch → Queue Lambda | Alert evaluation, auto-reports
Chained | Job dispatches next job | Parse → Analyze pipeline

Design Note: There is no direct Lambda-to-Lambda invocation. All inter-process communication uses Laravel's standard job dispatching to SQS.


Process Priority Table

Queue Configuration

The system uses a single SQS queue managed by Laravel Vapor with the following configuration:

Property | Value | Description
Queue Type | Standard | Not FIFO; allows parallel processing
Visibility Timeout | 900 seconds | 15 minutes for long-running jobs
Concurrency | 200 | Maximum concurrent queue workers
Processing Order | Best-effort FIFO | Standard SQS ordering is approximate, not guaranteed

Job Priority and Processing

All jobs share the same queue. Processing priority is implicit, following dispatch order (best-effort FIFO on the Standard queue).

Job Type | Typical Duration | Dispatch Trigger
ParseRunFileJob | 10-30 seconds | S3 upload event
AnalyzeRunJob | 30-120 seconds | Parse completion (chained)
ExportToLimsJob | 5-30 seconds | User action or auto-export
SendNotificationJob | 1-5 seconds | Various events
GenerateReportJob | 10-60 seconds | User action or schedule

Queue Configuration (from vapor.yml):

queue-timeout: 900      # 15 minutes max job duration
queue-concurrency: 200 # Max parallel workers

Design Notes:

  • No priority queues implemented; all jobs processed in dispatch order
  • Long-running jobs (analysis) may delay short jobs (notifications) during high load
  • Concurrency of 200 provides sufficient parallelism for typical workloads
  • Jobs exceeding 900 seconds timeout are failed and logged

Summary

This SDD document provides design-level detail for:

  • 34 SRS requirements across 9 domains
  • Core algorithmic implementations for rules execution, Westgard QC, and combined outcomes
  • Error code generation system with taxonomy, blocking logic, and propagation rules
  • File import pipeline from upload through analysis
  • Configuration import/export validation and processing
  • Data flow diagrams showing entry points, transformations, and exit points
  • Control flow diagrams for Lambda/SQS orchestration
  • Process priority and queue configuration
  • Real-time event broadcasting via Pusher

Real-Time Event Broadcasting (Pusher)

Overview

PCR.AI uses Pusher for real-time event broadcasting to connected clients. Events are broadcast when significant state changes occur, enabling live UI updates without polling.

Broadcast Channels

Channel | Scope | Description
Run.Analyse | Public | Run analysis lifecycle events
Run.Create | Public | Run file import completion
Run.Calibrate | Public | Assay calibration lifecycle
run-export | Public | LIMS export events
run-file-import | Public | File import progress
control-label-updated | Public | Control label configuration changes
notifications | Public | Notification read state
App.User.{userId} | Private | User-specific events (auth, session)

Event Classes and Triggers

Run Lifecycle Events (Run.Analyse channel)

Event Class | Trigger | Payload
RunAnalyzeStarted | Analysis job dispatched | runId
RunAnalyseProgressUpdated | Analysis progress checkpoint | runId, percentage
RunUpdated | Analysis complete | runId
RunUpdateFailed | Analysis failed | runId, error details

Run Import Events

Event Class | Channel | Trigger
RunCreateCompletedBroadcast | Run.Create | Run file successfully imported
RunFileImportProgressBroadcast | run-file-import | Import progress update

Export Events

Event Class | Channel | Trigger
RunExportBroadcast | run-export | LIMS export initiated/completed

Calibration Events (Run.Calibrate channel)

Event Class | Trigger
AssayCalibrationStarted | Calibration job started
AssayCalibrationCompleted | Calibration finished

User Events (App.User.{userId} private channel)

Event Class | Trigger
UserLoggedIn | Successful authentication
OtherDeviceLogout | Session invalidated on other device
UserAccountDisabled | Admin disabled account
UserAccountDeleted | Account deleted

Configuration Events

Event Class | Channel | Trigger
ControlLabelUpdateBroadcast | control-label-updated | Control label config saved
NotificationRead | notifications | User marked notification as read

Broadcast Types

  • ShouldBroadcast: Queued broadcast (runs via queue worker)
  • ShouldBroadcastNow: Immediate broadcast (synchronous, used for progress updates)

Implementation

Component | Location
Run Analysis Events | app/Events/RunAnalyze*.php, app/Events/RunUpdated.php
Import Events | app/Events/RunCreateCompletedBroadcast.php, app/Events/RunFileImportProgressBroadcast.php
Export Events | app/Events/RunExportBroadcast.php
User Events | app/Events/UserLoggedIn.php, app/Events/OtherDeviceLogout.php, app/Events/UserAccount*.php
Calibration Events | app/Events/AssayCalibration*.php
Channel Authorization | routes/channels.php

Design Notes

  • Progress events use ShouldBroadcastNow for real-time feedback (no queue delay)
  • User-specific events use private channels requiring authentication
  • Public channels do not contain sensitive data
  • Event payloads are minimal; clients fetch full data via API after receiving notification

Asynchronous Job Processing

Overview

PCR.AI uses Laravel's queue system for asynchronous processing. Jobs handle long-running operations to maintain a responsive user experience.

Queue Configuration

Property | Value | Description
Default Driver | sync | Environment variable QUEUE_CONNECTION
Queue Table | jobs | For database driver
Failed Jobs Table | failed_jobs | Failed job storage
Retry After | 90 seconds | Database driver timeout

Scheduled Jobs

Command | Schedule | Purpose
trends-report-alerts:schedule-check | Every minute | Evaluate alert thresholds

Job Classes Inventory

Job Class | Trigger | Purpose | Timeout | Retry
RunDataParseJob | DispatchNextFileToParsingAction | Parse run data from file, dispatch next file | 900s | 400 attempts
RunStoreJob | RunStoreRequestHandler | Create new run with wells from cached data | default | default
RunAnalyseJob | (Orphaned - not currently dispatched) | Update run with analyzed data | default | 3s release
RunUpdateJob | RunUpdateRequestHandler | Update run with analyzed well data from cache | default | default
ResolveRunJob | ResolveRunRequestHandler | Apply resolution codes to wells | default | default
ResolutionConfirmJob | ResolutionConfirmationRequestHandler | Apply confirmed resolution codes | default | default
RunResultExportJob | RunResultExportsController | Export run results to cloud storage | default | default
RunDeleteJob | DeleteRunsAction | Delete all runs for a site | default | default
AuditExportJob | AuditExportsController | Export audit logs to cloud storage | 900s | 1 attempt
ConfigDataImportJob | ConfigDataController | Import kit configuration from Excel | default | default
AssayCalibrationGetResultJob | AssayCalibrateableRunsController | Retrieve calibration results | default | default
NormalizeWithDefaultExtractionSettingsJob | UpdateClientConfigurationAction | Normalize extraction settings after config update | default | default
NormaliseDataForControlLabelUpdateJob | ControlLabelsController | Update control labels across site | default | default
UpdateDailyOutcomeTableJob | Multiple actions | Update daily outcomes table for trends reporting | default | default

Processing Patterns

Async Threshold Decision:

  • Operations with ≤100 wells or ≤50 future runs: Processed synchronously
  • Larger operations: Queued for async processing
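The threshold rule above can be expressed as a small predicate. This sketch assumes an operation runs synchronously only when both limits are within bounds (the prose says "≤100 wells or ≤50 future runs"; the exact combination logic is ours, and all names are illustrative):

```python
WELL_SYNC_LIMIT = 100        # operations up to 100 wells may run synchronously
FUTURE_RUN_SYNC_LIMIT = 50   # operations touching up to 50 future runs may run synchronously

def should_queue(well_count: int, future_run_count: int) -> bool:
    """Queue for async processing when either limit is exceeded."""
    return well_count > WELL_SYNC_LIMIT or future_run_count > FUTURE_RUN_SYNC_LIMIT
```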

Data Persistence Strategy:

  • Temporary run data stored in cache (run_jsons_to_queue)
  • UUID identifiers used to reference cached data
  • Jobs retrieve data via cache lookup
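The strategy above amounts to: write the run JSON to cache under a fresh UUID, pass only the key to the job, and look the payload back up when the job runs. A self-contained sketch using an in-memory dict as a stand-in for the Laravel cache (the run_jsons_to_queue prefix comes from the text; everything else is illustrative):

```python
import json
import uuid

cache: dict[str, str] = {}  # stand-in for the application cache

def stash_run_json(run_data: dict) -> str:
    """Store run data under run_jsons_to_queue:<uuid> and return the cache key."""
    key = f"run_jsons_to_queue:{uuid.uuid4()}"
    cache[key] = json.dumps(run_data)
    return key

def handle_job(key: str) -> dict:
    """Job side: retrieve and decode the cached payload by its UUID key."""
    return json.loads(cache.pop(key))
```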

Event Broadcasting:

  • Real-time UI updates via Laravel Broadcasting
  • Events: RunCreateCompletedBroadcast, RunExportBroadcast, ControlLabelUpdateBroadcast

Failure Handling

Job | Failure Behavior
RunDataParseJob | Aggressive retry (400 attempts)
AuditExportJob | No retry; logs via failed() callback
Most jobs | DB transaction rollback; exception re-thrown

Implementation

Component | Location
Job Classes | app/Jobs/*.php
Queue Configuration | config/queue.php
Schedule Definition | app/Console/Kernel.php

Daily Outcomes Materialized Cache

The Trends Report relies on the daily_outcomes table, which stores pre-aggregated data as a materialized cache to improve query performance.

Purpose

This table serves as a materialized cache that aggregates well-level data to avoid expensive real-time queries on the large wells table when generating trends reports.

Data Population

The table is populated from well data with the following logic:

  • Source Data: Aggregates data from the wells table
  • Date Handling:
    • Extraction dates are converted to lab timezone and stored as date-only (time component removed)
    • Non-extraction controls (e.g., NTC, PCR controls) inherit the date from their run
    • This ensures all wells in a run are grouped by the same date in the lab's timezone
  • Aggregation: Groups wells by date, site, thermocycler, mix, outcome type, and role alias
  • Count: Stores the number of wells matching each unique combination
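The population logic above reduces to a group-and-count over well rows. A sketch with illustrative field names (the real job operates on the wells table):

```python
from collections import Counter

def aggregate_daily_outcomes(wells: list[dict]) -> dict[tuple, int]:
    """Group wells by the dimensions listed above and count each combination."""
    counts: Counter = Counter()
    for well in wells:
        key = (
            well["date"],            # already converted to lab timezone, date-only
            well["site_id"],
            well["thermocycler_id"],
            well["mix_id"],
            well["outcome_id"],      # None for "Control Passed"
            well["role_alias"],
        )
        counts[key] += 1
    return dict(counts)
```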

Table Schema

Column | Type | Description
date | DATE | The date of the outcome (lab timezone, date-only)
site_id | FK | Laboratory site identifier
thermocycler_id | FK | Thermocycler identifier
mix_id | FK | Mix identifier
outcome_id | FK (nullable) | Error code or LIMS status ID (NULL for "Control Passed")
role_alias | VARCHAR | Sample type (Patient, QC, etc.)
count | INT | Number of wells with this outcome
is_crossover | BOOLEAN | Whether this is a crossover control

Update Strategy

Operation | Component | Trigger
Initial Population | PopulateDailyOutcomesTableAction | Migration or manual rebuild
Incremental Updates | UpdateDailyOutcomesTableAction | Run file processed
Background Job | UpdateDailyOutcomeTableJob | Dispatched after each run upload
Full Regeneration | Migration | Data correction required

Performance Characteristics

  • Query Speed: Trends report queries execute in milliseconds instead of seconds/minutes
  • Storage Trade-off: Uses additional storage space for pre-computed aggregates
  • Synchronization: Automatically synchronized with run data via job dispatching

Query Algorithm

The report query follows this algorithm:

  1. Join daily_outcomes with sites, mixes, thermocyclers, and outcome tables
  2. Filter by selected parameters:
    • role_alias (Patient or QC)
    • thermocycler_ids
    • site_ids (when multiple sites enabled)
    • mix_ids
    • outcomes
    • date range
  3. Group by selected dimensions (site, date interval, thermocycler, mix, outcome)
  4. Calculate aggregates:
    • well_count: Sum of counts for each group
    • percentage: Percentage within each partition (mix/site/thermocycler/date)
  5. Order results by date
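Step 4's percentage is each group's count divided by its partition total (mix/site/thermocycler/date). A sketch over pre-aggregated rows (field names are illustrative; production does this in SQL):

```python
def add_percentages(rows: list[dict]) -> list[dict]:
    """Compute each row's share of its (site, date, thermocycler, mix) partition."""
    totals: dict[tuple, int] = {}
    for row in rows:
        part = (row["site_id"], row["date"], row["thermocycler_id"], row["mix_id"])
        totals[part] = totals.get(part, 0) + row["well_count"]
    for row in rows:
        part = (row["site_id"], row["date"], row["thermocycler_id"], row["mix_id"])
        row["percentage"] = 100.0 * row["well_count"] / totals[part]
    return rows
```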

Aggregation Options ("Compare By")

When aggregation is enabled, data is combined:

Option | Effect
Mixes | Combine all selected mixes into a single trend line
Thermocyclers | Combine all selected thermocyclers into a single trend line
Outcomes | Combine all selected outcomes into a single trend line

Multiple aggregations can be enabled simultaneously.

Alert Types

Threshold Type | Trigger Condition
Count-Based | Well count exceeds configured numeric threshold
Percentage-Based | Outcome percentage exceeds configured percentage value
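Both threshold types compare an observed value against a configured limit; only the value being compared differs. A sketch (names and the percentage-partition handling are illustrative):

```python
def alert_triggered(kind: str, observed_count: int, partition_total: int,
                    threshold: float) -> bool:
    """Count-based: compare the raw well count against the threshold.
    Percentage-based: compare the outcome's share of its partition."""
    if kind == "count":
        return observed_count > threshold
    if kind == "percentage":
        if partition_total == 0:
            return False  # no wells, nothing to alert on
        return 100.0 * observed_count / partition_total > threshold
    raise ValueError(f"unknown threshold type: {kind}")
```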

Alert Trigger Mechanisms

Trigger | Component | Schedule
Immediate | NotifyTrendAlertsAction | On each run processed
Periodic | NotifyPeriodicTrendAlertsAction | Via TrendsReportAlertsScheduleCheck command

Periodic alerts support:

  • Daily, weekly, and monthly schedules
  • Specific day and time configuration based on site timezone

Feature Import/Cascade System

Overview

The Feature Import System validates and processes feature configuration data imported via Excel files. It enforces data integrity, dependency validation, and cascade handling.

Architecture

FeaturesImportSheet (Excel Import Orchestration)

FeatureImportOrchestrator (Process Coordination)

├── FeatureValidator (Validation Logic) ──┐
├── FeatureUpdateService (Business Logic) │
└── FeatureCascadeService (Cascade Logic) │

FeatureParentService ──────────┘
(Shared Parent Chain Logic)

ToggleFeatureAction (Feature Toggle UI)

Component Responsibilities

Component | Location | Responsibility
FeaturesImportSheet | app/Imports/Sheets/ | Excel import orchestration
FeatureImportOrchestrator | app/Imports/Sheets/Support/Features/ | Process coordination
FeatureUpdateService | app/Imports/Sheets/Support/Features/ | Update business logic
FeatureCascadeService | app/Features/ | Shared cascade dependency logic
FeatureParentService | app/Features/ | Shared parent chain logic
FeatureValidator | app/Imports/Sheets/Support/Features/Validators/ | Validation orchestration

Validation Rules

Feature Code Validation

Rule | Description
Required | Feature code cannot be empty
Type | Must be string
Length | Maximum 255 characters
Existence | Feature must exist in system
Restrictions | use_multiple_sites, preserve_s3_structure_for_first_site cannot be modified via import

Feature Value Validation

Rule | Description
Boolean only | is_enabled field must be valid boolean
Supported formats | true/false, 1/0, yes/no, TRUE/FALSE
Null handling | Null values treated as false
Consistency | Uses BooleanCaster for uniform handling
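The accepted formats above suggest a caster along these lines (a Python sketch; the real BooleanCaster is a PHP class in the codebase):

```python
_TRUE = {"true", "1", "yes"}
_FALSE = {"false", "0", "no"}

def cast_feature_value(value) -> bool:
    """Normalize an imported is_enabled cell; null is treated as false."""
    if value is None:
        return False
    text = str(value).strip().lower()  # handles TRUE/FALSE and numeric cells
    if text in _TRUE:
        return True
    if text in _FALSE:
        return False
    raise ValueError("Feature value must be a valid boolean (true/false, 1/0, yes/no)")
```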

Parent Dependency Validation

The system implements recursive parent chain validation to ensure dependency integrity:

  • Only validates when ENABLING a feature (not when disabling)
  • Checks entire parent chain recursively up to root
  • Import-first logic: Checks parent status from import rows before database
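These three properties can be sketched as a walk up a child-to-parent lookup, consulting import rows before the database at each step (the data structures and names here are illustrative, not the PHP implementation):

```python
def parent_chain_enabled(feature: str, parents: dict[str, str],
                         import_rows: dict[str, bool],
                         db_state: dict[str, bool]) -> bool:
    """Walk the parent chain to the root; import rows take precedence over the DB."""
    parent = parents.get(feature)
    while parent is not None:
        enabled = import_rows.get(parent, db_state.get(parent, False))
        if not enabled:
            return False
        parent = parents.get(parent)
    return True

def validate_enable(feature: str, parents, import_rows, db_state) -> None:
    """Only enabling requires the check; disabling always passes."""
    if import_rows.get(feature) and not parent_chain_enabled(
            feature, parents, import_rows, db_state):
        raise ValueError(f"Cannot enable '{feature}' because a parent feature is disabled")
```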

Example Dependency Chain:

trends_report (root)

trends_report_builder (parent)

trends_report_ai_assistant (child)

Validation Scenarios:

Scenario | Import Data | Result
Enable parent and child together | trends_report=true, trends_report_builder=true | ✅ Success
Disable any feature | trends_report_ai_assistant=false | ✅ Success (no parent check)
Enable child with disabled parent | trends_report=false, trends_report_ai_assistant=true | ❌ Error
Enable child with disabled grandparent | trends_report=false, trends_report_builder=true | ❌ Error

Cascade Disable Pattern

The system implements a one-way cascade disable pattern:

Action | Result
Disable Parent | Auto-disable all children (recursive)
Enable Parent | Children remain unchanged (manual control)

Cascade Example:

Before: trends_report = enabled
trends_report_builder = enabled
trends_report_ai_assistant = enabled

Action: trends_report = disabled

Result: trends_report_builder = disabled (cascaded)
trends_report_ai_assistant = disabled (recursively cascaded)

Dependency Configuration

Dependencies defined in AffectedFeatures::AFFECTED_FEATURES_WHEN_UPDATING:

[
    'trends_report' => [
        'trends_report_aggregate',
        'trends_report_alerts',
        'trends_report_builder',
    ],
    'trends_report_builder' => [
        'trends_report_ai_assistant',
    ],
]
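Given that dependency map, the one-way cascade disable is a short recursion: disabling a feature disables every dependent feature, while enabling cascades nothing. A Python sketch mirroring the configuration above (the real logic lives in FeatureCascadeService):

```python
# Mirror of AffectedFeatures::AFFECTED_FEATURES_WHEN_UPDATING
AFFECTED_FEATURES = {
    "trends_report": ["trends_report_aggregate", "trends_report_alerts",
                      "trends_report_builder"],
    "trends_report_builder": ["trends_report_ai_assistant"],
}

def cascade_disable(feature: str, state: dict[str, bool]) -> None:
    """Disable a feature and, recursively, every feature that depends on it.
    Enabling deliberately has no cascade: children stay under manual control."""
    state[feature] = False
    for child in AFFECTED_FEATURES.get(feature, []):
        cascade_disable(child, state)
```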

Import Process Flow

1. EXCEL FILE PROCESSING
Read Excel file with headers
Process in chunks of 1000 rows
Map each row to collection with status tracking


2. VALIDATION PHASE
Feature Code Validation → Existence, restrictions, format
Feature Value Validation → Boolean format validation
Parent Dependency Validation → Recursive chain validation


3. UPDATE PHASE
Modification Check → Only update if value changed
Feature Update → Set new is_enabled value
Cascade Processing → Handle dependent feature changes


4. STATUS REPORTING
Success → "Feature: imported"
Validation Error → "Feature: ignored: [error details]"
Dependency Error → "Feature: ignored: Cannot enable '[child]' because parent feature '[parent]' is disabled"

Error Types

Category | Example Message
Feature Code | "Feature code is required", "Feature does not exist in the system"
Feature Value | "Feature value must be a valid boolean (true/false, 1/0, yes/no)"
Parent Dependency | "Cannot enable '[child]' because parent feature '[parent]' is disabled"
Restriction | "Feature is restricted and cannot be modified through import"

Performance Considerations

  • Chunk Processing: Handles large files via 1000-row chunks
  • Efficient Lookups: Uses collection methods for feature matching
  • Minimal Database Queries: Loads features once per chunk
  • Recursive Optimization: Cascade only when necessary (disabled features)

Role Alias to Role Workflow

Overview

The Role Alias to Role feature creates a new Role from an existing control-label alias and migrates all dependent quality-control configuration. This workflow allows laboratories to promote an alias already in use on control labels into a fully-fledged role without manually recreating every dependent mapping.

Entry Points

Component | Location
API Route | POST /api/role-alias-to-role
Controller | RoleAliasToRoleController
Action | MapRoleAliasToNewRoleAction::execute()

Request Parameters

Parameter | Description
role_alias | Alias shared by existing control labels
role_name | Target name for the new role
role_id | Identifier of currently mapped role that owns the alias

Authentication required; executing user provides site context via getLoggedInSiteId().

Transactional Workflow

MapRoleAliasToNewRoleAction runs inside a single database transaction to ensure atomicity:

1. GATHER SOURCE DATA
Load all control labels matching alias and original role
Include source role's metadata (type, extraction flag, resolution priority)
Fetch associated rule mappings and combined outcome records scoped to user's site


2. CREATE TARGET ROLE
Clone key fields from source role
Use authenticated user's site
Preserve type, has_extraction, resolution_priority


3. RELINK CONTROL ECOSYSTEM
Update matched control labels → reference new role
Update wells → reference new role
Restore soft-deleted Westgard limits for alias
Reassign control range settings tied to affected targets


4. DUPLICATE RULE CONFIGURATION
Insert copies of matching RuleMapping records with new UUIDs
Keep original created/updated timestamps
Remove original mappings once ALL aliases tied to old role migrated


5. CLONE COMBINED OUTCOMES
Create new outcome records per original
Create new mix-result and target-result rows
Suffix names and codes with <new-role-name>-clone
Set new created_at/updated_at timestamps


6. COMMIT OR ROLLBACK
Success → DB commit
Any exception → Transaction rollback, original config untouched

Data Dependencies

Entity | Purpose
ControlLabel | Source of alias, provides mix_id for target scoping
Well | Updates ensure historical well data references new role
RuleMapping / Target | Copied to retain rule behavior for mixes tied to alias
WestgardLimit / ControlRangeSetting | QC thresholds and per-target ranges reactivated for new role
OutcomeToLimsStatusMapping | Duplicated to maintain combined outcome logic

Combined Outcome Duplication Details

Property | Handling
UUID | New ordered UUID generated
Code | Suffixed with -<role_name>-clone
Name | Suffixed with -<role_name>-clone
Mix/Target Results | Cloned with fresh identifiers
Behavioral flags | Preserved from original
Timestamps | Set to current time during duplication
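The duplication rules in this table can be sketched as a pure function over an outcome record (Python illustration; uuid4 stands in for the ordered UUID, and field names are assumptions):

```python
import uuid
from datetime import datetime, timezone

def clone_combined_outcome(original: dict, new_role_name: str) -> dict:
    """Copy an outcome record, re-keying it and suffixing code/name per the table."""
    suffix = f"-{new_role_name}-clone"
    clone = dict(original)               # behavioral flags preserved as-is
    clone["uuid"] = str(uuid.uuid4())    # production generates an ordered UUID
    clone["code"] = original["code"] + suffix
    clone["name"] = original["name"] + suffix
    now = datetime.now(timezone.utc)     # fresh timestamps for the clone
    clone["created_at"] = clone["updated_at"] = now
    return clone
```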

Role Cleanup Logic

deleteOriginalRuleMappings() runs only after all control labels linked to the original role have been remapped. This prevents accidental loss of rules while aliases remain.

Implementation

Component | Location
Controller | app/Http/Controllers/RoleAliasToRoleController.php
Action | app/Actions/ControlLabels/MapRoleAliasToNewRoleAction.php
Tests | tests/Feature/ControlLabels/RoleAliasToRoleTest.php

Requirement | Domain | Description | Relevance
REQ-FILEIMPORT-001 | File Import | Import Run Files from Monitored Folder | RunDataParseJob, RunStoreJob
REQ-AUDIT-003 | Audit Log | Export Audit Data | AuditExportJob
REQ-CONFIGIO-001 | Config I/O | Generate Import Status Reports | ConfigDataImportJob

Developer Documentation Cross-References

The following developer-focused documentation in code/docs/ provides additional implementation details beyond this SDD:

Code Docs File | SDD Coverage | Developer Content
features/run-analyzer-system.md | Partial | CreateRunAction 10-step workflow, UpdateRunAction details, Normalizer architecture
features/trends-report.md | Added | Additional filter logic, API endpoints, chart visualization
features/outcomes-report.md | Not in SDD | Well query algorithm, pagination strategy, export streaming patterns
testing-guidelines.md | Not in SDD | Testing strategy breakdown (Unit/Feature/E2E/Integration)
laravel-boost-guidelines.md | Not in SDD | Laravel coding standards and conventions

Note: The code/docs layer serves as a "living developer guide" with implementation-specific details, while this SDD provides IEEE 1016-1998 compliant design specifications.