Version: 3.0.1

SRS Confluence Conversion Guide

1. Overview

This guide documents the end-to-end process for converting Confluence requirement exports into IEEE-style SRS (Software Requirements Specification) documents. The conversion pipeline transforms messy, format-inconsistent exports into structured, testable, UI-stable requirements documentation.

When This Applies

  • New features documented in Confluence that need SRS representation
  • Partially-converted domains from the 3.0.0 conversion that need refinement
  • Legacy imports where historical Confluence content must be brought into the canonical format

The Pipeline

The full conversion follows six stages:

Confluence Export --> Pandoc Conversion --> Source Analysis --> Classification --> Transformation --> Validation
  1. Confluence Export -- Export page tree from Confluence as Word (.docx)
  2. Pandoc Conversion -- Convert .docx to AsciiDoc (.adoc) with image extraction
  3. Source Analysis -- Detect format, inventory content, count requirements
  4. Classification -- Assign each item to exactly one taxonomy category
  5. Transformation -- Convert classified items into canonical SRS structure
  6. Validation -- Run four levels of automated checks, integrate into Docusaurus

This guide focuses on the conversion pipeline -- the process of taking raw Confluence exports and producing canonical SRS files. It does not duplicate taxonomy or format content:

  • SRS Authoring Guide -- Classification taxonomy (Section 2), document format (Sections 3-4), operations for adding/modifying/deprecating requirements (Section 5)
  • Contributor Guide -- Docusaurus mechanics, sidebar configuration, linking, validation commands

2. Exporting from Confluence

Before the classification and transformation pipeline can begin, the raw Confluence content must be exported and converted into a machine-readable format.

2.1 Export from Confluence as Word

  1. Navigate to the Confluence space containing the requirements page tree
  2. Select Space Tools > Content Tools > Export
  3. Choose Word (.docx) as the export format
  4. Select the page tree to export (e.g., "3.0.0-Product Requirements")
  5. Download the resulting .docx file

Warning: Do not export as PDF or HTML. Word (.docx) is the only format that preserves table structure and embedded images in a way that Pandoc can reliably convert. Confluence's native HTML export loses table cell relationships, and PDF is not machine-readable.

2.2 Convert Word to AsciiDoc with Pandoc

The repository includes a conversion script that handles the docx-to-AsciiDoc conversion with proper image extraction and formatting cleanup:

./docusaurus/scripts/docx2adoc.sh "input/export-file.docx" "output/requirements-export.adoc"

Both the script and its Lua filter live in docusaurus/scripts/. The script runs Pandoc with a Lua filter (strip-formatting.lua) that:

  • Strips image attributes -- Removes width/height from images (preserves the image itself)
  • Unwraps spans -- Removes Confluence's span wrappers, keeping content
  • Strips SmallCaps -- Removes decorative formatting
  • Cleans header attributes -- Removes Confluence-generated header IDs

The script also extracts all embedded images to a media/ directory alongside the output file.

Why AsciiDoc, not Markdown? Confluence exports contain complex tables (merged cells, nested content, multi-line cells). Pandoc's AsciiDoc writer preserves table structure faithfully -- merged cells appear as blank rows that are easy to identify. Pandoc's Markdown writer collapses table structure, making it difficult to reconstruct the original content programmatically.

2.3 Verify the Export

After conversion, verify the export is usable:

# Count sections (should roughly match Confluence page count)
grep -c "^== " output/requirements-export.adoc

# Check images were extracted
ls output/media/ | wc -l

# Spot-check a known section
grep -n "3.0.0-Runfile List" output/requirements-export.adoc

The resulting .adoc file is the input to the classification and transformation pipeline described in the following sections.


3. Source Material

3.1 Source Formats

Confluence exports appear in three distinct formats. Identifying the format is the first step of conversion.

Format A -- Simple Anchor:

### The system shall {behavior} \{#REQ-XXX-NNN}

**Description**
...
**Acceptance Criteria**
- bullet points

Format A uses the requirement statement as the heading text with an anchor tag appended. Description and ACs follow as bold-labeled sections with bullet lists.

Format B -- Table Properties:

### REQ-XXX-NNN: {Title} \{#REQ-XXX-NNN}

| Property | Value |
|----------|-------|
| ID | REQ-XXX-NNN |

#### Acceptance Criteria
| ID | Criterion | Verification |

Format B separates the ID and title, includes a properties table, and uses table-formatted acceptance criteria with explicit IDs.

Format C -- Nested Headings:

#### The system shall {behavior} \{#REQ-XXX-NNN}

**Description**
...

Format C is similar to Format A but uses deeper heading levels (H4 instead of H3).

Tip: Most domains use a single format consistently, but some mix formats. Detect the format for each requirement individually.
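Under those conventions, per-requirement format detection can be sketched as a small heading matcher. This is illustrative only (the regexes are assumptions inferred from the format descriptions above, not the repository's actual tooling):

```python
import re

# Anchor tag shared by Formats A and C, e.g. {#REQ-UPLOAD-005}
ANCHOR = r"\{#REQ-[A-Z]+-\d+\}"

def detect_format(heading: str) -> str:
    """Classify one requirement heading as Format A, B, or C (or 'unknown')."""
    if re.match(r"^### REQ-[A-Z]+-\d+:", heading):
        return "B"   # Format B: ID and title separated in the heading
    if re.match(r"^### .*" + ANCHOR, heading):
        return "A"   # Format A: statement as H3 heading with anchor
    if re.match(r"^#### .*" + ANCHOR, heading):
        return "C"   # Format C: same as A but at H4 depth
    return "unknown"
```

Because some domains mix formats, a detector like this would run per heading rather than once per file.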

3.2 Source File Locations

| Source Type | Location | Description |
|-------------|----------|-------------|
| Restructured exports | output/pilot/restructured/{domain}.md | Primary conversion input (historical) |
| SDD companion files | output/pilot/restructured/sdd/{domain}-design.md | UI specifications, layout details -- content demoted to UI Detail |
| New Confluence exports | Provided by contributor | Future conversion input |

3.3 SDD Companion Files

SDD (Software Design Document) companion files contain UI-specific content that must be demoted during conversion:

  • UI specifications by requirement ID
  • Layout details and visual behavior descriptions
  • Interaction mechanics (hover states, transitions, animations)

This content is moved to the UI Notes (Illustrative) section in the output. It never becomes part of a Requirement statement or Acceptance Criteria.

3.4 Pre-Conversion Inventory

Before starting conversion, complete this inventory:

  1. Read the source file completely -- understand what exists
  2. Read the SDD file (if it exists) -- identify UI content for demotion
  3. Count source requirements -- record this number to verify none are lost
  4. Identify the source format (A, B, or C) for each requirement
  5. Note existing Gherkin tests -- these are preserved in the Acceptance Tests section
  6. Note existing definitions/glossary -- these are preserved in the Definitions section

4. Classification Process

4.1 Taxonomy Reference

The classification taxonomy (five categories, decision key, prohibited patterns, language rules) is documented in SRS Authoring Guide Section 2. Do not duplicate that content here.

The key question for every item: "Is this independently testable system behavior?"

4.2 Classification Decision Tree

For each item extracted from the source, apply this decision tree:

Is it independently testable system behavior?
+-- YES --> Is it UI mechanics (layout, color, icons)?
|   +-- YES --> UI Detail (non-normative)
|   +-- NO  --> Requirement (or AC if it constrains a parent requirement)
+-- NO --> Does it constrain a requirement?
    +-- YES --> Is it a variable parameter?
    |   +-- YES --> Configuration Option
    |   +-- NO  --> Acceptance Criteria (under parent requirement)
    +-- NO --> Notes
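The tree reduces to a function over four yes/no answers. A minimal sketch (the argument names are illustrative; a human answers the questions by reading the item, the function only encodes the branching):

```python
def classify(testable: bool, ui_mechanics: bool,
             constrains_requirement: bool, variable_parameter: bool) -> str:
    """Walk the classification decision tree for one extracted item."""
    if testable:
        # Testable behavior: UI mechanics are demoted, everything else
        # becomes a Requirement (or an AC of a parent requirement).
        return "UI Detail" if ui_mechanics else "Requirement"
    if constrains_requirement:
        # Not independently testable, but constrains a requirement.
        return "Configuration Option" if variable_parameter else "Acceptance Criteria"
    return "Notes"
```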

4.3 Source-to-Target Mapping

| Source Content | Target Section | Classification Rule |
|----------------|----------------|---------------------|
| "The system shall..." statement | Requirements | Core capability |
| Acceptance criteria bullets | Requirements, AC table | Testable conditions |
| Validation rules, limits, constraints | Requirements, AC table | Testable conditions (include as ACs) |
| Default values, toggles, parameters | Configuration Options | Varies by deployment |
| Layout, icons, colors, tooltips | UI Detail | Presentation mechanics |
| Column order, sticky headers | UI Detail | Presentation mechanics |
| Button labels, modal text | UI Detail | Literal strings |
| Historical context, migration notes | Notes | Non-binding explanation |
| Gherkin scenarios | Acceptance Tests | Test specifications |
| Term definitions | Definitions | Domain vocabulary |

4.4 Consolidation Detection

Detect and consolidate "requirement explosion" patterns where multiple requirements describe variants of a single capability.

Signals that consolidation is needed:

  • Multiple requirements with the same domain prefix (REQ-XXX-001, 002, 003...)
  • Requirements that could share one statement with different parameters
  • Requirements describing "how" rather than "what"
  • Requirements that are implementation variants, not distinct capabilities

When to consolidate:

| Pattern | Example | Action |
|---------|---------|--------|
| Variants of one capability | Filter types (text, date, dropdown) | Consolidate to single "Filter data" requirement |
| Differ only by input type | Exact match vs partial match | Demote variants to Acceptance Criteria |
| Interaction conveniences | Reset, clear, show/hide | Demote to Acceptance Criteria |
| Configurable options as requirements | "Default is X" as separate requirement | Move to Configuration Options |

4.5 Consolidation Rules

  1. Identify candidates: Same domain, variant pattern
  2. Write capability-level requirement: One statement describing the core capability
  3. Demote variants:
    • Behavioral constraints --> Acceptance Criteria
    • Deployment-variable parameters --> Configuration Options
    • Testable conditions --> Acceptance Criteria
  4. Transform UI-centric language: See AC Transformation table below
  5. Document in Reviewer Notes: Full reversibility record (see Section 4.7)

Warning: Never lose requirements during consolidation. Every original item must be accounted for in the Reviewer Notes consolidation table -- either as a merged item, a demoted AC, a Configuration Option, or a UI Detail. The consolidation must be fully reversible from the documentation alone.

4.6 AC Transformation Rules

Acceptance criteria must describe behavior, not UI mechanics. When consolidating or converting, transform UI-centric language:

| UI-Centric (Wrong) | Behavior-Centric (Correct) |
|--------------------|----------------------------|
| "Click Apply button" | "User confirms selection" |
| "Dropdown is hidden" | "Selection interface closes" |
| "Calendar shows one month" | "Date selection scope is one month" |
| "Checkbox is checked" | "Option is selected" |
| "Displayed under filter name" | "Selection state is visible" |
| "Button is greyed out" | "Action is disabled" |
| "Hover over element" | "User indicates focus on element" |
| "Text box is cleared" | "Input is reset to empty" |
| "Value displayed under filter name" | "Current selection is indicated" |
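The most mechanical of these substitutions can be expressed as a lookup table. A sketch under the assumption that only exact known phrases are rewritten (anything outside the table still needs human judgment):

```python
# UI-centric phrase -> behavior-centric phrase, following the mapping above.
AC_REWRITES = {
    "Click Apply button": "User confirms selection",
    "Dropdown is hidden": "Selection interface closes",
    "Calendar shows one month": "Date selection scope is one month",
    "Checkbox is checked": "Option is selected",
    "Button is greyed out": "Action is disabled",
    "Text box is cleared": "Input is reset to empty",
}

def rewrite_ac(criterion: str) -> str:
    """Apply known UI-to-behavior substitutions to one acceptance criterion."""
    for ui_phrase, behavior_phrase in AC_REWRITES.items():
        criterion = criterion.replace(ui_phrase, behavior_phrase)
    return criterion
```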

4.7 Reviewer Notes Documentation

Reviewer Notes are required for every consolidation. They must contain:

  1. A consolidation table showing original items and their disposition
  2. Rationale for the consolidation
  3. Reversibility references (source file path, Confluence row numbers)
## Reviewer Notes

### Consolidation: REQ-FILTERS-001

The following items from the source were consolidated into a single
capability-level requirement:

| Original Item | Source Reference | Disposition |
|---------------|------------------|-------------|
| REQ-FILTERS-001 (Exact match) | 3.0.0-Filters Rows 1-4 | Merged -> REQ-FILTERS-001 |
| REQ-FILTERS-002 (Partial match) | 3.0.0-Filters Rows 5-7 | Demoted -> AC-07 |
| REQ-FILTERS-003 (Date filter) | 3.0.0-Filters Rows 8-19 | Demoted -> AC-08 |
| REQ-FILTERS-004 (Dropdown) | 3.0.0-Filters Rows 20-24 | Demoted -> AC-09 |
| REQ-FILTERS-005 (Dynamic refinement) | 3.0.0-Filters Row 25 | Merged -> AC-05 |
| REQ-FILTERS-006 (Reset) | 3.0.0-Filters Row 26 | Merged -> AC-03 |
| REQ-FILTERS-007 (Show/hide) | 3.0.0-Filters Rows 27-31 | Merged -> AC-04 |

**Rationale:** These items represented variants of a single filtering
capability, not separate system responsibilities. Consolidation aligns
with Phase 0 guidance on avoiding requirement explosion for UI mechanics.

**Reversibility:** To restore original structure, reference:
- Source: `output/pilot/restructured/filters.md`
- Confluence: 3.0.0-Filters (Rows 1-31)

5. Transformation Process

This section describes the five-phase transformation in detail. This is the core of the conversion pipeline.

Phase 1: Analysis

1.1 Detect Source Format

Scan the source file and identify whether each requirement uses Format A (simple anchor), Format B (table properties), or Format C (nested headings). See Section 3.1 for format definitions.

1.2 Inventory Source Content

For the source file, identify:

  • Total requirement count
  • Existing Gherkin tests (typically in an appendix)
  • Existing definitions/glossary
  • Existing notes sections
  • Format used (A, B, or C) for each requirement

1.3 Inventory SDD Content (if exists)

For the SDD companion file, identify:

  • UI specifications by requirement ID
  • Layout details
  • Visual behavior descriptions
  • Interaction mechanics

All SDD content is destined for the UI Notes (Illustrative) section.

Phase 2: Extract and Classify

For each item in the source, classify into a target section using the Classification Decision Tree and Source-to-Target Mapping.

During this phase, also detect consolidation opportunities per Section 4.4.

Phase 3: Transform Requirements

3.1 Extract Core Fields

| Source Field | Target Field | Transformation |
|--------------|--------------|----------------|
| Heading text | Statement | Remove ID anchor, ensure "shall" |
| {#REQ-XXX-NNN} | ID | Extract to attribute table |
| Description paragraph | Statement | Merge with heading if concise |
| Acceptance criteria (bullets) | AC table | Convert to AC-NN format |
| Acceptance criteria (table) | AC table | Preserve, renumber if needed |
| Priority (if present) | Priority | Preserve |
| Notes section | Notes or UI Detail | Classify per decision tree |
| Source block | Traceability | Structure into table |

3.2 Add Missing Fields

| Field | Default | Action |
|-------|---------|--------|
| Priority | MEDIUM | Set default if not specified in source |
| Status | Draft | Always Draft for initial conversion |
| Verification | Test | Default unless obviously Inspection or Analysis |

3.3 Infer Error Handling

Analyze each requirement for implicit error conditions.

Triggers that suggest error handling is needed:

| Trigger Word | Error Question |
|--------------|----------------|
| "validates", "checks" | What if validation fails? |
| "imports", "uploads" | What if file is invalid/corrupt? |
| "saves", "stores" | What if save fails? |
| "displays list" | What if list is empty? |
| "connects to" | What if connection fails? |
| "retrieves", "fetches" | What if data not found? |
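The trigger scan itself is mechanical and can be sketched as a substring search over the statement text (an illustrative sketch, not the project's validator; the word list mirrors the table above):

```python
# Trigger word -> the error question it raises, per the table above.
ERROR_TRIGGERS = {
    "validates": "What if validation fails?",
    "checks": "What if validation fails?",
    "imports": "What if file is invalid/corrupt?",
    "uploads": "What if file is invalid/corrupt?",
    "saves": "What if save fails?",
    "stores": "What if save fails?",
    "displays list": "What if list is empty?",
    "connects to": "What if connection fails?",
    "retrieves": "What if data not found?",
    "fetches": "What if data not found?",
}

def error_questions(statement: str) -> list[str]:
    """Return the error questions raised by trigger words in a statement."""
    lowered = statement.lower()
    found = {q for word, q in ERROR_TRIGGERS.items() if word in lowered}
    return sorted(found)  # de-duplicated, stable order
```

Each question returned either becomes a row in the Error Handling table or, if the answer is unclear, an Open Question.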

If error handling is inferable, add an Error Handling table:

**Error Handling**

| Condition | Response |
|-----------|----------|
| {Inferred condition} | The system shall {reasonable response} |

If unclear, create an Open Question:

**Open Questions**

| ID | Question | Owner | Status |
|----|----------|-------|--------|
| OQ-01 | What should happen when {condition}? | TBD | Open |

If no error conditions apply (e.g., display-only requirement, global error handling covers it), omit the Error Handling section entirely. Do not add "N/A".

3.4 Extract Assumptions

Look for assumptions embedded in:

  • Description text ("Super Admin users can...")
  • Notes ("Requires feature X...")
  • Acceptance criteria ("When user is authenticated...")

If an assumption applies to a single requirement, add it to that requirement. If it applies to ALL requirements in the domain, add it to the domain-level Assumptions section.

Phase 4: Build Domain Sections

4.1 Overview

Write 1-2 paragraphs describing:

  • What capability space this domain covers
  • What the system is responsible for (not how it is presented)

If the source has an existing overview/introduction, adapt it. Remove any UI-specific language. Add a Product Context subsection if the domain interfaces with external systems.

4.2 Definitions

Extract domain-specific terms from:

  • Existing definitions tables in source
  • Terms used in requirements that need clarification

Do not repeat global glossary terms from introduction.md.

4.3 Assumptions

Collect assumptions that apply to ALL requirements:

  • Role requirements ("Users have Junior User role or higher")
  • System state ("At least one mix is configured")
  • External dependencies ("Network connectivity available")

4.4 Configuration Options

Extract configurable parameters:

## Configuration Options

| Option | Default | Description | Affects |
|--------|---------|-------------|---------|
| {name} | {value} | {Description} | REQ-{DOMAIN}-{NNN} |

Include default values mentioned in requirements, toggles and feature flags, limits and thresholds.

4.5 UI Detail (Non-normative)

Collect all UI specifications from:

  • Source Notes sections containing layout/visual details
  • SDD companion file content
  • Presentation details embedded in acceptance criteria (after extraction)

Group by requirement ID:

## UI Detail (Non-normative)

### REQ-{DOMAIN}-{NNN} UI Specifications

- {Layout detail}
- {Visual behavior}
- {Interaction mechanic}

4.6 Notes

Collect non-binding content: historical context, migration notes, implementation hints (non-normative), references to external documentation.

4.7 Traceability Matrix

Generate a summary table with one row per requirement:

## Traceability Matrix

| Requirement | Title | Implementation | Test Cases | Status |
|-------------|-------|----------------|------------|--------|
| REQ-{DOMAIN}-001 | {Title} | [TBD] | [Pending] | Draft |
  • Implementation: Leave as [TBD] unless known
  • Test Cases: Extract from source if present, else [Pending]
  • Status: All Draft for initial conversion

4.8 Acceptance Tests

Preserve existing Gherkin scenarios from the source appendix. Update requirement ID references if they changed during consolidation.

4.9 Completion Checklist

Add the standard checklist (unchecked for initial conversion):

## Completion Checklist

- [ ] All requirements are capability-level (describe behavior, not UI)
- [ ] UI details are fully demoted to non-normative section
- [ ] Configuration options are not encoded as requirements
- [ ] Every requirement has acceptance criteria and source traceability
- [ ] Error handling addressed for I/O, validation, and external system requirements
- [ ] Open questions documented with owners assigned
- [ ] Module can survive a full UI redesign unchanged
- [ ] Traceability matrix is complete

Phase 5: Validate and Write

5.1 Pre-Write Validation

Before writing output, verify:

  • Requirement count matches source (consolidated items accounted for in Reviewer Notes)
  • All requirement IDs are unique
  • All requirements have: ID, Priority, Status, Verification, Statement, AC, Traceability
  • No {#REQ-XXX} anchor syntax remains in headings
  • No #### headings used for sections (only for individual requirements)
  • UI mechanics are NOT in requirement statements
  • Configuration values are NOT in requirement statements
  • Gherkin tests preserved
  • Consolidations documented in Reviewer Notes
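Two of these checks (unique requirement IDs, leftover anchor syntax in headings) are mechanical enough to sketch. This is illustrative only; the heading pattern assumes the canonical `#### REQ-XXX-NNN: Title` form and is not the project's actual validator:

```python
import re
from collections import Counter

def prewrite_issues(text: str) -> list[str]:
    """Report duplicate requirement IDs and leftover {#REQ-...} anchors."""
    issues = []
    # Requirement headings in the canonical output form.
    ids = re.findall(r"^#### (REQ-[A-Z]+-\d+):", text, flags=re.MULTILINE)
    for req_id, count in Counter(ids).items():
        if count > 1:
            issues.append(f"duplicate ID: {req_id} ({count} occurrences)")
    # Anchor syntax must not survive in any heading.
    for line in text.splitlines():
        if line.startswith("#") and re.search(r"\{#REQ-", line):
            issues.append(f"anchor syntax left in heading: {line.strip()}")
    return issues
```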

5.2 Write Output

Write the converted file to the appropriate output location:

  • Historical conversions: output/srs/{domain}.md
  • Current canonical location: docusaurus/docs/srs/{domain}.md (see Contributor Guide Section 3 for directory layout)

6. Transformation Rules

These deterministic rules govern how source items are converted. They are subordinate to the classification taxonomy and Phase 0 prohibited patterns.

6.1 Grouping Rules (Capability Level)

| Rule | Description |
|------|-------------|
| G-1 | Capability sections are created from top-level Confluence headings (e.g., "3.0.0-Runfile Report") |
| G-2 | A capability section may consolidate multiple Confluence headings if they represent the same functional area and no prohibited pattern is introduced |
| G-3 | Report/list/screen names are capability labels, not Requirements |
G-3Report/list/screen names are capability labels, not Requirements

6.2 Requirement Creation Rules

| Rule | Description |
|------|-------------|
| R-1 | Create a Requirement only when the item describes a distinct, externally observable capability |
| R-2 | If multiple items refer to the same capability, merge into a single Requirement and move specifics into Acceptance Criteria or Refinements |
| R-3 | Export/Import/Print features are separate Requirements only if they introduce new capability beyond an existing Requirement |
| R-4 | If a requirement cannot be expressed in normative form without introducing design/implementation, it is not a Requirement |

6.3 Refinement Rules

| Rule | Description |
|------|-------------|
| RF-1 | Variants (single-site vs multi-site, mode-specific behavior) become Refinements of the parent Requirement |
| RF-2 | Client-specific formats or deployment variants become Refinements or Configuration Options under the relevant Requirement |

6.4 AC Rules

| Rule | Description |
|------|-------------|
| AC-1 | UI behaviors (sorting, filtering, pagination, sticky headers, column visibility) are Acceptance Criteria if they validate the parent Requirement without changing scope |
| AC-2 | Table mechanics and layout behaviors are Acceptance Criteria only if they describe observable outcomes |
| AC-3 | If a criterion would narrow or expand scope beyond the Requirement, it must be reclassified or the Requirement must be rewritten |
| AC-4 | Acceptance Criteria shall not narrow or expand scope beyond their parent Requirement |

6.5 UI Detail Rules

| Rule | Description |
|------|-------------|
| UI-1 | Visual presentation, layout mechanisms, and interaction affordances become UI Details in Notes or Acceptance Criteria |
| UI-2 | UI Details never become Requirements |

6.6 Configuration Option Rules

| Rule | Description |
|------|-------------|
| CO-1 | Items tied to configuration toggles or client-specific settings become Configuration Options under the relevant Requirement |
| CO-2 | Configuration Options do not become Requirements |

6.7 Source Preservation Rules

| Rule | Description |
|------|-------------|
| SP-1 | Every Requirement includes a Source block listing all originating Confluence IDs and traceability fields (Confluence, Jira, Tests, Design) |
| SP-2 | Every non-Requirement item remains traceable via either: inclusion in Acceptance Criteria or Notes with a Source block, or mapping in the traceability appendix if absorbed |
| SP-3 | No item is dropped. Reclassification is mandatory. |

6.8 Error Handling Scope

Error Handling section is required only when the requirement introduces domain-specific error behavior.

Require Error Handling when:

  • Requirement describes file upload/import (domain-specific validation errors)
  • Requirement describes external system integration (connection failures)
  • Requirement describes user input validation (domain-specific rules)
  • Requirement describes calculation with edge cases (overflow, division by zero)

Do NOT require Error Handling when:

  • Requirement is purely display/read-only (global error handling applies)
  • Requirement is configuration (no runtime errors specific to this requirement)
  • Requirement relies on global error behavior documented elsewhere
  • Error behavior is identical to system-wide default

6.9 Uncertainty Markers

When the conversion cannot confidently fill a section or field, use [REVIEW REQUIRED] markers.

When to use markers:

| Situation | Action |
|-----------|--------|
| No domain-specific terms found | Add marker in Definitions section |
| Assumptions inferred, not explicit | Add marker in Assumptions section |
| Config options unclear | Add marker in Configuration Options section |
| UI detail not clearly separable | Add marker in UI Detail section |
| Error handling uncertain | Add marker OR Open Question in requirement |
| Source traceability missing | Add [Missing - requires investigation] in Traceability |
| Priority/verification unclear | Add marker in requirement Notes |

Marker format:

[REVIEW REQUIRED: {Brief description of what needs human review}]

Rules for markers:

  1. Always use markers rather than guessing or omitting content
  2. Keep marker text brief but specific about what needs review
  3. Markers go at the TOP of the section they affect
  4. Multiple markers in one file is acceptable and expected
  5. Markers should NOT prevent completion -- they flag, not block
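Because markers flag rather than block, reviewers need a way to enumerate them after conversion. A minimal sketch of such a scan (illustrative; the project may use plain grep instead):

```python
import re

# The marker format defined above: [REVIEW REQUIRED: <description>]
MARKER = re.compile(r"\[REVIEW REQUIRED: ([^\]]+)\]")

def list_markers(text: str) -> list[tuple[int, str]]:
    """Return (line number, description) for each review marker in a file."""
    found = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in MARKER.finditer(line):
            found.append((lineno, match.group(1)))
    return found
```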

7. Validation

Converted files are validated against 44 checks across four severity levels:

| Level | Name | Severity | Count |
|-------|------|----------|-------|
| 1 | Structure | Blocking | 8 checks |
| 2 | Completeness | Required | 14 checks |
| 3 | Quality | Advisory | 15 checks |
| 4 | UI Stability | Critical | 7 checks |

A successful conversion must pass all Level 1 and Level 2 checks. Level 3 findings are advisory. Level 4 violations require review.

For the full check catalog, detection word lists, report format, and integration workflow, see the Confluence Conversion Validation Spec.


8. Edge Cases

8.1 Duplicate IDs in Source

  1. Keep first occurrence as-is
  2. Renumber duplicates: REQ-XXX-NNN --> REQ-XXX-NNNa
  3. Add Open Question flagging for resolution
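Steps 1 and 2 can be sketched as a pass over the IDs in document order (illustrative; suffixing scheme per the rule above):

```python
def renumber_duplicates(ids: list[str]) -> list[str]:
    """Keep the first occurrence of each ID; suffix later ones a, b, c, ..."""
    seen: dict[str, int] = {}
    result = []
    for req_id in ids:
        if req_id in seen:
            seen[req_id] += 1
            # REQ-XXX-NNN --> REQ-XXX-NNNa, then NNNb, and so on.
            result.append(req_id + "abcdefghijklmnopqrstuvwxyz"[seen[req_id] - 1])
        else:
            seen[req_id] = 0
            result.append(req_id)
    return result
```

Each renumbered ID would still need the Open Question from step 3 so a human resolves the collision.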

8.2 Missing Traceability

Add to traceability:

| Source | [Missing - requires investigation] |

Add Open Question:

| OQ-01 | Source traceability missing - verify origin | TBD | Open |

8.3 Requirements That Are Actually UI Detail

If a "requirement" only describes presentation:

  1. Do NOT include as a requirement
  2. Move content to the UI Detail section
  3. Add Open Question:
| OQ-01 | This appears to be UI detail, not a requirement - confirm or identify underlying behavior | TBD | Open |

8.4 Overly Long Requirements (>10 ACs)

  1. Flag as a potential candidate for splitting
  2. Preserve all ACs for now
  3. Add Note: "Consider decomposition -- high AC count ({n} criteria)"

8.5 Mixed Behavior/UI Requirements

  1. Extract behavior portion to Statement
  2. Move UI portion to UI Detail section
  3. Keep traceability pointing to original source

8.6 Requirement Count Changes

Requirement count changes are expected during conversion due to consolidation of duplicates, demotion of UI-only "requirements", and splitting of compound requirements.

| Change | Threshold | Action |
|--------|-----------|--------|
| Count unchanged | 0% | Pass |
| Count increased | Any | Flag for review -- verify splits are intentional |
| Count decreased | 10% or less | Pass with note |
| Count decreased | More than 10% | Flag for review -- justification required in Notes |
| Count decreased | More than 50% | Q-14 triggered -- verify Reviewer Notes has consolidation documentation |
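The thresholds reduce to a small function over the two counts. A sketch (illustrative; the actual Q-14 check lives in the validation spec):

```python
def count_change_action(source_count: int, converted_count: int) -> str:
    """Map a requirement-count change to the review action defined above."""
    if converted_count == source_count:
        return "Pass"
    if converted_count > source_count:
        return "Flag for review -- verify splits are intentional"
    decrease = (source_count - converted_count) / source_count
    if decrease > 0.5:
        return "Q-14 triggered -- verify Reviewer Notes has consolidation documentation"
    if decrease > 0.1:
        return "Flag for review -- justification required in Notes"
    return "Pass with note"
```

For example, consolidating 7 filter requirements into 1 (as in Section 9.2) is a decrease of more than 50%, so Q-14 applies and the Reviewer Notes consolidation table is mandatory.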

9. Worked Examples

9.1 Single Requirement Conversion (REQ-UPLOAD-005)

Source (Format A):

### The system shall display an error message when file upload fails \{#REQ-UPLOAD-005}

**Description**

When a file upload fails, the system displays an appropriate error message.

**Acceptance Criteria**

- Error message displayed in red
- Message includes reason for failure
- User can retry the upload

**Notes**

- Common failures: file too large, invalid format, network timeout
- Error icon appears next to message

Confluence Sources:

  • 3.0.0-Upload (Row 5)

Jira: BT-555 Tests: BT-890

Output (canonical format):

#### REQ-UPLOAD-005: Display Upload Error Feedback

| Attribute | Value |
|-----------|-------|
| ID | REQ-UPLOAD-005 |
| Priority | MEDIUM |
| Status | Draft |
| Verification | Test |

**Statement**

The system shall display an error message when a file upload operation fails.

**Acceptance Criteria**

| ID | Criterion |
|----|-----------|
| AC-01 | The error message shall include the reason for failure |
| AC-02 | The system shall provide a retry option |

**Error Handling**

| Condition | Response |
|-----------|----------|
| File exceeds size limit | The system shall display "File too large. Maximum size is {limit}." |
| Invalid file format | The system shall display "Invalid format. Accepted: JSON." |
| Network timeout | The system shall display "Upload timed out. Please try again." |

**Traceability**

| Type | Reference |
|------|-----------|
| Source | 3.0.0-Upload (Row 5) |
| Jira | BT-555 |
| Tests | BT-890 |

Moved to UI Detail section:

### REQ-UPLOAD-005 UI Specifications

- Error message displayed in red text
- Error icon appears next to message

What moved and why:

  • "Error message displayed in red" is a UI detail (color/presentation), not a requirement. The requirement is that an error message is shown; how it is styled is non-normative.
  • "Error icon appears next to message" is a UI detail (icon placement). Moved to UI specifications.
  • "User can retry the upload" was transformed from a UI mechanic ("Click retry button") to behavior ("The system shall provide a retry option").
  • Error handling was inferred from the Notes section listing common failures, then structured into a proper Error Handling table.

9.2 Consolidation Example (REQ-FILTERS)

Before (7 requirements):

REQ-FILTERS-001: Exact match filter
REQ-FILTERS-002: Partial match filter
REQ-FILTERS-003: Date filter
REQ-FILTERS-004: Dropdown filter
REQ-FILTERS-005: Dynamic refinement
REQ-FILTERS-006: Reset filters
REQ-FILTERS-007: Show/hide filters

After (1 requirement):

#### REQ-FILTERS-001: Filter Data by Criteria

| Attribute | Value |
|-----------|-------|
| ID | REQ-FILTERS-001 |
| Priority | MEDIUM |
| Status | Draft |
| Verification | Test |

**Statement**

The system shall allow users to constrain displayed data using filter criteria.

**Acceptance Criteria**

| ID | Criterion |
|----|-----------|
| AC-01 | Filter criteria shall constrain the dataset deterministically |
| AC-02 | Multiple filter criteria may be combined |
| AC-03 | Clearing criteria shall restore the unfiltered dataset |
| AC-04 | Filter state shall be preserved when toggling filter visibility |
| AC-05 | Available filter values shall update based on current selections in other filters |
| AC-06 | Supported filter types shall include: text, date, categorical |
| AC-07 | Text filters shall support exact match (full equality) and prefix match (starts-with) modes |
| AC-08 | Date filters shall support single date or contiguous date range selection |
| AC-09 | Categorical filters shall support multi-select from available values |

Configuration Options extracted:

| Option | Default | Description | Affects |
|--------|---------|-------------|---------|
| default_filter_visibility | per-screen | Default visibility state for filter section | REQ-FILTERS-001 |
| default_text_match_mode | prefix | Default matching mode for text filters | REQ-FILTERS-001 |
| dynamic_refinement_enabled | true | Whether filters auto-refine based on other selections | REQ-FILTERS-001 |

Key decisions:

  • 7 separate "requirements" were all variants of a single filtering capability
  • Exact/partial/date/dropdown filter types became AC-06 through AC-09
  • Dynamic refinement became AC-05
  • Reset became AC-03
  • Show/hide became AC-04
  • Deployment-variable parameters (default visibility, match mode, refinement toggle) became Configuration Options
  • Full Reviewer Notes documentation was added (see Section 4.7 for the complete example)

10. Conversion Status

All 28 domains from the 3.0.0 release have been converted. For the full conversion results table, batch conversion order, and pilot lessons learned, see Conversion History in the validation spec.


11. Quick Reference

Pipeline Steps Summary

| Step | Input | Output | Key Action |
|------|-------|--------|------------|
| 1. Analysis | Source file + SDD | Inventory | Detect format, count requirements |
| 2. Classification | Inventory | Classified items | Apply decision tree to each item |
| 3. Transformation | Classified items | Canonical sections | Transform fields, infer error handling, extract assumptions |
| 4. Validation | Converted file | Validation report | Run S/C/Q/U checks |
| 5. Integration | Validated file | Docusaurus docs | Write, register sidebar, verify build |

Default Field Values

| Field | Default Value | When to Override |
|-------|---------------|------------------|
| Priority | MEDIUM | Source specifies HIGH or LOW |
| Status | Draft | Never override during initial conversion |
| Verification | Test | Obviously Inspection (documentation check) or Analysis (calculation proof) |
| Implementation | [TBD] | Code location already known |
| Test Cases | [Pending] | Existing Gherkin tests found in source |

Validation Level Summary

| Level | Name | Severity | Check IDs | Count |
|-------|------|----------|-----------|-------|
| 1 | Structure | Blocking | S-01 to S-08 | 8 |
| 2 | Completeness | Required | C-01 to C-14 | 14 |
| 3 | Quality | Advisory | Q-01 to Q-15 | 15 |
| 4 | UI Stability | Critical | U-01 to U-07 | 7 |
| **Total** | | | | **44** |

Key Commands

# Count requirements in a source file
grep -c "^### \|^#### " output/pilot/restructured/{domain}.md

# Count requirements in a converted file
grep -c "^#### REQ-" output/srs/{domain}.md

# Count requirements in Docusaurus SRS files
grep -c "^## REQ-\|^### .*(REQ-" docusaurus/docs/srs/*.md \
| awk -F: '{sum+=$2} END {print "Total:", sum}'

# List SDD companion files
ls output/pilot/restructured/sdd/*-design.md

# Verify file counts
ls docusaurus/docs/srs/*.md | wc -l # 29 (including introduction)
ls docusaurus/docs/srs/rules/*.md | wc -l # 61 (including index)

# Build and verify links (authoritative check)
cd docusaurus && npm run build 2>&1 | grep -c "couldn't be resolved" # 0 = success

# Validate Mermaid diagrams
bash docusaurus/scripts/validate-mermaid.sh docusaurus/docs

# Add REQ anchors (idempotent)
python docusaurus/scripts/add-req-anchors.py --dry-run # Preview
python docusaurus/scripts/add-req-anchors.py # Apply