MedVertical
Records

Continuous validation. Drift detection. Release evidence.

Records deploys adjacent to your FHIR server and answers the question your CDR can't: “Is this data still conformant?”


What Records does

Four things Records does for your FHIR infrastructure.

Validates continuously

Runs profiles, terminologies, and custom rules against live FHIR data. Every run produces a deterministic PASS / WARN / FAIL signal.

Detects drift

Compares each validation run against baselines. Surfaces regressions from server upgrades, profile revisions, terminology changes, and pipeline modifications.
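A minimal sketch of baseline comparison, assuming each finding can be reduced to a stable identity key. The issue shape here (resource id, element path, rule code) is illustrative:

```python
def issue_key(issue: dict) -> tuple:
    """Stable identity for a finding: resource, element path, rule."""
    return (issue["resource_id"], issue["path"], issue["code"])


def delta(baseline: list[dict], current: list[dict]) -> dict:
    """Compare a run against its baseline: what appeared, what resolved."""
    base = {issue_key(i) for i in baseline}
    curr = {issue_key(i) for i in current}
    return {
        "new": sorted(curr - base),       # regressions since the baseline
        "resolved": sorted(base - curr),  # findings that disappeared
    }
```

Anything in `new` is a candidate regression from a server upgrade, profile revision, or pipeline change.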

Produces evidence

Generates reproducible, timestamped proof for audits, release gates, vendor acceptance, and regulatory handovers.

Manages environments

Tracks validation state across dev, staging, and production. Compare signals across environments before promoting changes.

Why evidence matters

The core diagnostic question is not “Is our data valid?” but “Will we know when it stops being valid?”

Conformance at deployment time proves nothing about conformance tomorrow. At least seven distinct drift vectors can silently degrade data quality after initial validation:

Drift Vector | Example | Detection Window
Terminology server update | CodeSystem version change alters ValueSet memberships | Days to weeks
IG/Profile revision | New constraints added or cardinality changed | Release cycle
FHIR server upgrade | HAPI v6.3→v6.4 changes validation behavior | Immediate
Mapping pipeline change | ETL logic drift alters output structure | Hours to days
Environment config divergence | Dev uses R4@1.4.0, Prod uses R4@1.5.0 | Silent
Data volume shift | Edge cases emerge at scale that never appeared in testing | Weeks
Dependency chain update | Transitive profile dependency changes upstream | Silent

Without continuous validation, these vectors compound silently. Records makes each one detectable.
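For example, environment config divergence, one of the silent vectors above, can be surfaced by diffing pinned package versions across environments. A sketch, with an illustrative data shape:

```python
def config_divergence(envs: dict[str, dict[str, str]]) -> list[str]:
    """Flag packages whose pinned version differs across environments.

    envs maps an environment label to its package->version pins;
    the shape is illustrative, not Records' actual config format.
    """
    findings = []
    packages = {pkg for pins in envs.values() for pkg in pins}
    for pkg in sorted(packages):
        versions = {env: pins.get(pkg) for env, pins in envs.items()}
        if len(set(versions.values())) > 1:  # not all environments agree
            findings.append(f"{pkg}: {versions}")
    return findings
```

Run on the example from the table, this flags `R4` as pinned to 1.4.0 in dev but 1.5.0 in prod.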

How it fits your infrastructure

Records adds an observation layer. It doesn't build a silo.

Records IS

  • A validation control plane
  • A drift detection engine
  • A release evidence surface
  • An environment comparison tool
  • A vendor-neutral observer
  • Adjacent to your existing stack

Records is NOT

  • A CDR or storage platform
  • An EHR or clinical application
  • A BI or analytics system
  • A compliance certification tool
  • A system of record
  • A decision-making authority
[Diagram: Records observes your existing FHIR server via read-only GET requests and produces validation, compliance, and drift outputs.]

Clear responsibility boundaries

Your FHIR server stores data. Records observes it.

Responsibility | FHIR Server / CDR | Records
Data storage | ✓ | –
Data access control | ✓ | –
Clinical workflows | ✓ | –
Validation evidence | – | ✓
Drift detection | – | ✓
Release gates | – | ✓
Baseline management | – | ✓

Deterministic Signals

Every validation run produces one of three signals. Records produces signals — you decide and act.

PASS

All thresholds met

Operator: proceed with confidence

WARN

Non-critical thresholds breached

Operator: proceed with investigation

FAIL

Critical thresholds breached

Operator: investigate before proceeding
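A sketch of how run metrics could map to the three signals. The threshold names and defaults are illustrative, not Records' actual run configuration:

```python
def signal(error_rate: float, warning_rate: float,
           max_error_rate: float = 0.0,
           max_warning_rate: float = 0.05) -> str:
    """Collapse a run's metrics into one deterministic signal.

    Same inputs always yield the same output: critical breaches
    take precedence over non-critical ones.
    """
    if error_rate > max_error_rate:
        return "FAIL"   # critical threshold breached
    if warning_rate > max_warning_rate:
        return "WARN"   # non-critical threshold breached
    return "PASS"       # all thresholds met
```

Because the mapping is a pure function of the run's metrics and its configured thresholds, two teams looking at the same run always see the same signal.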

Determinism & Reproducibility

Same inputs produce same outputs. Evidence is comparable across time, environments, and teams.

This is the reproducibility contract — seven inputs must be identical for a validation run to be considered strictly comparable:

Input | Why it matters
FHIR endpoint URL | Identifies the data source
Profile set + version | Determines validation rules
Terminology server state | Resolves code bindings
Validator version | Engine behavior determinism
Run configuration | Thresholds, exclusions, scope
Environment label | Isolation and comparison context
Timestamp | Point-in-time data snapshot
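One way to make runs strictly comparable is to hash those inputs into a single fingerprint: identical fingerprints mean identical reproducibility contracts. A sketch, with example values that are assumptions:

```python
import hashlib
import json


def config_fingerprint(run_inputs: dict) -> str:
    """Hash the reproducibility inputs into one comparable fingerprint.

    Canonical JSON (sorted keys, fixed separators) ensures the same
    inputs always hash to the same value. The scheme is illustrative.
    """
    canonical = json.dumps(run_inputs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


# Example values only; every field below is an assumption.
run = {
    "endpoint": "http://localhost:8080/fhir",
    "profiles": {"de.example.ig": "1.4.0"},
    "terminology": "tx.example.org@2024-06",
    "validator": "validator-0.9.0",
    "config": {"max_error_rate": 0.0},
    "environment": "staging",
    "timestamp": "2024-06-01T12:00:00Z",
}
```

Two runs with differing fingerprints are not strictly comparable; the fingerprint tells you so before you compare the signals.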

When to Use Records

Five concrete scenarios where Records produces the evidence you need.

Release Safety Gate

Validate before pushing an IG/Profile update, FHIR server upgrade, or mapping change to production.

Trigger: IG/Profile update
Evidence: Pass/fail signal
Decision: Safe to release?
[Screenshot: quality gate showing pass/fail status for the development environment]

Drift & Regression Detection

After changes land, compare current validation state against your baseline to catch new errors immediately.

Trigger: Post-upgrade run
Evidence: Delta vs baseline
Decision: Rollback or investigate?
[Screenshot: validation comparison showing drift and regression detection between runs]

Deep-Dive Investigation

Instantly drill down from high-level metrics to specific JSON resources. See the exact line causing the error with contextual highlighting.

Trigger: Validation issue
Evidence: Field-level error context
Decision: Root cause identified?
[Screenshot: issues view with a selected resource for deep-dive investigation]

Acceptance & Handover Evidence

When a vendor delivers or a system migrates, produce auditable proof of validation state.

Trigger: Vendor delivery
Evidence: Evidence snapshot
Decision: Accept delivery?
[Screenshot: validation runs with details of a selected run]

Multi-Server Comparability

Compare validation state across federated servers to identify alignment gaps before integration.

Trigger: Federation / multi-server
Evidence: Side-by-side validation
Decision: Alignment gaps?
[Screenshot: side-by-side server comparison view]

The Operating Model

Records operates continuously, not episodically. Validation runs on every change, not on a quarterly audit cycle.

The release-safety loop is the core operational cycle:

  • Baseline: reference state
  • Run: continuous validation
  • Delta: drift detection
  • Alert: signal produced
  • Triage: owner assigned
  • Fix: remediation applied
  • Re-baseline: new reference

↻ Continuous loop

Each step in the loop produces traceable evidence. Baselines establish known-good states. Runs produce signals. Deltas quantify change. Alerts notify. Triage assigns ownership. Fixes resolve. Re-baselining closes the loop and starts the next cycle. The operator controls every transition.
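One cycle of that loop can be sketched in a few lines. Representing the validation state as a set of finding keys is an illustrative simplification:

```python
def loop_step(baseline: set, run: set) -> tuple[set, list]:
    """One cycle: compare the run to the baseline and either alert or
    re-baseline. Returns (new reference state, alerts for triage).
    """
    alerts = sorted(run - baseline)  # Delta: findings absent from the baseline
    if alerts:
        # Drift detected: keep the old reference; triage and fix happen
        # before the operator chooses to re-baseline.
        return baseline, alerts
    # Clean run: it becomes the new known-good reference state.
    return run, []
```

Note that the baseline only advances on a clean run; a run with drift never silently becomes the new reference, which keeps every transition under operator control.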

Technical Surface

What goes in, what comes out, and why it matters.

Inputs

  • FHIR Endpoint
    One or more server URLs — Records reads via GET/HEAD only
  • Profile Set + Version
    StructureDefinitions that define your conformance target
  • Terminology State
    CodeSystem/ValueSet bindings from your terminology server
  • Run Configuration
    Thresholds, exclusions, environment label, baseline reference

Outputs

  • Signals
    Deterministic PASS / WARN / FAIL per validation run
  • Deltas
    New errors, resolved warnings, and regressions vs. baseline
  • Evidence Metadata
    Run ID, timestamp, config fingerprint, environment, profile version
  • Conformance Score
    Percentage-based quality indicator per resource type and environment
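A sketch of how a percentage-based score per resource type could be derived from per-resource findings. The result shape, and counting only error-free resources as passing, are assumptions:

```python
from collections import defaultdict


def conformance_scores(results: list[dict]) -> dict[str, float]:
    """Per-resource-type share of resources with no error-level findings.

    Each result is assumed to look like {"type": "Patient", "errors": 2}.
    """
    totals = defaultdict(int)
    passed = defaultdict(int)
    for r in results:
        totals[r["type"]] += 1
        if r["errors"] == 0:
            passed[r["type"]] += 1
    return {t: round(100.0 * passed[t] / totals[t], 1) for t in totals}
```

Scoring per resource type (rather than one global number) is what makes the indicator comparable across environments: a drop in the Observation score in staging but not in prod points straight at the staging pipeline.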

See it on your own data.

We'll connect Records to your FHIR server and run a validation — live, in 30 minutes. No slides, no sandbox.