MedVertical
Records

Continuous validation. Drift detection. Release evidence.

Records deploys adjacent to your FHIR server and answers the question your CDR can't: “Is this data still conformant?”

[Screenshot: Quality Gates overview — production environment showing 98% pass with threshold bar]

Release Safety Gate

Validate before pushing an IG/Profile update, FHIR server upgrade, or mapping change to production.

What Records does

Four things Records does for your FHIR infrastructure.

Validates continuously

Runs profiles, terminologies, and custom rules against live FHIR data. Every run produces a deterministic PASS / WARN / FAIL signal.

Detects drift

Compares each validation run against baselines. Surfaces regressions from server upgrades, profile revisions, terminology changes, and pipeline modifications.

Produces evidence

Generates reproducible, timestamped proof for release gates, regulatory submissions, compliance audits, and supplier acceptance.

Manages environments

Tracks validation state across dev, staging, and production. Compare signals across environments before promoting changes.

Deterministic Signals

Every validation run produces one of three signals. Records produces signals — you decide and act.

| Signal | Condition | Operator action |
| --- | --- | --- |
| PASS | All thresholds met | Proceed with confidence |
| WARN | Non-critical thresholds breached | Proceed with investigation |
| FAIL | Critical thresholds breached | Investigate before proceeding |
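As a rough sketch (hypothetical names, not Records' actual API), the three-way signal can be expressed as a pure function over per-aspect threshold results — same inputs, same signal, every time:

```typescript
// Illustrative only: how a deterministic PASS/WARN/FAIL signal could be
// derived from per-aspect threshold results. Field names are assumptions.
type Signal = "PASS" | "WARN" | "FAIL";

interface ThresholdResult {
  aspect: string;    // e.g. "terminology"
  breached: boolean; // threshold exceeded on this run
  critical: boolean; // severity classification of the threshold
}

function computeSignal(results: ThresholdResult[]): Signal {
  // Any critical breach forces FAIL; non-critical breaches yield WARN.
  if (results.some(r => r.breached && r.critical)) return "FAIL";
  if (results.some(r => r.breached)) return "WARN";
  return "PASS"; // all thresholds met
}
```

Because the function is pure, the same threshold results always map to the same signal — the operator, not the tool, decides what to do with it.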

The reproducibility contract

These fields must be identical for a validation run to be considered strictly comparable. Given the same contract, Records produces the same results — today, next month, or during an audit three years later.

| Input | Why it matters |
| --- | --- |
| Validator tool + version | Engine identity and behavior determinism |
| Validator configuration | Strictness settings, enabled aspects, scope |
| IG packages + canonical pinning | Profile set, versions, and canonical URL resolution locked via .records-lock.json |
| Terminology source + snapshot | Resolves code bindings at a fixed point in time |
| Environment label | Isolation and comparison context |
| Thresholds applied | Pass/warn/fail criteria per aspect |
| Run timestamp | Point-in-time data snapshot |
| SHA-256 content hash | Tamper detection on every evidence report |
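A minimal sketch of the contract as a type, assuming hypothetical field names (not Records' actual schema): two runs are strictly comparable only when every contract field matches, while the timestamp and content hash vary per run and are excluded.

```typescript
// Illustrative reproducibility contract. Field names are assumptions.
interface ReproContract {
  validatorVersion: string;    // tool + version
  validatorConfig: string;     // serialized strictness/aspect settings
  igLockfileHash: string;      // hash of .records-lock.json
  terminologySnapshot: string; // terminology source + snapshot id
  environment: string;         // e.g. "prod"
  thresholds: string;          // serialized pass/warn/fail criteria
}

// Strict comparability: every pinned field must be identical.
function strictlyComparable(a: ReproContract, b: ReproContract): boolean {
  return (Object.keys(a) as (keyof ReproContract)[])
    .every(k => a[k] === b[k]);
}
```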

Canonical pinning eliminates a common source of silent validation drift: when multiple installed IG packages define the same canonical URL, the resolution order can change between installs. Records generates a .records-lock.json lockfile at install time that pins every canonical URL to a single, deterministic source. Same packages + same lockfile = same results.
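The pinning idea can be sketched as a lookup that consults the lockfile before any package search order. The lockfile shape below is an assumption for illustration; Records' actual .records-lock.json format may differ.

```typescript
// Illustrative canonical pinning. Lockfile shape is hypothetical.
interface Lockfile {
  // canonical URL -> the one package (id@version) allowed to resolve it
  canonicals: Record<string, string>;
}

function resolveCanonical(
  url: string,
  installed: Map<string, string[]>, // package -> canonical URLs it defines
  lock: Lockfile
): string {
  const pinned = lock.canonicals[url];
  if (!pinned) throw new Error(`unpinned canonical: ${url}`);
  const defines = installed.get(pinned) ?? [];
  if (!defines.includes(url)) throw new Error(`pinned package missing ${url}`);
  return pinned; // deterministic regardless of install order
}
```

Even if two installed packages define the same canonical URL, the lockfile entry — not the install order — decides which one wins.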

Why evidence matters

The core diagnostic question is not “Is our data valid?” but “Will we know when it stops being valid?”

Conformance at deployment time proves nothing about conformance tomorrow. At least seven distinct drift vectors can silently degrade data quality after initial validation:

| Drift vector | Example |
| --- | --- |
| Terminology server update | CodeSystem version change alters ValueSet memberships |
| IG/Profile revision | New constraints added or cardinality changed |
| FHIR server upgrade | HAPI v6.3 → v6.4 changes validation behavior |
| Mapping pipeline change | ETL logic drift alters output structure |
| Environment config divergence | Dev uses R4@1.4.0, Prod uses R4@1.5.0 |
| Data volume shift | Edge cases emerge at scale that never appeared in testing |
| Dependency chain update | Transitive profile dependency changes upstream |

Without continuous validation, these vectors compound silently. Records makes each one detectable.

How it fits your infrastructure

Records adds an observation layer. It doesn't build a silo.

Records IS

  • A validation control plane
  • A drift detection engine
  • A release evidence surface
  • An environment comparison tool
  • A vendor-neutral observer — independent TypeScript engine, no HAPI dependency in production
  • Adjacent to your existing stack

Records is NOT

  • A CDR or storage platform
  • An EHR or clinical application
  • A BI or analytics system
  • A compliance certification tool
  • A system of record
  • A decision-making authority
[Architecture diagram: Records sits adjacent to your existing FHIR server, observing it via read-only GET requests and producing Validation, Compliance, and Drift outputs]

Clear responsibility boundaries

| Responsibility | FHIR Server / CDR | Records |
| --- | --- | --- |
| Data storage | ✓ | |
| Data access control | ✓ | |
| Clinical workflows | ✓ | |
| Validation evidence | | ✓ |
| Drift detection | | ✓ |
| Release gates | | ✓ |
| Baseline management | | ✓ |

The engine

Pure TypeScript. No JVM. Built for speed, accuracy, and determinism.

Performance

  • ~5 ms median validation time (CPU-only, no I/O)
  • 100% recall: 244/244 defects detected
  • 8 validation aspects, incl. anomaly detection
  • ~175/s engine throughput (CPU-only, no I/O, M1 Max)

Validation quality

100% Recall

244 of 244 known defects detected. Zero false negatives across 16 defect categories and 12 resource types.

99.5% Precision

Near-zero false positives. 100% precision on profiled data (ISiK, MII, UK Core). 99.5% on HL7 base examples.

Measured against a 610-fixture test corpus (244 defect + 187 clean + 179 profiled) covering Synthea, MII, ISiK, UK Core, and customer-reported edge cases.

8 validation aspects

  • Structural: JSON/XML schema conformance, required fields, data types
  • Profile: StructureDefinition constraints, must-support, cardinality
  • Terminology: Code bindings, ValueSet membership, CodeSystem validity
  • Reference: Reference targets exist and resolve to correct types
  • Invariant: FHIRPath constraints defined in profiles and base spec
  • Custom Rules: Project-specific FHIRPath rules beyond standard profiles
  • Metadata: Resource metadata consistency, version, lastUpdated
  • Anomaly: Cross-resource statistical outliers, missing patterns, distribution skew, PII detection (DE: KVNR, Steuer-ID, IKNR, IBAN · US: SSN)

Custom Rules & Advisor Rules

Define project-specific validation rules in FHIRPath — beyond what standard profiles cover. The built-in editor offers autocomplete for 40+ FHIRPath functions, interactive testing against sample resources, batch testing, templates for common patterns, rule versioning with rollback, and per-rule execution statistics. 15 example rules ship out of the box. Advisor Rules let you suppress, reclassify, or annotate validation issues post-validation — with import support for gematik Referenzvalidator YAML and Firely Quality Control formats.
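A custom rule of this kind might be declared like the sketch below. The object shape and field names are hypothetical; only the FHIRPath expression itself is standard FHIRPath.

```typescript
// Hypothetical custom-rule shape — illustrative, not Records' actual schema.
interface CustomRule {
  id: string;
  severity: "error" | "warning" | "information";
  resourceType: string;
  expression: string;  // FHIRPath; must evaluate to true for the rule to pass
  description: string;
}

const rule: CustomRule = {
  id: "obs-effective-required",
  severity: "warning",
  resourceType: "Observation",
  // Standard FHIRPath: require some effective time on every Observation
  expression: "effectiveDateTime.exists() or effectivePeriod.exists()",
  description: "Observations should carry an effective time",
};
```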

What makes it different

Capabilities that go beyond single-resource validation.

Cross-resource anomaly detection

"90% of your Observations have effectiveDateTime. These 100 don't." 8 statistical detectors analyze patterns across your full dataset — no single-resource validator can do this.
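The prevalence-based detector behind that example can be sketched as follows — a simplified illustration with assumed names and a 90% threshold, not Records' actual detector:

```typescript
// Illustrative detector: flag resources missing a field that the vast
// majority of the dataset populates. Threshold and names are assumptions.
function missingFieldOutliers(
  resources: Array<Record<string, unknown>>,
  field: string,
  prevalenceThreshold = 0.9
): number[] {
  const present = resources.filter(r => r[field] !== undefined).length;
  const prevalence = present / resources.length;
  if (prevalence < prevalenceThreshold) return []; // field not dominant enough
  // Dataset-level pattern holds: report indices of the outliers missing it.
  return resources.flatMap((r, i) => (r[field] === undefined ? [i] : []));
}
```

A single-resource validator cannot compute `prevalence` at all — the signal only exists at the dataset level.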

Fix suggestions for every error

265 of 265 emitted error codes have actionable remediation steps with patch-style suggestions. Most validators cover 10-30%.

Pure TypeScript — no JVM

No Java, no .NET runtime. Instant startup. Runs anywhere Node.js runs — CI runners, Docker containers, edge environments.

Two-phase terminology

ValueSets expanded and cached locally at install time. Code lookups resolve in <1ms — not 50-200ms per round-trip to a remote terminology server.
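The two-phase idea reduces to: expand once at install time, then answer membership questions from a local structure. A minimal sketch with hypothetical names:

```typescript
// Illustrative two-phase terminology lookup. Phase 1 runs at install time,
// phase 2 at validation time. Class and method names are assumptions.
class TerminologyCache {
  private expansions = new Map<string, Set<string>>();

  // Phase 1 (install time): store the expanded codes for a ValueSet URL.
  preExpand(valueSetUrl: string, codes: string[]): void {
    this.expansions.set(valueSetUrl, new Set(codes));
  }

  // Phase 2 (validation time): O(1) local membership check, no network.
  contains(valueSetUrl: string, code: string): boolean {
    const set = this.expansions.get(valueSetUrl);
    if (!set) throw new Error(`ValueSet not cached: ${valueSetUrl}`);
    return set.has(code);
  }
}
```

The design trade-off is staleness for speed and determinism: the snapshot taken at install time is exactly what the reproducibility contract pins.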

Deterministic by design

Same inputs always produce the same outputs. The reproducibility contract pins every variable. Evidence is comparable across time and environments.

MCP server for AI agents

Built-in Model Context Protocol server exposes validation, issue explanation, quality scoring, and run comparison to LLM agents like Claude and Cursor.

Records vs. standalone FHIR validators

| Capability | Standalone validators | Records |
| --- | --- | --- |
| Single-resource validation | ✓ | ✓ |
| Dataset-level validation | Resource-by-resource only | Full dataset against all profiles |
| Cross-resource anomaly detection | ✗ | 8 statistical detectors |
| Fix suggestions | Partial (~10-30% of errors) | 100% of errors (265/265) |
| Cold start | 10-20 s (JVM / .NET) | ~1.3 s (Node.js) |
| Warm validation | Variable | ~485 ms per resource |
| Evidence reports | Pass/fail per resource | 5 report types with SHA-256 integrity |
| Baseline & drift detection | ✗ | Continuous delta comparison |
| Regression tracking | ✗ | Full history with per-issue lifecycle |
| Runtime dependency | JVM or .NET | Node.js only — no JVM |

Standalone validators validate individual resources on demand. Records adds continuous monitoring, dataset-level analysis, and reproducible evidence on top of validation.

The Operating Model

Records operates continuously, not episodically. Validation runs on every change, not on a quarterly audit cycle.

The release-safety loop is the core operational cycle:

Baseline (reference state) → Run (continuous validation) → Delta (drift detection) → Alert (signal produced) → Triage (owner assigned) → Fix (remediation applied) → Re-baseline (new reference) ↻ continuous loop

Each step in the loop produces traceable evidence. Baselines establish known-good states. Runs produce signals. Deltas quantify change. Alerts notify. Triage assigns ownership. Fixes resolve. Re-baselining closes the loop and starts the next cycle. The operator controls every transition.
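The delta step can be sketched as a set difference over stable issue identifiers — an illustration with assumed field names, not Records' report format:

```typescript
// Illustrative delta between a baseline run and the current run.
// Issues are keyed by a stable identifier; names are assumptions.
interface RunIssues {
  issueKeys: string[];
}

function delta(baseline: RunIssues, current: RunIssues) {
  const before = new Set(baseline.issueKeys);
  const after = new Set(current.issueKeys);
  return {
    // Present now but not in the baseline: regressions to triage.
    newIssues: Array.from(after).filter(k => !before.has(k)),
    // Present in the baseline but gone now: fixes to confirm.
    resolvedIssues: Array.from(before).filter(k => !after.has(k)),
  };
}
```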

Issue lifecycle

Validation issues are not disposable alerts. Each issue is a persistent triage item with ownership, status, and an audit trail. Closure requires a passing verification run — you cannot mark an issue as resolved without evidence.

Open → Acknowledged → In Progress → Verified → Closed
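The lifecycle can be sketched as a small state machine in which the Verified transition is gated on a passing verification run — mirroring the rule that closure requires evidence. The transition table below is an illustration, not Records' internal model:

```typescript
// Illustrative issue lifecycle. State names follow the page; the
// transition table and function signature are assumptions.
type IssueState = "Open" | "Acknowledged" | "InProgress" | "Verified" | "Closed";

const transitions: Record<IssueState, IssueState[]> = {
  Open: ["Acknowledged"],
  Acknowledged: ["InProgress"],
  InProgress: ["Verified"],
  Verified: ["Closed"],
  Closed: [],
};

function advance(
  state: IssueState,
  next: IssueState,
  verificationRunPassed = false
): IssueState {
  if (!transitions[state].includes(next)) {
    throw new Error(`illegal transition: ${state} -> ${next}`);
  }
  // Evidence gate: Verified is only reachable with a passing run.
  if (next === "Verified" && !verificationRunPassed) {
    throw new Error("cannot verify without a passing verification run");
  }
  return next;
}
```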

5 evidence report types

Run Report

Complete validation results for a single run — scores, issues, aspect breakdown.

Baseline Report

Snapshot of a known-good state. The reference point for all future comparisons.

Delta Report

What changed between two runs — new issues, resolved issues, score drift.

Release Report

Go/no-go evidence for a release gate. Aggregates runs, baselines, and deltas.

Dataset Quality Report

Cross-resource quality analysis across an entire FHIR dataset.

Every report carries a SHA-256 content hash for tamper detection. ID redaction replaces resource identifiers with deterministic hashes for safe external sharing with auditors and regulators.
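Both mechanisms reduce to standard SHA-256, available in Node's built-in crypto module. A minimal sketch, assuming a hypothetical salt and token format (Records' actual scheme may differ):

```typescript
import { createHash } from "node:crypto";

// Tamper detection: a SHA-256 digest over the canonical report bytes.
// Any change to the report changes the hash.
function contentHash(reportJson: string): string {
  return createHash("sha256").update(reportJson, "utf8").digest("hex");
}

// Deterministic redaction: the same resource id always maps to the same
// opaque token, so redacted reports remain comparable across runs.
// The "red-" prefix and salt are illustrative assumptions.
function redactId(resourceId: string, salt: string): string {
  return "red-" + createHash("sha256")
    .update(`${salt}:${resourceId}`, "utf8")
    .digest("hex")
    .slice(0, 16);
}
```

Determinism is what makes redaction audit-friendly: two redacted reports can still be diffed, because identical source ids yield identical tokens.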

See it on your own data.

We'll connect Records to your FHIR server and run a validation — live, in 30 minutes. No slides, no sandbox.