HEOR · Market Access · Payer Evidence

One protocol. Every payer's version of the evidence.

Burden of illness, treatment patterns, real-world outcomes — authored once, re-parameterised per geography, population and comparator. Results an HTA body can audit and re-run.

HEOR
Reproducible economic evidence, not one-off studies
MARKET ACCESS
Payer-specific slices without new SOWs
HTA SUBMISSIONS
Auditable methods the HTA body re-runs, not just reads
PAYER EVIDENCE
Living dossier, refreshed as the data moves
01 · The shift

Every payer wants the evidence cut a different way. Today each cut is a new project.

THE OLD CADENCE
  • Submission one: CRO scoped, SOW signed, analysis locked.
  • Submission two: different geography, different comparator, different vendor — start over.
  • HTA reviewer asks for a subgroup. Another 3–6 months.
  • Each dossier reads differently. Each defence reads differently. Credibility takes the hit.
WITH UNISON
  • Author one protocol against OMOP concepts.
  • Re-parameterise per geography, population or comparator — the query is the same shape.
  • Federate across connected biobanks and national datasets. Aggregate-only returns.
  • HTA body gets the methods as a replayable artefact. Subgroup asks become re-runs, not re-studies.
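The "author once, re-parameterise" idea can be sketched in plain Python. All names here (`Protocol`, the concept-set strings, the market identifiers) are illustrative assumptions, not Unison's actual API; the point is that the analytic spec is fixed once and only the parameters vary per submission:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Protocol:
    """A pre-specified analysis protocol. The analytic spec is frozen;
    only the fields below are swapped per geography or payer."""
    population: str    # cohort definition (e.g. an OMOP concept set)
    exposure: str      # index treatment
    comparator: str    # comparator cohort
    geography: str     # market / data-partner selector
    horizon_days: int  # follow-up window

# Authored once: the base protocol for the first submission.
base = Protocol(
    population="reimbursed_cohort",
    exposure="product_x",
    comparator="standard_of_care",
    geography="market_a",
    horizon_days=365,
)

# Re-parameterised per market: the query is the same shape,
# only the geography parameter changes.
submissions = [
    replace(base, geography=g)
    for g in ("market_a", "market_b", "market_c")
]
```

Because the protocol object is immutable, each submission variant provably shares the same methods; a reviewer comparing two dossiers is comparing parameters, not two independently written analyses.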
02 · What HEOR & Access can run

Five workflows, one query surface.

BURDEN OF ILLNESS
Quantify the problem the product solves
Prevalence, incidence, comorbidity patterns, HCRU, costs by setting. The evidence base that sits under every economic model.
OUTPUT
Epidemiology tables · HCRU · cost drivers by setting
TREATMENT PATTERNS
Real-world use vs label and vs guideline
Lines of therapy, switch rates, adherence, discontinuation, concomitant use. What payers actually pay for — not what the trial assumed.
OUTPUT
Line-of-therapy tables · persistence curves · switch patterns
REAL-WORLD OUTCOMES
Effectiveness and costs outside the trial
Pre-specified, matched comparisons. Time-to-discontinuation, hospitalisations, outcome proxies. The same protocol re-runs per payer population.
OUTPUT
KM curves · PS-matched comparisons · cost offsets
INDIRECT COMPARISONS
External controls & anchored comparisons
Build comparator cohorts from real-world data to anchor indirect comparisons where head-to-head trials do not exist. Pre-specified, defensible, reproducible.
OUTPUT
External-control cohort · matched comparison · methods artefact
HTA & REIMBURSEMENT
Country-by-country dossiers from one source of truth
Re-parameterise a single protocol for each HTA body. Keep the methods consistent; let the population and comparator vary. When a reviewer asks, the re-run is a click.
OUTPUT
Dossier-ready analysis · reviewer-auditable methods · replay artefact
03 · One protocol, many dossiers

Author once. Re-parameterise per country, payer and population.

01
Author
Pre-specified protocol against OMOP concepts. Population, exposure, outcome, time horizon, costs.
02
Parameterise
Swap geography, comparator, subgroup or time window without rewriting the analysis.
03
Federate
UQL fans out across connected biobanks and national datasets. Data never leaves the custodian.
04
Assemble
Results drop into dossier-ready tables and figures. Consistent across every submission.
05
Defend
Every number backed by a replayable UQL artefact. HTA reviewer re-runs, not just reads.
A DAY IN MARKET ACCESS

"What does real-world persistence look like in the reimbursed population?"

A payer evidence lead needs a persistence picture for three markets before the next submission cycle. One protocol, three parameterisations, federated in parallel.

Pre-specified
protocol locked before execution
Parameterised
3 countries · 2 subgroups
Federated
aggregate-only, custodian-controlled
Defensible
HTA-auditable UQL artefact
# unison · heor workspace
> "12m persistence · reimbursed cohort · 3 markets · vs SoC comparator."
→ Template: persistence · compiled to UQL
→ Federated across 3 datasets · aggregate-only
Market A · n = 18,420
12m persistence 58.2% (95% CI 57.5–58.9)
Market B · n = 11,905
12m persistence 52.7% (95% CI 51.8–53.6)
Market C · n = 7,216
12m persistence 61.4% (95% CI 60.2–62.5)
# replayable artefact · uql://query/hta-82c4
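The intervals in a readout like the one above are standard proportion arithmetic. A minimal sketch, assuming a normal-approximation (Wald) 95% interval on the observed persistence proportion — Market A's figures are used as the worked example:

```python
from math import sqrt

def persistence_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% CI for a persistence proportion.

    p_hat: observed 12-month persistence; n: cohort size.
    """
    se = sqrt(p_hat * (1.0 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Market A: 12m persistence 58.2% in n = 18,420
lo, hi = persistence_ci(0.582, 18_420)
print(f"{lo:.1%}–{hi:.1%}")  # → 57.5%–58.9%
```

The same function applied to Market B's figures (52.7%, n = 11,905) recovers 51.8–53.6%. In practice a persistence estimate would come from a Kaplan–Meier curve with censoring, but the re-run property is the same: identical arithmetic, different parameters per market.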
04 · Built for HTA scrutiny

Aligned to how HTA bodies and joint clinical assessments expect evidence to arrive.

Reproducible by construction
Every dossier number is backed by a replayable UQL artefact. A reviewer can re-run the analysis on the same data, the same day.
Consistent across submissions
Methods stay identical across geographies and populations. Only the parameters change, not the analytic spec.
Auditable at every step
Every mapping decision, every query execution, every result is logged and attributable — from the UQL artefact up.
Standards & fit: OMOP CDM-native · Cyber Essentials Plus · 21 CFR Part 11-ready · EHDS-aligned · EU JCA-ready · ISPOR good-practice aligned
05 · What changes for the function

From "each submission is a project" to "each submission is a parameter."

COST PER DOSSIER
new SOW → new parameter
One protocol, many submissions. Marginal cost of the next dossier falls.
REVIEWER DEFENSIBILITY
report → replayable artefact
Subgroup asks become re-runs, not new studies. Credibility compounds.
EVIDENCE SHELF-LIFE
snapshot → living dossier
Refresh when the data moves, not when the contract renews.
Scope a pilot with your HEOR team in 2 weeks

Make every submission a parameter, not a project.