
Validating Synthetic Physician Data: The AMA Prior Authorization Study

How Simsurveys' synthetic HCP data performed against the American Medical Association's Prior Authorization Survey — 17 questions, 1,000 physicians, and a KL divergence of 0.039 on care delay questions.

Research · March 3, 2026 · Myles Friedman · 7 min read

Prior Authorization: A Critical Pain Point

Prior authorization — the process by which insurers require physicians to obtain approval before delivering certain treatments — has become one of the most contentious issues in American healthcare. Physicians consistently report that prior authorization requirements delay necessary care, increase administrative burden, and sometimes force patients to abandon recommended treatments entirely. The American Medical Association has tracked these dynamics for years through its Prior Authorization Survey, one of the most widely cited data sources on the topic.

For Simsurveys, the AMA survey presented an important validation opportunity. Physician surveys are expensive and difficult to field. Response rates among practicing physicians have declined steadily, and the costs of reaching specialists who provide 20 or more hours of direct patient care per week are substantial. If a synthetic HCP model can reliably reproduce the patterns found in authoritative physician research, it opens the door to faster, more affordable insights on healthcare policy and practice management questions.

The Benchmark

We compared Simsurveys' synthetic physician responses against the AMA Prior Authorization Survey, which sampled 1,000 practicing physicians across the United States. Eligibility required that respondents provide at least 20 hours of direct patient care per week and personally complete prior authorization requests. The survey covered 17 questions organized around care delays, treatment abandonment, clinical outcomes, peer review processes, resource utilization, and payer-specific administrative burden.

We generated a matched synthetic sample using the Simsurveys Healthcare (HCP) model and computed KL divergence scores for each question to measure distributional alignment between real and simulated physician responses.
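To make the alignment metric concrete, here is a minimal sketch of a question-level KL divergence computation of the kind described above. The response distributions below are purely illustrative placeholders for a 5-point frequency scale, not the actual AMA or Simsurveys data, and the function name is our own.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) between two discrete response distributions.

    p: observed (real) answer-option proportions
    q: synthetic answer-option proportions
    Lower values indicate closer distributional alignment; 0 means identical.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                  # normalize to valid probability vectors
    q = q / q.sum()
    q = np.clip(q, eps, None)        # guard against log(0) for empty options
    mask = p > 0                     # terms with p=0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Illustrative 5-point distributions (e.g., "never" ... "always") — NOT survey data
real_sample      = [0.02, 0.05, 0.15, 0.35, 0.43]
synthetic_sample = [0.03, 0.06, 0.13, 0.33, 0.45]

print(f"KL divergence: {kl_divergence(real_sample, synthetic_sample):.3f}")
```

A divergence near zero, as in the made-up distributions here, corresponds to the "near-identical response distributions" discussed in the results that follow.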

Where It Worked Well

The synthetic model performed strongly on the questions most central to the prior authorization debate. On Q1, which asked physicians about the frequency of care delays attributable to prior authorization, the KL divergence was just 0.039 — indicating near-identical response distributions between the AMA sample and our synthetic respondents. Q2, covering treatment abandonment due to prior authorization barriers, achieved a similarly tight 0.040.

The model also excelled on process-oriented questions. Q5, which asked about the frequency of peer-to-peer review calls with insurers, scored 0.021 — one of the lowest divergences in the study. Q6, addressing the perceived qualifications of insurer peer reviewers, came in at 0.048. These results suggest the model has internalized the professional norms and frustrations that practicing physicians express around the mechanics of authorization processes.

Perhaps most notable was the performance on payer-specific burden questions. When physicians were asked to rate the administrative burden imposed by individual insurers, the synthetic model closely matched real-world sentiment. Anthem scored a KL divergence of just 0.031, Aetna 0.019, and Humana 0.092. These are the kinds of granular, brand-level questions where synthetic data is often expected to struggle, and the model held up well.

Strongest results: Care delay (KL=0.039), treatment abandonment (0.040), peer review frequency (0.021), Aetna burden (0.019), and Anthem burden (0.031). The model reliably captured the core dynamics of the prior authorization experience as reported by practicing physicians.

Where It Struggled

Not every question produced tight alignment. Q3, which asked physicians about their perception of clinical outcomes related to prior authorization, showed a KL divergence of 0.602 — the highest in the study. Q4, on denial trends over time, scored 0.337. Q7, on resource utilization patterns, came in at 0.266. These questions tend to be more subjective and involve longer-term professional judgment rather than concrete, observable events.

On the payer-specific side, UnitedHealthcare burden scored 0.270 and BCBS 0.172. These larger divergences may reflect the fact that physician sentiment toward specific insurers varies significantly by region, specialty, and practice type — variables that are difficult to fully capture in a general-purpose HCP model without additional segmentation.

The pattern is instructive: the model performs best on factual, behavioral, and process-oriented questions and shows more variance on subjective perception and trend-judgment items. This is consistent with what we see across other validation studies and reflects a known characteristic of synthetic survey data.

What This Means

The AMA validation demonstrates that synthetic physician data can reliably replicate the response distributions of real physicians on a majority of prior authorization questions — particularly those related to care delays, treatment abandonment, insurer interactions, and payer-specific burden. For research teams studying prior authorization policy, practice management, or payer relations, the Simsurveys HCP model offers a viable path to fast, scalable physician insights without the cost and timeline of traditional physician panels.

Where the model showed weakness — subjective clinical outcome perceptions and denial trend judgments — researchers should exercise caution and consider supplementing synthetic data with qualitative interviews or targeted traditional fielding. This is not a limitation unique to synthetic data; even traditional physician surveys struggle with the reliability of subjective trend questions.

The full validation report, including question-level distribution tables and KL divergence scores, is available for download on our validation studies page. To explore the Healthcare (HCP) model further, visit the model page or create a free account to run your own physician study.
