A Landmark Primary Care Survey
The Commonwealth Fund/Kaiser Family Foundation 2015 National Survey of Primary Care Providers is one of the most detailed studies of primary care practice in the United States. Fielded at a pivotal moment — during the early implementation years of the Affordable Care Act — it captured the attitudes, practice patterns, and economic realities of primary care physicians across the country. The survey covered everything from practice satisfaction and team collaboration to payer mix, patient volumes, and opinions on how the ACA was reshaping demand for care.
For Simsurveys, this study represents an ideal validation benchmark for our Healthcare (HCP) model. It tests whether synthetic physician respondents can replicate real physicians' responses across a wide range of practice-related topics — not just clinical knowledge, but the operational and attitudinal dimensions that matter most in healthcare market research.
Study Design
The original survey included n=1,624 primary care physicians (MDs only), and our simulation matched that sample size exactly. We generated 1,624 synthetic physician respondents and compared their responses across more than 45 questions covering practice satisfaction, patient volumes, payer mix, ACA impact, care quality, and payment models.
A critical detail: no demographic targeting was applied in the simulation. The alignment between synthetic and live data reflects the emergent structure in the model — the Healthcare model's understanding of how primary care physicians think, practice, and respond to surveys, without being told the specific demographic composition of the sample.
Results: Exceptional on Practice Patterns and Attitudes
The strongest results came from the questions most central to primary care practice. Practice satisfaction overall achieved a KL Divergence of just 0.006 — effectively indistinguishable from the live data. Team collaboration scored 0.008. Accepting new patients: 0.007. The trend in chronically ill patients: 0.009.
The model's performance on ACA-related questions was particularly striking. ACA opinion achieved a KL Divergence of 0.000 — a perfect distributional match at the reported precision. ACA impact on practice demand: 0.003. These results suggest that the Healthcare model has internalized a remarkably accurate picture of how primary care physicians viewed the ACA's effects on their practices.
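To make the metric concrete: KL Divergence here compares the distribution of live responses to the distribution of synthetic responses for a given question, and values near zero mean the two are nearly identical. The sketch below uses the standard discrete relative entropy with the natural log and entirely made-up response distributions (not the survey's actual data) to show how a near-identical pair lands in the 0.00x range reported above.

```python
import math

def kl_divergence(p, q):
    """D(P || Q) for two discrete distributions over the same
    answer options, using the natural log. Assumes every option
    with live probability > 0 also has synthetic probability > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical 5-point practice-satisfaction distributions --
# illustrative only, NOT the survey's actual response data:
live      = [0.30, 0.40, 0.15, 0.10, 0.05]
synthetic = [0.29, 0.41, 0.16, 0.09, 0.05]

print(round(kl_divergence(live, synthetic), 3))  # a value in the 0.00x range
```

Note that KL Divergence is asymmetric (D(P‖Q) ≠ D(Q‖P)); which direction a validation report uses is a methodological choice, and the base of the logarithm scales the values, so the numbers above are comparable only within a single report's convention.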
Other strong results included Medicaid acceptance (0.004), Medicaid payment bump awareness (0.006), and practice location (0.004). The model accurately reproduced the distribution of urban, suburban, and rural practice settings without any geographic targeting.
Where the Model Showed Limitations
The model performed well but not perfectly on payer mix questions. Medicare payer mix achieved a KL Divergence of 0.101, and elderly patients scored 0.140 — both in the "Good" range but not at the exceptional level seen on practice pattern questions.
The weaker results appeared on questions about rare or edge-case populations. Uninsured payer mix showed a KL Divergence of 0.203, and undocumented immigrants scored 0.116. These are topics where precise percentage estimates vary significantly by practice setting, geography, and local policy — and where a model without specific demographic anchoring would be expected to show more variance.
The pattern is clear and consistent: the Healthcare model excels on practice patterns, professional attitudes, care quality perceptions, and ACA-related opinions. It is weaker on precise numerical estimates of payer mix for rare or highly variable populations. This is a sensible capability boundary — the former reflects stable professional norms and widely shared physician experiences, while the latter depends on hyper-local factors that vary dramatically across practices.
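For contrast with the 0.00x values on attitude questions, the same calculation (again with hypothetical, binned distributions — an assumed answer format, not the survey's actual data) shows how a visibly shifted distribution lands in the 0.1–0.2 range that the payer-mix questions occupied:

```python
import math

def kl_divergence(p, q):
    """D(P || Q) over matching answer bins, natural log.
    Assumes no bin has live probability > 0 with synthetic probability 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical binned answers to a "% of patients uninsured" question
# (e.g. bins like 0-5%, 6-10%, ...); numbers are invented to illustrate
# a visible mismatch, NOT taken from the survey:
live      = [0.05, 0.15, 0.30, 0.30, 0.20]
synthetic = [0.15, 0.30, 0.30, 0.15, 0.10]

print(round(kl_divergence(live, synthetic), 3))  # roughly 0.19
```

Shifts of a few bins in a skewed distribution are enough to produce this order of magnitude, which is consistent with the interpretation above: hyper-local variation in rare-population payer mix moves probability mass between bins in ways a model without demographic anchoring cannot fully anticipate.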
Emergent Demographic Alignment
One of the most notable findings from this validation is the demographic alignment achieved without targeting. The model generated a physician sample whose practice locations, patient volumes, and specialty mix closely matched the original survey — purely from the model's internal representation of what a primary care physician population looks like. This emergent alignment is a strong signal that the Healthcare model has captured the structural characteristics of the US primary care workforce, not just surface-level survey response patterns.
Implications for HCP Research
These results have direct implications for healthcare market research teams. For studies focused on physician attitudes, practice patterns, care quality perceptions, and policy opinions, the Simsurveys Healthcare model delivers results that are statistically close to live physician panels. For studies requiring precise payer mix estimates or rare-population prevalence data, the model should be used as a directional tool with the understanding that local variation may not be fully captured.
The full validation report, including question-level distribution tables and metric summaries, is available for download. For more on the Healthcare model, visit the Healthcare model page.