
It’s Time to Be Honest About the Limitations of Nonprobability Survey Panels

Expert View

Author

David Dutwin

Executive Director and Senior Vice President, AmeriSpeak

October 2025

Accurate subgroup data is not a technical detail—it’s essential for meaningful, actionable survey findings.

In recent years, the widespread adoption of nonprobability sampling methods has prompted both innovation and concern within the survey research community. While these approaches often produce estimates that align closely with benchmarks at the national level, their performance becomes less stable when examined across specific demographic subgroups.

This issue has come into sharper focus with emerging critiques of how certain subgroups, particularly Hispanic respondents and adults under 30, are represented in nonprobability panels. A Pew Research Center evaluation of nine opt-in online panels found particularly large biases in estimates for Hispanic adults, Black adults, and young adults (18–29), with errors in some groups reaching as high as 15 percentage points.

These biases are not just academic. They have real-world consequences whenever the affected populations are central to the research question or critical to the insights the data are meant to deliver.

“These biases are not just academic. They have real-world consequences.”

Executive Director and Senior Vice President, AmeriSpeak


These widespread biases in nonprobability samples underscore a fundamental truth in survey science: representativeness must extend beyond the aggregate. A dataset that aligns with national distributions but misrepresents key cohorts risks producing misleading or incomplete insights.

Recent methodological analyses show that nonprobability panels tend to struggle to adequately represent certain subgroups, even when adjusted with techniques like weighting or machine learning. One study from Cornell University suggests that these methods frequently fall short in addressing selection bias, particularly when it comes to minority groups and younger individuals, who are often underrepresented in such samples.
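The gap between topline accuracy and subgroup accuracy can be made concrete with a small back-of-the-envelope sketch. All numbers below are invented for illustration; the point is structural: post-stratification weighting restores a sample's demographic mix, but it cannot correct for who selects into a panel within each group.

```python
# Illustrative (invented) numbers: post-stratification weighting can
# match a national benchmark while leaving a large subgroup bias.

# True population: 20% young adults (outcome rate 0.50),
# 80% older adults (outcome rate 0.30).
POP_SHARE = {"young": 0.20, "older": 0.80}
TRUE_RATE = {"young": 0.50, "older": 0.30}
true_topline = sum(POP_SHARE[g] * TRUE_RATE[g] for g in POP_SHARE)

# Opt-in sample: within-group selection skews who joins the panel,
# so the observed subgroup rates differ from the population's.
SAMPLE_RATE = {"young": 0.70, "older": 0.25}

# Weight the sample back to the true age composition. Weighting fixes
# the *mix* of respondents, not *who* was selected within each group.
weighted_topline = sum(POP_SHARE[g] * SAMPLE_RATE[g] for g in POP_SHARE)

topline_err = abs(weighted_topline - true_topline) * 100
young_err = abs(SAMPLE_RATE["young"] - TRUE_RATE["young"]) * 100
print(f"topline error:     {topline_err:.0f} pp")
print(f"young-adult error: {young_err:.0f} pp")
```

With these numbers the weighted topline lands exactly on the true national value, yet the young-adult estimate is off by 20 percentage points, the same pattern the Pew evaluation documents in real opt-in panels.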

If we are to take the responsibility of empirical research seriously, given its power to inform policy and shape public understanding, then representativeness cannot stop at the topline. Every crosstab, every demographic subgroup, must bear the weight of that promise. Anything less risks introducing blind spots into the data, gaps that limit what researchers, decision-makers, and institutions can reliably conclude. Worse, it erodes trust in our results and in our industry writ large.

This is not a technical detail—it is a defining challenge for our field. Meeting it requires sampling designs that are attentive to population coverage from the outset, methods that support valid inferences across subgroups, and an unwavering commitment to scientific rigor. It also demands infrastructure built not just to produce data quickly, but to produce it reliably and responsibly.

Panels like AmeriSpeak®, grounded in probability-based sampling and designed to accurately reflect the full structure of the U.S. population, are one way to meet that challenge. But the larger imperative is clear: we must all push toward methods that earn our confidence not just in the aggregate, but in every corner and every cohort our work seeks to represent.

As researchers, we must continue to examine not only how well our data reflect the general population, but also how faithfully they capture the experiences and perspectives within it. Robust subgroup representation is not a secondary concern. It is central to the validity and utility of the conclusions we draw.

“As researchers, we must continue to examine not only how well our data reflect the general population, but also how faithfully they capture the experiences and perspectives within it.”

Executive Director and Senior Vice President, AmeriSpeak


There is reason for optimism. As our field continues to reckon with the limitations of nonprobability methods, researchers are increasingly recognizing the importance of building designs that support both national and subgroup-level validity. Investments in rigorous probability-based approaches, like those underpinning AmeriSpeak, offer a path forward.

By maintaining a commitment to scientific standards while adapting to new challenges in recruitment and engagement, we can continue to generate insights that are both accurate and empirically sound. With care, transparency, and methodological integrity, we can meet the moment and ensure that every corner and every cohort is represented in the evidence we use to understand our world.

Let’s raise the standard together. Explore how probability-based panels like AmeriSpeak help us build data that truly represents the full spectrum of the U.S. population.



Suggested Citation

Dutwin, D. (2025, October 7). It’s Time to Be Honest About the Limitations of Nonprobability Survey Panels. [Web blog post]. NORC at the University of Chicago. Retrieved from www.norc.org.

