A voluntary sample (also called a voluntary response sample) is a type of survey sample where participants choose to take part on their own rather than being selected by researchers. People decide for themselves whether to respond, which means the sample is shaped by who shows up, not by any random selection process. This makes it one of the most common, and most problematic, methods of collecting data.
How Voluntary Sampling Works
In a voluntary response sample, a survey or poll is made available to a broad group of people, and anyone who wants to participate simply opts in. There’s no mechanism ensuring that every person in the population has a known chance of being selected. Instead, individuals self-select based on their own interest, availability, or motivation. Online polls, call-in surveys, product review platforms, and questionnaire links shared on social media all rely on this approach.
The concept is straightforward: put the opportunity out there and see who responds. A classic example comes from advice columnist Ann Landers, who once asked her readers, “If you had it to do over again, would you have children?” The responses flooded in, but only from readers who felt strongly enough to write: roughly 70% said no, while a later poll using a random sample found that over 90% of parents would have children again. Similarly, author Shere Hite sent questionnaires to 100,000 women for her book Women and Love, but the findings reflected only the small subset who chose to fill them out. ESPN’s online polls work the same way: anyone browsing the site can vote, but the results capture the opinions of sports fans who happened to visit that page, not the general public.
Why Voluntary Samples Are Biased
The central problem with voluntary samples is self-selection bias. People who volunteer for a survey tend to be systematically different from people who don’t, and those differences skew the results. Participants in voluntary samples usually choose to respond because they have strong opinions about the subject. Someone who is furious about a product is far more likely to leave a review than someone who found it perfectly fine. Someone passionate about a political issue is more likely to call in to a radio poll than someone who feels indifferent.
This bias runs deeper than just strong opinions. Research on volunteer behavior has found that personality traits play a measurable role in who opts in. In studies involving physically or psychologically demanding procedures, volunteers scored significantly higher on sensation-seeking traits than non-volunteers. Males were also more likely to volunteer than females, though much of that gender difference was explained by differences in sensation-seeking itself. When sensation-seeking was included in statistical models, the effect of gender on volunteering dropped by 81%.
The result is a sample that overrepresents certain types of people and underrepresents others. If the trait that drives someone to volunteer also relates to the thing being studied, the findings can be seriously distorted. This can lead to overestimating or underestimating what’s true for the broader population, and in some cases, it produces entirely false conclusions by making real differences invisible or creating apparent differences that don’t actually exist.
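To see how quickly this distorts an estimate, consider a minimal simulation sketch (the satisfaction and response rates below are invented for illustration). The population is 80% satisfied, but dissatisfied people are assumed to respond ten times as often, and the voluntary estimate lands nowhere near the truth:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: 80% satisfied with a product (1), 20% not (0).
population = rng.choice([1, 0], size=100_000, p=[0.80, 0.20])

# Self-selection: assume the dissatisfied respond ten times as often.
# Both response rates are illustrative, not empirical figures.
response_prob = np.where(population == 1, 0.02, 0.20)
responded = rng.random(100_000) < response_prob

voluntary_sample = population[responded]
print(f"True satisfaction rate:    {population.mean():.1%}")        # ~80%
print(f"Voluntary-sample estimate: {voluntary_sample.mean():.1%}")  # ~29%
```

Notice that nothing in the responses themselves signals the skew: a real pollster sees only the 29%, never the 80%.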
Voluntary vs. Random Sampling
The gold standard in survey research is probability sampling (often called random sampling), where every person in the target population has a known, positive chance of being selected. Researchers control who gets invited, and that randomness is what allows them to generalize findings to the larger population with measurable precision. Even when response rates are low, probability samples have consistently produced estimates with relatively small biases.
Voluntary sampling is a form of nonprobability sampling, meaning selection probabilities aren’t known or controlled. The participant chooses to be in the sample rather than being chosen by a random mechanism. This distinction matters enormously for the reliability of the results. With probability sampling, researchers can calculate margins of error and confidence intervals. With voluntary sampling, those calculations don’t hold up because the sample doesn’t represent the population in any statistically guaranteed way.
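The arithmetic makes the contrast concrete. The standard 95% margin of error for a proportion from a simple random sample is z·√(p(1−p)/n), sketched below. The formula is justified only when respondents were selected by a random mechanism; applied to an opt-in poll, it produces a number with no statistical meaning.

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample.

    Valid only when respondents were chosen by a known random mechanism;
    the formula has no statistical basis for an opt-in poll.
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A simple random sample of 1,000 respondents, 52% answering "yes":
p_hat, n = 0.52, 1000
print(f"{p_hat:.0%} ± {margin_of_error(p_hat, n):.1%}")  # 52% ± 3.1%
```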
The tradeoff is practical. Voluntary samples are cheap and fast. Posting a survey link costs almost nothing compared to designing a probability-based study with proper sampling frames, contact protocols, and follow-up procedures. That’s why voluntary samples are everywhere online. But as a federal research brief from the Office of Planning, Research, and Evaluation notes, these nonprobability samples raise significant risks associated with data quality, and many studies find them to be less accurate than their probability-based counterparts.
How Bias Limits What You Can Conclude
The biggest casualty of voluntary sampling is external validity, which is the ability to apply findings beyond the people who actually responded. When certain groups are overrepresented and others are missing entirely, the results describe the sample but not the population. Offering a survey in only one format (online only, for example) can exclude entire demographic groups. About 5% of U.S. adults don’t use the internet at all, and 20% lack broadband access at home, according to Pew Research. Their chance of appearing in an online voluntary poll is zero.
This isn’t just a theoretical concern. Research has shown that providing a survey in a single format results in lower participation rates among certain sociodemographic groups, compounding the self-selection problem. The people who respond end up being younger, more educated, more digitally connected, or more opinionated than the population you’re trying to understand. And because the selection process is essentially a black box, it’s difficult to even identify all the sources of bias, let alone correct for them.
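The size of the resulting coverage bias can be worked out directly. Using the Pew figure above and purely illustrative support rates for online and offline adults, the population value is a coverage-weighted average that an online-only poll never sees:

```python
# Coverage-bias sketch. The 5% offline share is the Pew figure cited
# above; the two support rates are invented for illustration.
p_offline = 0.05
support_online, support_offline = 0.60, 0.30

true_support = (1 - p_offline) * support_online + p_offline * support_offline
bias = support_online - true_support
print(f"Population support: {true_support:.1%}")  # 58.5%
print(f"Online-only bias:   {bias:+.1%}")         # +1.5%
```

A 1.5-point error may sound small, but it grows with both the excluded share and the gap between the groups, and an online-only pollster has no data on either.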
Can You Fix a Voluntary Sample?
Researchers sometimes try to compensate for voluntary response bias using statistical weighting. The idea is to rebalance the composition of the sample after the fact, giving more weight to underrepresented groups and less weight to overrepresented ones. This requires auxiliary data: information about the full population that lets you identify where the sample falls short. Large panel surveys like the Health and Retirement Study produce separate weights adjusting for nonresponse, which analysts can incorporate into their work.
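As a sketch of how this works, here is a minimal post-stratification example (the age groups, sample composition, and population shares are all invented): each respondent is weighted by the ratio of their group’s population share to its share of the sample.

```python
import numpy as np
import pandas as pd

# Hypothetical opt-in sample skewed toward young respondents, with
# support rates of 60% / 40% / 20% by age group. All numbers invented.
sample = pd.DataFrame({
    "age_group": ["18-34"] * 600 + ["35-64"] * 300 + ["65+"] * 100,
    "supports":  np.r_[np.ones(360), np.zeros(240),
                       np.ones(120), np.zeros(180),
                       np.ones(20),  np.zeros(80)],
})
# Population shares assumed known from an external (auxiliary) source.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# Post-stratification weight = population share / sample share, per group.
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(
    lambda g: population_share[g] / sample_share[g])

raw = sample["supports"].mean()
weighted = np.average(sample["supports"], weights=sample["weight"])
print(f"Raw opt-in estimate: {raw:.1%}")       # 50.0%
print(f"Weighted estimate:   {weighted:.1%}")  # 42.0%
```

The weights repair the age imbalance, but they do nothing about self-selection within each age group, which is where the approach runs into trouble.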
Weighting can help, but it has limits. The adjustments produce roughly unbiased estimates only when their underlying assumptions about who’s missing and why are correct. In voluntary and opt-in samples, those assumptions are harder to verify and more likely to be wrong than in probability samples. If the reason people didn’t respond is related to the very thing the survey measures (and it often is), no amount of reweighting fully solves the problem. The adjustments improve accuracy in some cases but can’t transform a fundamentally flawed sample into a representative one.
When Voluntary Samples Still Get Used
Despite their limitations, voluntary samples aren’t going away. They’re useful for exploratory research, where the goal is to generate ideas or identify patterns rather than make precise population estimates. They work for gathering feedback in low-stakes contexts, like a restaurant asking diners to rate their experience. They can also serve as a starting point before investing in more rigorous methods.
The key is recognizing what a voluntary sample can and can’t tell you. It can reveal that some people feel a certain way. It cannot tell you how many people feel that way, or whether that feeling is common or rare in the broader population. Whenever you see poll results from a website, a call-in show, or an open survey link, you’re looking at a voluntary sample. The numbers may look precise, but they reflect the people who chose to show up, not the people who didn’t.