How to Select Participants for a Research Study

Selecting participants for research involves defining who qualifies, deciding how many you need, and choosing a method to recruit them that minimizes bias. Every choice you make at this stage shapes whether your findings will be credible and applicable beyond your study. Here’s how to approach each step.

Define Your Target Population First

Before you think about recruitment, get clear on exactly who you’re studying. Your target population is the full group of people your research question applies to. If you’re studying how sleep affects memory in college students, your target population is college students, not the general public. Every decision that follows flows from this definition.

From your target population, you’ll draw a sample: a smaller group that actually participates. The goal is for that sample to represent the larger population well enough that your results are meaningful beyond the people in your study.

Set Inclusion and Exclusion Criteria

Inclusion criteria describe the key features someone must have to participate. These should connect directly to your research question. If you’re studying a treatment for knee osteoarthritis, your inclusion criteria might specify adults over 50 with a confirmed diagnosis and a certain level of pain severity. If the characteristic doesn’t help answer your research question, it probably doesn’t belong in your inclusion list.

Exclusion criteria identify people who technically meet your inclusion criteria but carry additional characteristics that could compromise the study. Common reasons for exclusion include conditions that would make someone likely to drop out, comorbidities that could confound results, or factors that increase risk of harm during an intervention. For example, you might exclude patients taking a medication known to interact with your study drug.

A few common mistakes to avoid: don’t use the same variable for both inclusion and exclusion (listing “men only” as inclusion and “being female” as exclusion is redundant), and don’t add criteria unrelated to your research question. Every criterion you add narrows your pool and potentially limits how broadly your results apply. If you exclude patients with other health conditions, your findings may not generalize to the many real-world patients who have them.
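Eligibility rules like these are easiest to audit when written down as explicit checks. Here is a minimal sketch for the knee-osteoarthritis example above; the specific thresholds, the 0–10 pain scale, and the interacting drug are illustrative assumptions, not part of any real protocol.

```python
# Hypothetical eligibility screen for a knee-osteoarthritis trial.
# Thresholds and the interacting drug are assumed for illustration.

INTERACTING_DRUGS = {"warfarin"}  # assumed exclusion example

def is_eligible(age, has_oa_diagnosis, pain_score, medications):
    """Return (eligible, reason). Pain score assumed on a 0-10 scale."""
    # Inclusion: each criterion maps directly to the research question.
    if age <= 50:
        return False, "inclusion: must be over 50"
    if not has_oa_diagnosis:
        return False, "inclusion: confirmed diagnosis required"
    if pain_score < 4:
        return False, "inclusion: pain severity below threshold"
    # Exclusion: meets inclusion but carries a disqualifying risk.
    if INTERACTING_DRUGS & {m.lower() for m in medications}:
        return False, "exclusion: interacting medication"
    return True, "eligible"

print(is_eligible(62, True, 6, ["metformin"]))  # (True, 'eligible')
print(is_eligible(62, True, 6, ["warfarin"]))   # excluded
```

Note that the exclusion check runs only after all inclusion checks pass, mirroring the logic in the text: exclusions apply to people who would otherwise qualify.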

Choose a Sampling Method

Your sampling method determines how you pick individuals from your target population. The two broad categories are probability sampling (where everyone has a known chance of being selected) and non-probability sampling (where they don’t).

Probability Sampling

Simple random sampling works when you have a complete list of everyone in your target population, called a sampling frame. You select participants at random from that list, either through a lottery system or a computer-generated random sequence. This gives every person an equal chance of being chosen, which is the gold standard for representativeness.

Stratified random sampling also requires a complete sampling frame but adds a layer of precision. You divide the population into subgroups (strata) based on characteristics like age, sex, or disease severity, then randomly sample from each subgroup separately. This is especially useful when you want to ensure underrepresented groups appear in your sample in adequate numbers. With simple random sampling alone, small subgroups tend to stay underrepresented.

Cluster sampling is the practical choice when your population is too large or spread out to list every individual. You divide the population into clusters, often by geographic area, randomly select some clusters, then randomly select individuals within those chosen clusters. This two-stage process makes large-scale studies feasible, such as surveying school children across an entire country, where listing every student would be impractical.
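The three probability methods can be sketched in a few lines using Python's standard library. The population, strata labels, and cluster sizes below are invented for illustration; a fixed seed makes the sketch reproducible.

```python
import random

# Toy sampling frame: 1000 people, 75% in stratum "A", 25% in "B".
population = [{"id": i, "stratum": "A" if i % 4 else "B"} for i in range(1000)]
rng = random.Random(42)  # fixed seed for reproducibility

# Simple random sampling: every person on the frame has an equal chance.
simple = rng.sample(population, k=50)

# Stratified random sampling: sample each subgroup separately, so the
# smaller stratum ("B") is guaranteed its share of slots.
strata = {}
for person in population:
    strata.setdefault(person["stratum"], []).append(person)
stratified = [p for group in strata.values() for p in rng.sample(group, k=25)]

# Cluster sampling: randomly choose clusters (e.g. schools), then
# randomly choose individuals within the chosen clusters.
clusters = [population[i:i + 100] for i in range(0, 1000, 100)]
chosen_clusters = rng.sample(clusters, k=3)
clustered = [p for c in chosen_clusters for p in rng.sample(c, k=10)]

print(len(simple), len(stratified), len(clustered))  # 50 50 30
```

The stratified draw guarantees 25 participants from stratum "B"; a simple random draw of 50 from this frame would yield only about 12 or 13 of them on average.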

Non-Probability Sampling

In many real-world situations, you won’t have a neat list of your entire population. Convenience sampling draws from whoever is available and willing. It’s fast and inexpensive but introduces bias because the people easiest to reach may differ systematically from those who aren’t. Purposive sampling deliberately selects people who meet specific characteristics, which is common in qualitative research where depth matters more than representativeness. Snowball sampling asks enrolled participants to refer others, which is particularly useful for hard-to-reach populations like people with rare diseases or stigmatized conditions.

Non-probability methods are widely used and sometimes the only option, but they require you to be honest in your paper about the limitations they introduce.

Calculate Your Sample Size

Recruiting too few participants means your study may not detect a real effect. Recruiting too many wastes resources and exposes more people to research procedures than necessary. A power analysis helps you find the right number, and it requires four inputs.

The first is your significance level (alpha), typically set at 0.05, meaning you accept a 5% chance of concluding a difference exists when it doesn’t. The second is statistical power, usually set at 80% or higher, representing the probability that you’ll detect a real effect if one exists. At 80% power, you accept that one in five times you might miss a genuine difference.

The third input is effect size, the smallest difference between groups that you consider meaningful. You can estimate this from pilot studies, previously published research, or an educated guess based on clinical experience. Effect size has a huge influence on sample size: the smaller the difference you’re trying to detect, the more participants you need. In fact, sample size is inversely proportional to the square of the effect size, so even a small change in your expected difference dramatically shifts the number you need. The fourth input is population variance, estimated through the standard deviation of your outcome measure.
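The standard normal-approximation formula ties these four inputs together: n per group = 2((z₁₋α/₂ + z₁₋β)·σ/δ)². A short sketch, with the 5-point difference and standard deviation of 10 chosen purely for illustration (an exact t-test calculation runs slightly higher):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(alpha, power, delta, sd):
    """Approximate n per group for comparing two means
    (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Detecting a 5-point difference with SD of 10 (standardized effect 0.5):
print(n_per_group(alpha=0.05, power=0.80, delta=5, sd=10))    # 63 per group
# Halving the detectable difference roughly quadruples the requirement:
print(n_per_group(alpha=0.05, power=0.80, delta=2.5, sd=10))  # 252 per group
```

The second call shows the inverse-square relationship in action: halving the effect size you want to detect quadruples the sample you need.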

Free software tools and online calculators can run these calculations once you have your four inputs. If you’re unsure about your estimates, a statistician can help you model different scenarios.

Plan Your Recruitment Strategy

Knowing who you want and how many you need is only useful if you can actually reach them. Recruitment is often the most time-consuming part of a study, and the method you choose affects both cost and the type of participant you attract.

A large observational study published in the Journal of Medical Internet Research tracked the cost of different recruitment methods. Recontacting people from a previous survey was by far the cheapest approach at roughly $0.46 per participant, though it only works if you have an existing database. Social media advertising through platforms like Meta cost about $18.39 per recruit; only about 0.05% of people who saw the ad enrolled, but the channel contributed 31% of total participants, making it the single largest source. Television advertising was the most expensive at nearly $42 per participant.

For smaller studies, posting flyers in clinics, partnering with patient advocacy groups, or working with clinical registries can be effective. The key is matching your channel to where your target population actually is. If you’re studying older adults with a chronic condition, a partnership with specialty clinics will outperform Instagram ads.

Build Diversity Into Your Design

Research findings are only useful to the extent they apply to the people who need them. Historically, many studies enrolled mostly white men, producing results that didn’t translate well to women or minority populations. Federal policy now requires that all NIH-funded clinical research include women and members of minority groups unless there is a clear scientific justification for exclusion. Cost is explicitly not an acceptable reason to exclude these groups.

Even if your study isn’t federally funded, building demographic diversity into your sample strengthens your work. Your research plan should describe the expected composition of your study population by sex, gender, and racial or ethnic group, and explain why that composition makes sense for your research question. Stratified sampling is one practical tool for ensuring adequate representation across subgroups.

Screen Participants Carefully

Once potential participants express interest, you need a systematic way to verify they meet your criteria before enrolling them. A screening questionnaire is the standard approach. It should cover every inclusion and exclusion criterion without being so long that it drives people away.

Design your screening tool with clear, simple language, and test it with a few members of your target population before using it widely. If your questionnaire has multiple sections, use filtering questions to guide respondents past sections that don’t apply to them. For example: “Have you been diagnosed with diabetes? If no, skip to question 12.” But don’t overdo filtering, as too many branching paths confuse people and increase errors.
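Skip logic of this kind is simple to prototype before building the real questionnaire. A minimal sketch, where each question can name the next question to jump to based on the answer (the question numbers and wording are made up):

```python
# Minimal skip-logic sketch mirroring "If no, skip to question 12."
QUESTIONS = {
    1: {"text": "Have you been diagnosed with diabetes?",
        "skip": {"no": 12}},  # filtering question
    2: {"text": "What type of diabetes?"},
    # ... questions 3-11 would cover the diabetes section ...
    12: {"text": "Do you smoke?"},
}

def next_question(current, answer):
    """Return the next question id given the current one and its answer."""
    q = QUESTIONS.get(current, {})
    return q.get("skip", {}).get(answer.strip().lower(), current + 1)

print(next_question(1, "No"))   # 12 (skips the diabetes section)
print(next_question(1, "Yes"))  # 2
```

Prototyping the branching paths this way also makes it easy to count them, which helps you notice when the filtering has become complex enough to confuse respondents.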

Address Compensation Without Creating Pressure

Paying participants for their time is standard and ethical, but the amount matters. International ethical guidelines state that compensation should cover direct costs like travel and reimburse participants for their time and inconvenience, using the local minimum hourly wage as a reference point.

Compensation should not be tied to the level of risk involved. Its purpose is to acknowledge the participant’s time, not to make a risky study feel worth it financially. If the payment is so large that people consent against their better judgment, that crosses the line into what ethics boards call “undue inducement.” Your institution’s research ethics committee will evaluate whether the compensation you propose is appropriate for the cultural and economic context of your study population. Non-monetary compensation, such as gift cards, transportation vouchers, or health screenings, can also be appropriate.

Minimize Selection Bias

Even with a solid plan, bias can creep into participant selection. Volunteer bias occurs because people who sign up for research tend to differ from those who don’t. They may be healthier, more educated, or more motivated, which skews your results. You can’t eliminate this entirely, but you can reduce it by using rigorous eligibility criteria and ensuring all participants come from the same general population.

Non-response bias is a related problem: if the people who decline participation differ systematically from those who accept, your sample won’t reflect your target population. Tracking basic demographics of non-responders, when possible, lets you assess whether this is distorting your results.

Attrition bias emerges when participants drop out mid-study in non-random patterns. Plan for this before you start. Offering flexible scheduling, maintaining regular contact through phone or email, and making study visits as convenient as possible all help retain participants. Some researchers budget for a higher initial enrollment to account for expected dropout. Prospective study designs, where the outcome is unknown at the time of enrollment, are inherently less prone to selection bias than retrospective approaches, because neither the researcher nor the participant can be influenced by knowledge of the result.
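Budgeting for dropout is a one-line calculation: inflate your target so that the required number remains after the expected attrition. The 15% dropout rate below is an assumed figure for illustration.

```python
from math import ceil

def enrollment_target(n_needed, expected_dropout):
    """Enrollment needed so n_needed participants remain after dropout.
    expected_dropout is a fraction, e.g. 0.15 for 15%."""
    return ceil(n_needed / (1 - expected_dropout))

# Needing 63 completers per group with 15% expected attrition (assumed):
print(enrollment_target(63, 0.15))  # 75
```

Dividing by the retention rate, rather than multiplying by the dropout rate, is the correct direction: enrolling 63 × 1.15 ≈ 73 would still leave you short after 15% attrition.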