What Is Response Rate in Research: How It’s Calculated

Response rate is the percentage of people who complete a survey or questionnaire out of the total number who were asked to participate. If you send a survey to 1,000 people and 300 return usable answers, your response rate is 30%. It’s one of the most commonly reported metrics in survey research because it signals how much of your target population actually contributed data.

How Response Rate Is Calculated

The basic formula is straightforward: divide the number of completed responses by the number of eligible participants in your sample, then multiply by 100 to get a percentage. In practice, though, researchers handle the denominator differently depending on the survey. The U.S. Bureau of Labor Statistics uses several variations across its programs. Some divide responding units by all eligible units. Others add cases of “unknown eligibility” into the denominator, which produces a more conservative (lower) rate.
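As a minimal sketch of that basic calculation in Python (the function name and signature are mine, not a standard API):

```python
def response_rate(completed: int, eligible: int) -> float:
    """Completed responses divided by eligible sample units, as a percentage."""
    if eligible <= 0:
        raise ValueError("eligible must be a positive count")
    return 100 * completed / eligible

# The example from the introduction: 300 usable responses from 1,000 invitees.
print(response_rate(300, 1_000))  # 30.0
```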

The tricky part is defining who counts as “eligible.” Some people on a mailing list may have moved, died, or never actually qualified for the study. The U.S. Census Bureau defines unknown-eligibility cases as sample units where there’s no information about whether the person is even part of the target population. When a survey has many of these cases, researchers estimate what proportion are likely eligible and adjust the formula accordingly. A survey that simply divides completions by the total number of invitations sent will almost always report a lower response rate than one that removes known ineligible cases from the denominator first.
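One way to fold unknown-eligibility cases into the denominator is to weight them by the estimated share that would actually have qualified, loosely modeled on AAPOR-style response rate formulas. The counts and the 50% eligibility estimate below are illustrative assumptions, not figures from any cited survey:

```python
def adjusted_response_rate(completed: int, known_eligible: int,
                           unknown: int, est_eligible_share: float) -> float:
    """Response rate with unknown-eligibility cases in the denominator,
    weighted by the estimated proportion of them that are truly eligible."""
    if not 0 <= est_eligible_share <= 1:
        raise ValueError("est_eligible_share must be between 0 and 1")
    denominator = known_eligible + est_eligible_share * unknown
    return 100 * completed / denominator

# 300 completions; 800 known-eligible cases; 200 cases of unknown
# eligibility, of which we estimate half would have qualified.
# Denominator: 800 + 0.5 * 200 = 900, so the rate is 33.3%.
print(adjusted_response_rate(300, 800, 200, 0.5))
```

Setting `est_eligible_share` to 1.0 treats every unknown case as eligible (the most conservative, lowest rate); setting it to 0.0 drops them entirely, which is why two studies with identical counts can report different percentages.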

This distinction matters when you’re comparing response rates across studies. Two researchers can survey the same population, get the same number of responses, and report different response rates depending on how they handle ineligible and unreachable participants.

Typical Response Rates by Survey Mode

Response rates vary dramatically depending on how a survey reaches people. Data from the HCAHPS hospital patient survey, covering patients discharged between July 2022 and June 2023, illustrates the pattern clearly:

  • Mail only: 22% average, with top-performing hospitals reaching about 32%
  • Telephone only: 27% average, top performers around 39%
  • Mixed mode (mail plus phone follow-up): 32% average, top performers around 43%

These numbers reflect a well-resourced, standardized federal survey program. Many academic or market research surveys see rates in similar or lower ranges. The general trend holds across contexts: combining multiple contact methods outperforms any single channel, and phone tends to edge out mail alone.

Why a Low Response Rate Isn’t Always a Problem

There’s a common assumption that higher response rates automatically mean better, more trustworthy data. The logic seems obvious: if only a small fraction of people respond, maybe the responders are systematically different from the non-responders, skewing your results. This concern is called non-response bias.

But the relationship between response rate and actual bias is weaker than most people think. A widely cited analysis by survey methodologist Robert Groves found no meaningful correlation between response rates and the degree of non-response bias in a survey’s results. Even within a single survey with a single response rate, the amount of bias varies significantly from one question to another. Multiple studies have demonstrated that surveys using aggressive follow-up protocols to push response rates higher don’t end up with meaningfully different estimates than the same surveys run with lighter protocols and lower response rates. The extra effort produces a higher number on paper without reducing bias in practice.

This doesn’t mean response rates are irrelevant. A 5% response rate should raise questions. But a 35% rate isn’t inherently less valid than a 65% rate. What matters more is whether the people who responded differ from those who didn’t on the specific variables you’re measuring.

What Drives Response Rates Up or Down

Survey Length

Longer surveys lose people. A study published in the Journal of Clinical and Translational Science tested three versions of the same survey: a 13-question version (about 2 minutes), a 25-question version (about 7 minutes), and a 72-question version (about 10 minutes). Response rates were 64%, 63%, and 51% respectively. The two shorter versions performed almost identically, but the long version saw a meaningful drop. Completion rates, measured as the share of invitees who finished the full survey, told an even starker story: 63%, 54%, and 37%. So roughly a quarter of the people who started the long survey abandoned it before finishing.

Incentives

Paying participants helps, but how you pay matters more than how much. A systematic review and meta-analysis of 46 randomized controlled trials found that prepaid incentives, where participants receive money before completing the survey, are consistently more effective than promised incentives offered upon completion. One trial found that combining a small upfront payment with a promised reward for completion increased retention by 48% compared to a promised incentive alone. The psychology is simple: people feel a sense of obligation to reciprocate a gift they’ve already received.

Contact Method and Follow-Up

As the HCAHPS data shows, using multiple modes of contact boosts response rates by roughly 10 percentage points over mail alone. Follow-up reminders, personalized invitations, and shorter gaps between the initial contact and the reminder all contribute. Pre-notification, where you send a brief heads-up before the actual survey arrives, also tends to improve participation.

Response Rate vs. Completion Rate

These two terms are often confused but measure different things. Response rate captures how many people started or returned the survey out of everyone invited. Completion rate measures how many people finished the entire survey out of those who began it. You can have a solid response rate but a poor completion rate if your survey is too long or confusing. In the survey-length study above, 51% of invitees started the long version but only 37% finished it, meaning roughly a quarter of the people who opened the survey gave up partway through. Partial responses create their own data quality issues, so both numbers matter.
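A quick sketch of that arithmetic, using made-up counts chosen to reproduce the study's percentages:

```python
invited = 1_000
started = 510    # opened or began the survey
finished = 370   # answered every question

response_pct = 100 * started / invited              # 51.0% response rate
completion_pct = 100 * finished / invited           # 37.0% finished the whole survey
abandon_pct = 100 * (started - finished) / started  # ~27.5% of starters gave up

print(response_pct, completion_pct, round(abandon_pct, 1))
```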

Reporting Standards in Published Research

There’s no single minimum response rate that journals require for publication. Some older guidelines suggested 60% or higher as a benchmark, but this threshold has fallen out of favor as researchers have recognized the weak link between response rates and data quality. What journals increasingly expect instead is transparency: report the response rate clearly, explain how you calculated it, describe any follow-up efforts, and discuss whether non-response might affect your specific findings.

Even the formula itself is contested. Some researchers calculate response rate as completions divided by the total sample contacted. Others subtract undeliverable surveys (bounced emails, wrong addresses) from the denominator first. Neither approach is wrong, but they produce different numbers. When reading a study, checking exactly how the authors defined their denominator tells you more than the response rate percentage alone.
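The two conventions, applied to the same hypothetical numbers, show how far apart the reported figures can land:

```python
completed = 300
contacted = 1_000
undeliverable = 120  # bounced emails, wrong addresses

# Convention 1: completions over everyone contacted.
rate_all_contacted = 100 * completed / contacted                     # 30.0%

# Convention 2: remove undeliverables from the denominator first.
rate_delivered_only = 100 * completed / (contacted - undeliverable)  # ~34.1%

print(round(rate_all_contacted, 1), round(rate_delivered_only, 1))
```

Same survey, same responses, a four-point gap in the headline number.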