Feasibility in research is the process of determining whether a study can actually be carried out before committing full resources to it. It answers a deceptively simple set of questions: Can this be done? Should we proceed? And if so, how? Rather than testing whether a treatment or intervention works, a feasibility study tests whether the research itself is practical, affordable, and likely to produce useful results at full scale.
What a Feasibility Study Actually Does
A feasibility study produces findings that help researchers decide whether an intervention or approach deserves a full-scale trial. Think of it as a stress test for the research plan itself. Can you recruit enough participants? Will they stick with the study long enough? Is the intervention safe and deliverable in real-world conditions? These are the kinds of questions feasibility research is designed to answer.
This matters because full-scale studies are expensive, time-consuming, and ethically significant (you’re asking real people to participate). Running a feasibility study first lets researchers identify problems in their methods or protocols early, modify what needs fixing, and discard approaches that simply won’t work. The goal is to advance only those interventions that have a realistic chance of success, rather than burning resources on studies destined to fail for logistical reasons.
Key Domains Researchers Assess
Feasibility research typically evaluates several specific areas, each representing a potential point of failure for a larger study:
- Recruitment capability: Whether the study can actually find and enroll enough participants. This is the single biggest source of delays in clinical research. Roughly 35% of study delays trace back to recruitment problems, nearly one in five investigators fails to enroll any patients at all, and only about one-third of investigators enroll participants consistently throughout a study.
- Retention: Whether participants stay enrolled for the full duration of the study, or drop out at rates that would undermine the results.
- Intervention delivery and adherence: Whether the intervention can be delivered as intended (called fidelity), whether participants actually follow the recommendations, and whether the process is safe.
- Data collection procedures: Whether the chosen outcome measures and data collection tools work in practice.
- Acceptability: Whether participants and staff find the study procedures reasonable and tolerable, including randomization if applicable.
- Barriers and facilitators: What obstacles stand in the way of running the study smoothly, and what helps.
Each of these domains can sink a full-scale trial if left unexamined. A feasibility study surfaces these problems when they’re still cheap to fix.
Feasibility Studies vs. Pilot Studies
These two terms overlap, and researchers themselves sometimes use them interchangeably. But there is a meaningful distinction. A feasibility study asks whether something can be done, and if so, how. A pilot study asks the same questions but adds a specific design feature: it runs the future study, or part of it, on a smaller scale to see how it performs in practice.
The clearest way to think about it: all pilot studies are feasibility studies, but not all feasibility studies are pilot studies. A feasibility study might involve surveying potential participants about their willingness to be randomized, or testing whether a questionnaire captures the right data. None of that requires actually running a miniature version of the trial. A pilot study, by contrast, implements the intervention (or parts of it) exactly as planned for the larger trial, just with fewer people.
Pilot studies are generally not designed to detect whether an intervention works. They lack the statistical power for that. Instead, they reveal whether the machinery of the study functions: the randomization process, the delivery of the intervention, the follow-up schedule.
How Researchers Decide Whether to Proceed
Feasibility studies need clear criteria for what counts as success, established before the study begins. Many research teams use a “stop, change, go” system, sometimes called a traffic light approach. For each key metric (like recruitment rate or retention percentage), the team sets three thresholds in advance.
A “go” threshold means there are no issues that would impede a full trial. A “change” threshold means problems exist but could potentially be fixed with modifications. A “stop” threshold means the issues cannot be resolved, and the full trial should not proceed as planned. These thresholds are typically expressed as percentages. For example, a team might decide that retaining 80% or more of participants is “go,” 60-79% is “change,” and below 60% is “stop.”
This framework forces researchers to be honest about their results rather than rationalizing away problems. It also gives funders and ethics boards a transparent basis for deciding whether to invest in the full study.
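As a rough illustration, the sketch below encodes the retention thresholds from the example above as a simple decision function. The recruitment_rate cutoffs and all observed values are hypothetical, added only to show the pattern; they are not drawn from any published protocol.

```python
# A minimal sketch of a "stop, change, go" decision rule. The retention
# thresholds mirror the example above (>= 80% go, 60-79% change, < 60% stop);
# the recruitment_rate cutoffs and the observed values are hypothetical.

def traffic_light(value, go_at, change_at):
    """Classify an observed feasibility metric against prespecified cutoffs."""
    if value >= go_at:
        return "go"
    if value >= change_at:
        return "change"
    return "stop"

# Prespecified progression criteria, set before the feasibility study begins.
criteria = {
    "retention":        {"go_at": 0.80, "change_at": 0.60},
    "recruitment_rate": {"go_at": 0.50, "change_at": 0.30},  # hypothetical
}

# Observed results from the feasibility study (illustrative values).
observed = {"retention": 0.72, "recruitment_rate": 0.55}

for metric, cutoffs in criteria.items():
    verdict = traffic_light(observed[metric], **cutoffs)
    print(f"{metric}: {observed[metric]:.0%} -> {verdict}")
# retention: 72% -> change
# recruitment_rate: 55% -> go
```

Because the cutoffs are written down before any data arrive, the verdict is mechanical: a 72% retention rate lands in "change" territory no matter how much the team would prefer to call it a success.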
Sample Size in Feasibility Research
One of the most common questions about feasibility studies is how many participants you need. Because the goal isn’t to prove an intervention works, traditional power calculations (which determine how many people you need to detect a statistically significant effect) don’t directly apply. Instead, sample size is driven by practical considerations: participant flow, budget, and how many people are needed to reasonably evaluate feasibility goals.
A common rule of thumb is 30 participants per group for basic feasibility questions. For qualitative work, where the goal is to reach the point where no new themes emerge from interviews (saturation), 30 or fewer participants often suffice. But this number can be misleading. If the goal is to estimate something like an adherence rate or the proportion of eligible participants who agree to be randomized with any useful precision, sample sizes of at least 70 per group are often needed. With only 30 per group, the confidence intervals around those estimates become so wide that the numbers are hard to act on. If estimating group differences is a goal, researchers may need 70 to 100 per group.
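A quick back-of-the-envelope calculation shows why. Using the normal-approximation (Wald) 95% confidence interval for a proportion, and assuming the worst case of p = 0.5, the interval's half-width shrinks only slowly as the group grows. The sketch below is illustrative, not a substitute for a proper precision calculation:

```python
# Half-width of the normal-approximation (Wald) 95% CI for a proportion,
# evaluated at the worst case p = 0.5. Illustrates why n = 30 per group
# yields estimates too imprecise to act on.

import math

def ci_halfwidth(p, n, z=1.96):
    """95% CI half-width for an estimated proportion p from n participants."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 70, 100):
    print(f"n = {n:>3} per group: estimate +/- {ci_halfwidth(0.5, n):.1%}")

# n =  30 per group: estimate +/- 17.9%
# n =  70 per group: estimate +/- 11.7%
# n = 100 per group: estimate +/- 9.8%
```

An observed adherence rate of 50% from 30 participants could plausibly reflect a true rate anywhere from roughly 32% to 68%, which spans "fixable problem" and "fatal flaw" alike.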
Common Reasons Studies Fail Feasibility
Resource constraints are the most frequent feasibility killer. Every method a research team chooses, from questionnaires to interviews to focus groups, carries different costs in time, money, and staff effort. These costs compound when methods are combined, which is common in feasibility work.
Recruiting participants (or the professionals who deliver interventions) is consistently the hardest part. In one comparative study, questionnaires sent to health professionals achieved response rates of just 15-19%, despite multiple rounds of design refinement and piloting. Interviews, while producing richer data, took five months from start to finish, largely because of the difficulty of getting professionals to agree to participate. Even scheduling required going through institutional gatekeepers after direct invitations failed.
Time is the other major constraint. Feasibility work competes with other priorities, and the longer it takes, the more it costs. Methods that seem straightforward on paper, like developing and distributing a well-designed questionnaire, can absorb months of iteration and still produce disappointing return rates.
Reporting Standards for Feasibility Trials
Feasibility and pilot trials that involve randomization have their own reporting guidelines, an extension of the widely used CONSORT framework for clinical trials. These guidelines require researchers to report several items that wouldn’t appear in a standard trial report. Teams must describe how participants were identified and consented, since the recruitment process itself is one of the things being evaluated. They must state any prespecified criteria they used to decide whether to proceed to a full trial (the “stop, change, go” thresholds). Any important unintended consequences must be reported, along with the implications for moving from the feasibility phase to a definitive trial, including proposed changes to the study design.
These requirements exist because feasibility studies serve a different purpose than confirmatory trials. Their value lies in the practical lessons they generate, not in effect sizes or statistical significance. Without transparent reporting, those lessons are lost to the broader research community, and other teams end up repeating the same mistakes.
Beyond Clinical Research
While feasibility assessment is most formally developed in clinical and health research, the concept applies across disciplines. In any large-scale research project, feasibility touches on several overlapping dimensions. Technical feasibility asks whether the required resources, technologies, and expertise exist to carry out the work. Operational feasibility asks whether the study design is practical to implement and maintain over time. Economic feasibility asks whether the budget can cover everything from raw materials and equipment to personnel costs and participant compensation, and whether the project can sustain itself through the period before results materialize.
In clinical trials specifically, feasibility assessment also includes evaluating potential research sites: how many eligible patients they see, whether the local investigators have the capacity and interest to participate, and whether the infrastructure exists to handle the study’s data and regulatory requirements. These site-level assessments directly predict whether a multi-site trial will meet its enrollment targets or stall out.