The Delphi technique is a structured method for gathering expert opinions through multiple rounds of anonymous questionnaires, with the goal of reaching group consensus on a topic where no clear answer exists. Developed by the RAND Corporation in the 1950s to forecast the effects of technology on warfare, it has since become a widely used research and decision-making tool in healthcare, policy, business, and education.
What makes it different from a regular meeting or survey is its core design: experts never interact face to face. Instead, they respond independently, see a summary of what the group thinks, and then get a chance to revise their answers. This cycle repeats until the group converges on a shared position.
How the Process Works
A Delphi study follows a repeating loop of questionnaires and feedback, typically running two to four rounds. The process starts when a research team identifies a problem that lacks a definitive evidence base, something where expert judgment is the best available tool. That might be predicting how a new technology will reshape an industry, deciding which clinical outcomes matter most in a trial, or setting priorities for a national policy.
In the first round, panelists receive a questionnaire and respond independently. Their answers are collected, anonymized, and summarized statistically, usually as averages or medians that show where the group clusters and where it diverges. In the second round, each panelist sees this group summary alongside their own previous response. They can then stick with their original answer or revise it in light of what the broader panel thinks. This cycle of respond, review, and revise continues until the group’s answers stabilize or a predefined consensus threshold is met.
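The between-round aggregation described above can be sketched in a few lines. This is a minimal illustration with hypothetical item names and ratings, assuming a numeric rating scale and using the median and interquartile range as the summary statistics:

```python
from statistics import quantiles

def summarize_round(responses):
    """Anonymized group summary for one Delphi round.

    responses: dict mapping each item to a list of numeric ratings.
    Returns the per-item median and interquartile range -- the
    "controlled feedback" each panelist sees in the next round.
    """
    summary = {}
    for item, ratings in responses.items():
        q1, q2, q3 = quantiles(ratings, n=4)  # quartile cut points
        summary[item] = {"median": q2, "iqr": (q1, q3)}
    return summary

# Hypothetical round-1 ratings from six anonymous panelists
round1 = {
    "Outcome A": [8, 9, 7, 8, 6, 9],
    "Outcome B": [3, 8, 5, 9, 2, 7],
}
print(summarize_round(round1))
```

A wide interquartile range (as for "Outcome B" here) signals an item the panel diverges on, which is exactly where the next round's revisions tend to concentrate.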
The key phrase is "controlled feedback." Panelists don't see raw comments or know who said what. They see a curated statistical picture of the group's collective opinion, which keeps the conversation focused on evidence and reasoning rather than personality or status.
Four Features That Define the Method
Every version of the Delphi technique shares four core characteristics that set it apart from ordinary group decision-making:
- Anonymity: Panelists don’t know who else is participating. This prevents dominant personalities, senior figures, or institutional politics from steering the outcome. A junior researcher’s opinion carries the same weight as a department head’s.
- Iteration: The process runs in multiple rounds, giving participants the chance to rethink and refine their positions rather than locking in a first impression.
- Controlled feedback: Between rounds, the research team shares a statistical summary of group responses. This replaces the free-for-all of open debate with a structured snapshot of collective thinking.
- Statistical aggregation: The final result is expressed as a measurable group response, not a verbal agreement. This gives the outcome a degree of objectivity that a committee vote or open discussion typically lacks.
Why Anonymity Matters
In a traditional meeting, a handful of voices tend to dominate. People defer to the most senior person in the room, avoid contradicting a popular opinion, or stay silent rather than risk being wrong in front of colleagues. These dynamics, sometimes called groupthink, can push a group toward a consensus that reflects social pressure more than genuine agreement.
The Delphi technique sidesteps this entirely. Because panelists respond through anonymized questionnaires, no one knows whose opinion they’re reading. There’s no eye contact, no interruption, no reputation at stake. Each person can change their mind between rounds without embarrassment, and outlier opinions get the same visibility in the summary statistics as majority positions. The result is a consensus built on the substance of the responses, not on who delivered them most confidently.
Selecting the Expert Panel
Panel selection is widely considered the most critical step in a Delphi study. The quality of the final consensus depends entirely on who is asked to contribute. Researchers need to decide what qualifies someone as an “expert” for the specific question at hand, which could mean years of clinical experience, published research in a niche field, or firsthand professional knowledge of a particular system or process.
There is no universal standard for panel size. Published studies vary widely, and the methods used to identify and recruit panelists are inconsistent across the literature. What matters more than raw numbers is that the panel is relevant and reasonably homogeneous in its area of expertise. A Delphi study on surgical outcomes, for instance, needs surgeons and outcomes researchers, not a broad sample of all healthcare workers. Researchers also need to consider how to define “expert” transparently, since vague selection criteria can undermine the credibility of the results.
How Consensus Is Measured
There is no single, universally accepted threshold for when a Delphi study has achieved consensus. A systematic review of published Delphi studies found that the median threshold researchers use is 75% agreement, but the range spans from as low as 50% to as high as 97%. The steering committee running the study is expected to define the consensus criteria before the first round begins, not after seeing results.
One common approach uses a rating scale where panelists score their agreement from 0 (complete disagreement) to 10 (complete agreement). A score of 8 or above counts as agreement, and the group reaches consensus on a statement when 80% or more of panelists rate it at that level. If a statement falls below the threshold after all rounds, it’s either revised and re-tested or dropped from the final output.
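The consensus rule just described is simple to express in code. This sketch uses the 0–10 scale and the 80%-at-8-or-above rule from the text; the example ratings are hypothetical:

```python
def reaches_consensus(ratings, agree_at=8, threshold=0.80):
    """Consensus rule from the text: a statement is accepted when
    at least `threshold` of panelists rate it `agree_at` or higher
    on the 0-10 agreement scale."""
    agreeing = sum(1 for r in ratings if r >= agree_at)
    return agreeing / len(ratings) >= threshold

# 5 of 6 panelists (83%) rate the statement 8+: consensus reached.
print(reaches_consensus([8, 9, 10, 8, 8, 5]))   # True
# Only 3 of 6 (50%) rate it 8+: statement is revised or dropped.
print(reaches_consensus([8, 9, 10, 4, 6, 7]))   # False
```

Because thresholds vary so widely across published studies, both parameters are deliberately explicit here; a steering committee would fix them before round one.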
The study also ends when responses stabilize, meaning panelists stop changing their answers between rounds. At that point, further rounds won’t produce meaningful movement, and the process closes regardless of whether full consensus was reached on every item.
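Stability can be operationalized in several ways; one simple, illustrative rule (an assumption for this sketch, not a standard) is to stop when only a small fraction of paired ratings moved between consecutive rounds:

```python
def responses_stable(prev_round, curr_round, max_changed=0.10):
    """Illustrative stability rule: stop iterating when no more
    than `max_changed` of the panelists' ratings differ between
    two consecutive rounds (ratings paired by panelist)."""
    changed = sum(1 for a, b in zip(prev_round, curr_round) if a != b)
    return changed / len(curr_round) <= max_changed

# Hypothetical ratings for one statement, paired by panelist
round2 = [8, 7, 9, 8, 6, 8, 9, 8, 7, 8]
round3 = [8, 7, 9, 8, 7, 8, 9, 8, 7, 8]  # one panelist revised
print(responses_stable(round2, round3))  # True: only 10% changed
```

Other studies instead track the shift in the group median or a statistical test between rounds; the principle is the same, in that further rounds stop once they no longer move the distribution.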
Where the Delphi Technique Is Used
The method started in military forecasting but quickly spread. In healthcare, it’s used to develop clinical guidelines, define core outcome sets for research trials, and establish diagnostic criteria when randomized evidence is thin. Medical societies frequently use Delphi panels to produce consensus statements on imaging protocols, treatment standards, and quality benchmarks.
Outside of medicine, the technique appears in technology forecasting, education policy, environmental planning, and business strategy. Any situation where decisions must be made under uncertainty, and where expert judgment is the most practical source of guidance, is a natural fit. The method is especially useful when the experts are geographically scattered, since the entire process can run remotely through digital questionnaires.
The Real-Time Delphi Variant
The classical Delphi process can take weeks or months because the research team must collect all responses, analyze them, prepare feedback, and redistribute the questionnaire for each round. The Real-Time Delphi, developed by Theodore Gordon and Adam Pease, compresses this timeline by eliminating fixed rounds altogether.
In a Real-Time Delphi, the survey lives on a web platform. As soon as a panelist submits an answer, they see a live summary of how the group has responded so far. They can revisit the survey at any time during the study period, see updated feedback reflecting all responses to that point, and change their ratings accordingly. This preserves the core benefits of iteration and controlled feedback while cutting out the waiting periods between rounds. It’s particularly useful in time-sensitive situations where a decision can’t wait for a months-long traditional process.
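The core mechanic, that each new or revised submission immediately updates the summary shown to everyone, can be sketched as follows. Class and method names here are hypothetical, and a real platform would persist responses and handle many questions at once:

```python
from statistics import median

class RealTimeDelphiItem:
    """Minimal sketch of one Real-Time Delphi question: each
    panelist's latest rating is stored by ID, and the live
    summary always reflects only responses received so far."""
    def __init__(self):
        self.ratings = {}  # panelist_id -> latest rating

    def submit(self, panelist_id, rating):
        self.ratings[panelist_id] = rating  # a revisit overwrites
        return self.live_summary()

    def live_summary(self):
        values = list(self.ratings.values())
        return {"n": len(values), "median": median(values)}

item = RealTimeDelphiItem()
print(item.submit("p1", 9))   # early responder sees only n=1
print(item.submit("p2", 5))
print(item.submit("p1", 7))   # p1 revises after seeing feedback
print(item.live_summary())    # {'n': 2, 'median': 6.0}
```

Note how "p1" sees a summary of one response while "p2" sees two, which is precisely the timing effect the classical round-based design avoids.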
The tradeoff is that feedback in a Real-Time Delphi reflects only the participants who have responded so far, not the full panel. Early responders see a less complete picture than those who log in later, which can introduce subtle timing effects that the classical approach avoids.
Limitations to Keep in Mind
The Delphi technique is a powerful tool, but it has real weaknesses. Participant dropout is a persistent problem. Because the process requires multiple rounds of engagement over days or weeks, panelists may lose motivation or become too busy to continue. High attrition between rounds can skew results if the people who drop out held systematically different views from those who remained.
Investigator bias is another concern. The research team designs the questionnaire, selects which feedback to share, and decides how to frame the summary statistics. Each of these steps involves judgment calls that can subtly steer the panel toward a particular outcome, even unintentionally. Transparent reporting of methods, including how panelists were selected, how questions were worded, and how consensus was defined, is the main safeguard against this.
Finally, the method produces a consensus of opinion, not proof. When experts agree, it may simply reflect shared assumptions within a discipline rather than objective truth. A Delphi consensus is strongest when used to guide decisions in the absence of hard evidence, and weakest when treated as a substitute for evidence that could realistically be gathered through other research methods.