Structured interviews predict job performance roughly twice as well as unstructured ones. Across multiple large-scale analyses spanning decades of research, the pattern holds: when every candidate gets the same questions, evaluated against the same scale, the resulting hiring decisions are significantly more accurate and more fair.
The Predictive Validity Gap
The most compelling argument for structured interviews comes down to a single question: does the interview actually predict how someone will perform on the job? Researchers answer this with validity coefficients, where 0 means no predictive value and 1.0 means perfect prediction. Unstructured interviews consistently land in the low .30s; structured interviews score substantially higher.
A landmark meta-analysis by Wiesner and Cronshaw found that structured interviews had a validity coefficient of .62, exactly double the .31 for unstructured interviews. A separate analysis by McDaniel and colleagues put structured interviews at .44 versus .33 for unstructured formats, with situational interviews (a specific type of structured interview) reaching .50. Conway, Jako, and Goodman broke it down further by degree of structure: low-structure interviews scored .34, moderate structure hit .56, and high structure reached .67. The takeaway is consistent. More structure means better prediction, and the gains are not small.
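A rough way to read those coefficients: a validity coefficient is a correlation between interview scores and later job performance, so squaring it gives the share of performance variance the interview accounts for. Taking the Wiesner and Cronshaw figures as an illustration:

$$
r_{\text{structured}} = .62 \;\Rightarrow\; r^2 \approx .38
\qquad
r_{\text{unstructured}} = .31 \;\Rightarrow\; r^2 \approx .10
$$

By that reading, the structured format accounts for nearly four times as much of the variation in later performance.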
A 2022 update in the Journal of Applied Psychology revisited these classic findings and adjusted the numbers downward by .10 to .20 points across most selection methods, accounting for statistical overcorrections in earlier work. Even after that revision, structured interviews emerged as the top-ranked selection procedure. The gap between structured and unstructured formats survived the recalculation.
What Makes a Structured Interview “Structured”
Three core features separate a structured interview from an unstructured conversation. First, every candidate is asked the same questions in the same order. Second, every candidate is evaluated using a common rating scale. Third, interviewers agree in advance on what constitutes a good, mediocre, or poor answer.
Building this system starts with a job analysis: identifying the actual tasks, responsibilities, and competencies the role requires, then determining which of those competencies a candidate needs on day one. Questions are written to target those specific competencies, not to explore a candidate’s personality or make small talk about shared hobbies.
The rating scale is where much of the predictive power comes from. The U.S. Office of Personnel Management recommends creating at least three proficiency levels per competency, ideally five to seven. Each level gets a label (such as unsatisfactory, satisfactory, or superior) and concrete example behaviors that illustrate what a response at that level sounds like. This gives interviewers a reference point instead of a gut feeling. When an interviewer can compare a candidate’s answer to pre-written behavioral examples rather than relying on a vague impression, scores become more meaningful and more consistent.
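To make that concrete, here is a minimal sketch of how one competency's scale could be represented, along with a scoring helper that only accepts defined proficiency levels. The competency name and anchor behaviors are hypothetical illustrations, not OPM's official wording; only the level labels echo those mentioned above.

```python
# Hypothetical behaviorally anchored rating scale for a single competency.
# Competency name and anchor behaviors are illustrative, not official OPM text.
RUBRIC = {
    "competency": "Stakeholder communication",
    "levels": {
        1: {
            "label": "Unsatisfactory",
            "anchors": [
                "Describes only what they did, with no mention of the audience",
                "Cannot give a concrete example when asked",
            ],
        },
        3: {
            "label": "Satisfactory",
            "anchors": [
                "Gives a specific example of tailoring a message to a non-expert audience",
                "Describes the outcome of the communication",
            ],
        },
        5: {
            "label": "Superior",
            "anchors": [
                "Gives a specific example covering audience, approach, and measurable outcome",
                "Explains how they adjusted after feedback",
            ],
        },
    },
}

def score_response(notes: str, level: int) -> dict:
    """Record a score only if it maps to a defined proficiency level."""
    if level not in RUBRIC["levels"]:
        raise ValueError(f"Level {level} is not defined in the rubric")
    return {
        "competency": RUBRIC["competency"],
        "level": level,
        "label": RUBRIC["levels"][level]["label"],
        "notes": notes,
    }
```

The point of the structure is visible in the data itself: an interviewer scoring a 3 has to match the answer against written anchors, not a general impression of the candidate.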
How Structure Reduces Bias
Unstructured interviews are especially vulnerable to cognitive biases that interviewers rarely notice in themselves. “Like me” bias, where interviewers favor candidates who share their background, hometown, or education, thrives when the conversation is free-flowing and personal. Halo bias, where one positive trait colors the entire evaluation, goes unchecked when there’s no rubric forcing attention to multiple competencies independently. Affinity bias, a close cousin of “like me” bias, pulls interviewers toward candidates who simply feel familiar.
Structured formats counteract these tendencies through their design. Scoring rubrics force interviewers to evaluate each competency separately rather than forming a single global impression. Research on behavior-based interviews for fellowship applications found that when faculty were trained to use scoring rubrics, racial biases in candidate evaluations decreased. The mechanism is straightforward: when you have to justify a numerical score against predefined criteria, your personal preferences have less room to operate.
Some organizations take this further by blinding interviewers to application materials before the interview, which eliminates additional sources of bias like assumptions based on a candidate’s school or previous employer. While no process removes bias entirely, unstructured interviews are consistently more likely to introduce it than structured ones.
Why Unstructured Interviews Feel Better
Despite the evidence, many hiring managers prefer unstructured interviews because they feel more natural and seem to reveal more about a candidate’s “true self.” This is part of the problem. The conversational flow of an unstructured interview creates a strong sense of confidence in the interviewer’s judgment, but that confidence doesn’t translate into better decisions. Two interviewers can walk out of separate unstructured interviews with the same candidate and reach completely different conclusions, each equally certain they read the person correctly.
Unstructured interviews also let interviewers pursue whatever topics interest them, which means different candidates get evaluated on different criteria. One candidate might be asked about leadership experience while another spends most of the interview discussing a technical challenge. Without a common basis for comparison, the final hiring decision reflects which conversations were most enjoyable rather than which candidate best fits the role.
Where Structure Adds the Most Value
The benefits of structured interviews scale with the stakes. For roles where a bad hire is expensive or where fairness is legally scrutinized, the case for structure is overwhelming. Government agencies, healthcare systems, and large corporations use structured formats not just because they work better, but because they create a documented, defensible record of how each candidate was evaluated.
Structure also matters more when multiple interviewers are involved. In panel interviews, a shared rating scale ensures that each interviewer is measuring the same thing. Without it, one panelist might prioritize communication skills while another focuses on technical knowledge, and the post-interview discussion becomes a negotiation between subjective impressions rather than a comparison of data.
For smaller teams or less formal hiring contexts, even partial structure helps. Using the same core questions for every candidate and scoring responses on a simple three-point scale captures much of the benefit without requiring a full job analysis. The research on incremental structure supports this: each additional element of standardization improves predictive validity. You don’t need a perfect system to beat the baseline of no system at all.
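For teams that want the lightweight version, a shared score sheet is often enough. The sketch below is a minimal illustration under assumed details of my own: three placeholder core questions, a 1-to-3 scale, and two hypothetical panelists per candidate; none of it comes from the research cited above.

```python
from statistics import mean

# Hypothetical core questions asked of every candidate, in the same order.
CORE_QUESTIONS = [
    "Tell me about a time you missed a deadline. What did you do?",
    "Walk me through a technical decision you later reversed.",
    "Describe a disagreement with a colleague and how it was resolved.",
]

# scores[candidate][question_index] -> list of panelist ratings on a 1-3 scale
scores = {
    "Candidate A": {0: [3, 2], 1: [2, 2], 2: [3, 3]},
    "Candidate B": {0: [2, 1], 1: [3, 2], 2: [2, 2]},
}

def candidate_summary(ratings: dict) -> float:
    """Average each question across panelists, then across questions,
    so every candidate is compared on the same criteria."""
    per_question = [mean(panel) for panel in ratings.values()]
    return round(mean(per_question), 2)

# Rank candidates on the shared scale rather than on overall impressions.
for name, ratings in sorted(scores.items(), key=lambda kv: -candidate_summary(kv[1])):
    print(name, candidate_summary(ratings))
```

Even this bare-bones version gives the panel a common number to discuss per question, which is the core of what the standardization research rewards.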
The Real Cost of Going Unstructured
Designing a structured interview takes more preparation than walking into a room and winging it. You need to analyze the job, write targeted questions, build rating scales, and train interviewers to use them. That upfront investment is real, and it’s the most common reason organizations stick with unstructured formats.
But the cost of a bad hire dwarfs the cost of interview preparation. When structured interviews predict performance nearly twice as well as unstructured ones, every hiring cycle run without structure carries a higher probability of selecting the wrong person. The downstream costs, including lost productivity, turnover, and the time and money spent rehiring, accumulate quietly. The preparation time for a structured interview is a one-time investment per role that pays off across every candidate who sits in that chair.

