A structured interview in psychology is a standardized method of asking every person the same pre-set questions, in the same order, and scoring their responses using a fixed rubric. It’s used in two major areas: clinical psychology, where it helps diagnose mental health conditions, and industrial-organizational psychology, where it improves hiring decisions. The defining feature is consistency. By removing the interviewer’s freedom to improvise, structured interviews produce more reliable and less biased results than open-ended conversations.
How Structured Interviews Work
In a structured interview, the interviewer reads pre-written questions in a fixed sequence and rates each response on a fixed scale, typically recording whether the relevant symptom or competency is absent, subthreshold, or fully present. There’s no room for rephrasing, skipping questions, or following tangents. The questions correspond directly to specific criteria, whether those are diagnostic symptoms in a clinical setting or job-related competencies in a hiring context. A scoring rubric defines what counts as each rating, so two different interviewers evaluating the same person should arrive at similar conclusions.
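The mapping from questions to criteria to ratings can be pictured as a small lookup table: each question keys to one criterion, and the rubric fixes which score each anchor label receives. Here is a minimal sketch in Python; the questions, criteria, and anchors are hypothetical, not taken from any actual instrument:

```python
# Minimal sketch of rubric-based scoring. The questions, criteria,
# and anchor labels below are hypothetical, not from a real instrument.

RUBRIC = {
    "Q1": {
        "criterion": "depressed mood, most of the day, nearly every day",
        "anchors": {"absent": 1, "subthreshold": 2, "present": 3},
    },
    "Q2": {
        "criterion": "markedly diminished interest in activities",
        "anchors": {"absent": 1, "subthreshold": 2, "present": 3},
    },
}

def score_interview(ratings):
    """Convert an interviewer's per-question anchor choices into scores.

    `ratings` maps question id -> the anchor label the interviewer chose.
    Because the anchors are fixed in advance, two interviewers who hear
    the same answer and pick the same anchor produce identical scores.
    """
    return {q: RUBRIC[q]["anchors"][label] for q, label in ratings.items()}

# Two interviewers rating the same set of responses:
rater_a = score_interview({"Q1": "present", "Q2": "subthreshold"})
rater_b = score_interview({"Q1": "present", "Q2": "subthreshold"})
print(rater_a == rater_b)  # identical anchor choices give identical scores
```

The point of the sketch is that the interviewer’s only judgment call is which anchor a response matches; everything downstream of that choice is mechanical.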
This stands in contrast to an unstructured interview, where the interviewer decides what to ask on the spot and evaluates responses based on overall impressions. Unstructured interviews feel more like natural conversation, but that flexibility introduces inconsistency. The interviewer’s mood, personal preferences, and unconscious biases all have more room to influence the outcome.
Clinical Structured Interviews
In clinical psychology, structured interviews are the gold standard for diagnosing mental health conditions. The most widely used is the Structured Clinical Interview for DSM-5 (SCID-5), which is organized into diagnostic modules covering mood disorders, psychotic disorders, substance use disorders, anxiety disorders, obsessive-compulsive and related disorders, eating disorders, somatic symptom disorders, certain sleep disorders, trauma-related disorders, and what are grouped as “externalizing disorders” like ADHD and intermittent explosive disorder. A clinician works through the relevant modules, asking each standardized question and mapping responses to DSM-5 criteria.
Another common tool is the Mini-International Neuropsychiatric Interview (MINI), designed for rapid screening of 14 major psychiatric disorders. Where the SCID-5 provides a thorough diagnostic workup, the MINI prioritizes speed and is often used in research settings or as a first-pass screening tool.
Training matters. Administering the SCID-5 reliably typically requires a two-day training program (roughly 14 hours), where clinicians learn to follow the interview protocol, apply the scoring criteria, and handle common challenges like ambiguous responses. Without this training, even a structured format can produce inconsistent results.
Why Accuracy Improves
The difference in diagnostic accuracy between structured and unstructured approaches is substantial. One study comparing the two methods against a gold-standard consensus diagnosis found that unstructured interviews agreed with the correct diagnosis only 53.8% of the time, with a reliability score rated “fair.” Structured interviews using the SCID hit 85.7% agreement, rated “excellent.” That’s a jump from getting the diagnosis right about half the time to getting it right more than five out of six times.
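Labels like “fair” and “excellent” in studies of this kind usually refer to chance-corrected agreement, most often Cohen’s kappa, which discounts the agreement two raters would reach by guessing alone. A hand-rolled illustration of the statistic, using invented diagnoses rather than data from the study:

```python
# Cohen's kappa: agreement between two raters, corrected for chance.
# The example diagnoses below are invented for illustration only.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' categorical labels."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement: for each category, multiply the two
    # raters' marginal probabilities, then sum over categories.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1.keys() & c2.keys()) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses from two clinicians rating the same 8 patients:
a = ["MDD", "GAD", "MDD", "none", "MDD", "GAD", "none", "MDD"]
b = ["MDD", "GAD", "MDD", "none", "GAD", "GAD", "none", "MDD"]
print(round(cohens_kappa(a, b), 2))  # prints 0.81
```

Here the raw agreement is 7 out of 8 (0.875), but kappa comes out lower (about 0.81) because some of that agreement would have happened by chance given how often each clinician used each diagnosis.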
Inter-rater reliability tells a similar story. In a study of residency admissions interviews, structured interviews achieved an inter-rater reliability of 0.82, meaning two different interviewers evaluating the same candidate came to very similar conclusions. Unstructured interviews scored between 0.46 and 0.51 on the same measure, indicating only moderate consistency between raters. Notably, the two unstructured panels’ scores didn’t even correlate with the structured interview scores, suggesting the two formats were essentially measuring different things.
Structured Interviews in Hiring
Industrial-organizational psychologists have studied structured interviews extensively as a tool for predicting job performance. The two main formats are situational interviews (“What would you do if…”) and behavioral interviews (“Tell me about a time when…”). Both use standardized questions tied to the specific skills and traits the job requires, plus scoring rubrics that anchor each rating to concrete examples of strong, average, and weak responses.
Multiple meta-analyses have found that situational interviews predict job performance with corrected validity coefficients between 0.41 and 0.47. In practical terms, that means structured interviews are among the strongest single predictors of how well someone will actually perform in a role. One study found that scores on a situational interview correlated 0.67 with performance on a job simulation, a remarkably strong relationship for any selection tool.
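A validity coefficient is just the Pearson correlation between interview scores and a later measure of performance. A pure-Python illustration on invented candidate data (the numbers are hypothetical, not from any cited study):

```python
# Pearson correlation between interview scores and later performance.
# The candidate data below is invented for illustration only.
import math

def pearson_r(xs, ys):
    """Pearson r: covariance scaled by the two standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical situational-interview scores and job-performance ratings:
interview   = [62, 75, 80, 55, 90, 70, 85, 60]
performance = [3.1, 3.6, 4.0, 2.8, 4.4, 3.3, 4.1, 3.0]
r = pearson_r(interview, performance)
print(round(r, 2))
```

In real selection research the observed correlations are far weaker than in this toy data; the 0.41 to 0.47 figures above are corrected coefficients, adjusted for measurement error and range restriction in the samples.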
How Structure Reduces Bias
Unstructured interviews are vulnerable to several well-documented biases. “Like me” bias leads interviewers to prefer candidates who share their background, hometown, or education. Halo bias causes one positive trait to color the entire evaluation. Affinity bias draws interviewers toward people they feel a personal connection with. Because unstructured interviews lack scoring rubrics, these biases operate unchecked in how answers are interpreted and how candidates are ranked.
Structured interviews counter this by anchoring every evaluation to specific, observable criteria. When interviewers must rate each answer against a defined rubric rather than forming a global impression, there’s less room for personal preference to drive the outcome. Research on fellowship admissions found that faculty trained in behavior-based structured interviewing showed reduced racial bias in their candidate evaluations, specifically because the scoring rubrics forced them to evaluate what candidates said rather than who they were. Layering blinded procedures on top of structure can reduce halo, horn, and affinity biases still further.
The Rapport Trade-Off
The most common criticism of structured interviews is that they can damage the relationship between interviewer and interviewee. The rigid format can feel impersonal. When a clinician is reading from a script and checking boxes, it’s harder to demonstrate the warmth and empathy that help people open up about sensitive topics. There’s a real risk that the interviewer becomes so focused on following the protocol that the interaction feels mechanical, which can make people less willing to disclose difficult experiences.
This is a genuine limitation, not just a perception problem. In clinical settings, patients who feel alienated by the interview process may underreport symptoms or disengage entirely, which undermines the very accuracy the structure is designed to provide. Skilled clinicians learn to balance fidelity to the protocol with genuine attentiveness, but that balance takes practice and is harder than it sounds.
Structured vs. Semi-Structured Interviews
Semi-structured interviews sit between the two extremes. They use a standard set of questions and scoring criteria but give the interviewer some flexibility to ask follow-up questions, clarify ambiguous answers, or explore unexpected responses. This preserves much of the reliability advantage while allowing for a more natural conversational flow. Many clinical tools, including some versions of the SCID, are technically semi-structured for this reason.
In practice, the choice between fully structured and semi-structured depends on the goal. Research studies that need maximum consistency across hundreds of participants tend toward full structure. Clinical evaluations where understanding the nuance of a patient’s experience matters often benefit from semi-structured flexibility. Hiring contexts vary: high-volume screening favors full structure, while senior-level positions sometimes use semi-structured formats to explore leadership qualities that resist simple rubric scoring.

