Structured vs. unstructured interviews: what 25 years of research tells us
The central finding of Schmidt & Hunter's 1998 meta-analysis remains among the most replicated in industrial psychology. Its implications are still being ignored at scale.
In 1998, Frank Schmidt and John Hunter published a meta-analysis of 85 years of research on personnel selection. They evaluated 19 different selection methods and ranked them by their ability to predict job performance. The paper has been replicated, extended, and cited thousands of times. Its central finding has never been seriously challenged.
Structured interviews, in which the same questions are asked of every candidate and responses are scored against a rubric, substantially outpredict unstructured interviews. The validity coefficient for structured interviews is 0.51; for unstructured interviews, 0.38. Squaring those coefficients gives the share of performance variance each method explains: roughly 26% versus 14%, about twice as much. That gap compounds significantly at scale.
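The "twice as much" comparison follows from squaring the coefficients, since r² is the proportion of variance in job performance a predictor accounts for. A quick check of the arithmetic:

```python
# Validity coefficients reported by Schmidt & Hunter (1998).
structured, unstructured = 0.51, 0.38

# r-squared: share of variance in job performance explained.
var_structured = structured ** 2      # 0.2601, i.e. ~26%
var_unstructured = unstructured ** 2  # 0.1444, i.e. ~14%

ratio = var_structured / var_unstructured
print(round(ratio, 2))  # ~1.8
```

The raw coefficients differ by a factor of only 1.3, so "twice the predictive power" only holds in variance-explained terms, which is the standard way these comparisons are made.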
What makes an interview "structured"
The term is often misunderstood. A structured interview isn't one where you follow a script or avoid follow-up questions. It's one where three things are true: the questions are determined in advance and tied to the competencies required for the role; the same questions are asked of every candidate; and responses are scored against a pre-defined rubric by interviewers applying the same criteria.
The rubric is the critical element. Without it, two interviewers watching the same response will reach different conclusions based on their own mental models of what "good" looks like — which vary by interviewer background, bias, and the candidate they saw immediately before.
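In practice, a rubric pairs each competency-linked question with behaviorally anchored score levels, so that a score is only valid if it maps to a defined anchor and is backed by evidence. A minimal sketch of that structure, with an entirely hypothetical competency and anchors for illustration:

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    """One pre-defined question tied to a competency, with score anchors."""
    competency: str
    question: str
    anchors: dict[int, str]  # score -> description of evidence at that level

def score_response(item: RubricItem, score: int, evidence: str) -> dict:
    """Record a score only if it matches a defined anchor, with evidence."""
    if score not in item.anchors:
        raise ValueError(f"Score {score} has no anchor for {item.competency}")
    return {"competency": item.competency, "score": score, "evidence": evidence}

# Hypothetical example rubric entry, not from the research itself.
debugging = RubricItem(
    competency="Debugging",
    question="Walk me through the hardest bug you diagnosed last year.",
    anchors={
        1: "Describes symptoms only; no systematic narrowing of causes.",
        2: "Narrows causes, but mostly by trial and error.",
        3: "Forms hypotheses and tests them in a deliberate order.",
        4: "As 3, plus validates the fix and prevents recurrence.",
    },
)

rating = score_response(debugging, 3, "Bisected commits, reproduced locally.")
```

The point of the anchors is exactly the one above: two interviewers scoring the same response consult the same written descriptions of what a 2 or a 3 looks like, rather than their private mental models.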
Why most companies still don't do it
Despite decades of evidence, the majority of hiring processes remain largely unstructured. The reasons are practical: building a proper competency framework and scoring rubric for every role requires significant upfront investment. Training interviewers to use rubrics consistently is ongoing work. And there's cultural resistance — experienced interviewers often believe their intuition is more reliable than a scoresheet.
The research suggests the opposite. Experienced interviewers are often more confident in their intuitions but not more accurate. Confidence and accuracy are poorly correlated in selection decisions.
What this means in 2026
The gap between what the research recommends and what companies actually do has been stable for decades. What's changed is the stakes: remote hiring has made unstructured evaluation even less reliable, because you've removed the physical context cues that interviewers (consciously or not) rely on. You're left with a video call, a rehearsed candidate, and an interviewer making pattern-matching decisions with less signal than ever.
Structured evaluation — with rubrics, evidence-anchoring, and documented scoring — isn't a nice-to-have for remote teams. It's the only evaluation methodology with a credible evidence base for the environment you're actually operating in.