A diagnosis is the process of identifying the nature and cause of a disease or health problem. It involves evaluating your symptoms, medical history, physical examination, and often lab work or imaging to arrive at a specific explanation for what’s going on in your body. While people sometimes use “diagnosis” to mean the label itself (like “a diagnosis of diabetes”), it more accurately describes the entire investigative process that leads to that label.
How the Diagnostic Process Works
Diagnosis isn’t a single event. It’s an iterative cycle of gathering information, interpreting it, forming a working theory, and then testing that theory until the answer becomes clear. The process typically begins before you even see a doctor, when you first notice something feels wrong and decide to seek care.
Once you’re in a clinical setting, four main activities drive the process forward: taking your medical history (your symptoms, when they started, what makes them better or worse, your family history), performing a physical exam, ordering diagnostic tests like bloodwork or imaging, and in some cases referring you to a specialist. As information accumulates, your doctor narrows a broad list of possibilities down to one or two leading candidates, then verifies whether that explanation fits your full picture: your symptoms, your risk factors, and your overall health context.
What’s striking is how much of the work happens in that initial conversation. A landmark study of 80 medical outpatients found that the patient’s history alone led to the correct diagnosis 76% of the time. The physical exam accounted for another 12% of diagnoses, and lab tests led to the answer in 11%. Tests and exams still matter enormously, though. Even when they don’t change the diagnosis, they help rule out other possibilities and raise a doctor’s confidence in the correct one, bumping it from about 7 out of 10 after the history to over 9 out of 10 after lab results come back.
Differential Diagnosis vs. Final Diagnosis
When your symptoms could point to more than one condition, your provider creates what’s called a differential diagnosis list. This is simply a ranked set of all the conditions that could plausibly explain what you’re experiencing. It is not your diagnosis. It’s the starting lineup of suspects.
From there, your doctor uses targeted tests and observations to systematically rule out conditions on the list. A blood test might eliminate one possibility; an imaging scan might strengthen another. As findings accumulate, the list shrinks until the most likely condition becomes clear. That's your final diagnosis: the specific explanation your medical team is confident enough to act on with a treatment plan. In complex cases, this narrowing process can take days, weeks, or even longer, especially when symptoms overlap across multiple conditions.
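The narrowing logic above can be sketched as a simple elimination over a ranked candidate list. This is only an illustration of the process, not clinical software; the condition names and findings below are entirely hypothetical.

```python
# Sketch of differential-diagnosis narrowing: start with a list of candidate
# conditions, then drop any candidate that a new finding rules out.
# All conditions and findings here are invented for illustration.

# Each candidate maps to the set of findings that would rule it out.
differential = {
    "condition_a": {"negative_blood_test"},
    "condition_b": {"clear_chest_scan"},
    "condition_c": {"normal_ecg"},
}

def narrow(candidates, finding):
    """Remove every candidate that the new finding rules out."""
    return {name: ruled_out_by
            for name, ruled_out_by in candidates.items()
            if finding not in ruled_out_by}

remaining = dict(differential)
for finding in ["negative_blood_test", "clear_chest_scan"]:
    remaining = narrow(remaining, finding)
    print(f"After {finding}: {sorted(remaining)}")
# The list shrinks with each finding until one candidate is left:
# here, condition_c survives both results.
```

Real diagnostic reasoning is probabilistic rather than all-or-nothing (a result usually makes a condition more or less likely rather than eliminating it outright), but the shrinking-list structure is the same.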
How Diagnoses Are Classified
To keep diagnoses consistent and trackable across hospitals, countries, and research studies, the medical world relies on standardized classification systems. The two most widely used are the International Classification of Diseases (ICD), maintained by the World Health Organization, and the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association. The ICD covers the full range of diseases and health conditions and is currently in its 11th revision (ICD-11), approved in 2019. The DSM focuses specifically on mental health disorders and is in its 5th edition.
These systems give every recognized condition a specific code. When a doctor records your diagnosis, it gets tagged with one of these codes, which is what insurance companies, hospitals, and public health agencies use to track patterns, process claims, and allocate resources. If you have multiple conditions at once, the first-listed diagnosis is typically considered the primary one and is used to categorize your visit for billing and statistical purposes.
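The tagging described above can be pictured as a simple lookup plus a first-listed rule. The two codes below are standard ICD-10 examples (E11 for type 2 diabetes mellitus, I10 for essential hypertension); a real billing system would draw on the full code set and far more detailed subcodes.

```python
# Minimal sketch of tagging recorded diagnoses with classification codes,
# with the first-listed diagnosis treated as primary for billing/statistics.
# The two ICD-10 codes below are illustrative examples only.

ICD10 = {
    "type 2 diabetes mellitus": "E11",
    "essential hypertension": "I10",
}

def code_visit(diagnoses):
    """Tag each diagnosis with its code; the first listed is the primary one."""
    coded = [(dx, ICD10[dx]) for dx in diagnoses]
    return {"primary": coded[0], "secondary": coded[1:]}

visit = code_visit(["type 2 diabetes mellitus", "essential hypertension"])
print(visit["primary"])    # the first-listed diagnosis categorizes the visit
```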
How Accurate Is the Process?
Diagnosis is far from perfect. Error rates across frontline care settings hover between roughly 4% and 10%. A 2025 analysis in BMJ Quality & Safety found that harmful diagnostic errors affected about 7.2% of hospital inpatients, 5.2% of emergency department visits, and 6.3% of primary care encounters. Those numbers represent cases where the wrong or delayed diagnosis actually caused harm.
Second opinions can catch a meaningful share of these errors. A Mayo Clinic study found that when patients were referred for a second opinion, the new diagnosis was completely different from the original 21% of the time. In 66% of cases, the second opinion refined or better defined the initial diagnosis. Only 12% of the time did the second evaluation simply confirm the first one. For complex, serious, or life-altering diagnoses, seeking another perspective can be a practical safeguard.
How Diagnostic Tests Are Evaluated
Not all tests are equally good at catching or ruling out a condition. Two key measures describe how reliable a test is: sensitivity and specificity. A test with high sensitivity is good at detecting people who do have the condition. It catches most true cases but may also flag some people who are actually healthy (false positives). A test with high specificity is good at correctly identifying people who don’t have the condition, meaning a positive result is very reliable, but it may miss some people who are genuinely sick (false negatives).
These two qualities work in tension. As a test becomes more sensitive, it tends to become less specific, and vice versa. This is why doctors often use screening tests (high sensitivity, casting a wide net) followed by confirmatory tests (high specificity, verifying the result). Understanding this tradeoff helps explain why a single test rarely gives a definitive answer on its own, and why your doctor may order follow-up testing even after a positive or negative result.
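The two measures reduce to simple ratios over the four possible test outcomes, and a small worked example shows why the screen-then-confirm sequence makes sense. The counts below are invented for illustration.

```python
# Sensitivity and specificity computed from test-vs-truth counts, with a
# sketch of the screen-then-confirm idea. All counts are hypothetical.

def sensitivity(tp, fn):
    # Of the people who truly have the condition, what share does the test catch?
    return tp / (tp + fn)

def specificity(tn, fp):
    # Of the people who truly don't have it, what share does the test clear?
    return tn / (tn + fp)

# Hypothetical screening test on 1,000 people, 50 of whom have the condition:
# it catches 49 of the 50 true cases but also flags 95 healthy people.
screen_sens = sensitivity(tp=49, fn=1)     # 49/50  = 0.98 (casts a wide net)
screen_spec = specificity(tn=855, fp=95)   # 855/950 = 0.90

print(f"screen: sensitivity={screen_sens:.2f}, specificity={screen_spec:.2f}")

# Only the 49 + 95 = 144 screen positives go on to a high-specificity
# confirmatory test, which clears most of the 95 false positives while
# keeping nearly all of the 49 true cases.
```

Notice that even with 90% specificity, nearly two-thirds of the screen positives in this example are healthy, which is exactly why a single positive result rarely settles the question on its own.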
AI’s Growing Role in Diagnosis
Artificial intelligence tools are increasingly being tested as diagnostic aids. A 2025 meta-analysis published in Nature’s npj Digital Medicine compared the diagnostic accuracy of generative AI models against physicians. Overall, there was no statistically significant difference between AI and physicians as a whole, or between AI and non-specialist doctors. Several AI models, including newer versions of GPT-4 and Gemini, performed slightly better than non-specialists, though the gap wasn’t significant.
Expert physicians, however, were a different story. They outperformed AI models by a significant margin, with accuracy roughly 16 percentage points higher on average. The takeaway from the research is that AI is not yet a reliable substitute for experienced clinicians, but it shows real promise as a support tool in settings where specialist expertise isn’t immediately available, and as a training resource for medical students learning the diagnostic process.