Artificial intelligence has a roughly five-decade history in healthcare, with the earliest experimental systems appearing in the 1970s. What started as rule-based programs that tried to mimic a doctor’s reasoning has evolved into a massive industry: the FDA now lists over 1,400 AI-enabled medical devices authorized for use in the United States. The path from those first prototypes to today’s tools involved several distinct phases, each building on the last.
The 1970s: Where It Started
The first serious attempts to bring AI into medicine began in the early-to-mid 1970s with programs known as expert systems. These weren’t learning from data the way modern AI does. Instead, researchers manually coded thousands of medical rules into software, essentially trying to capture a physician’s decision-making process in a program.
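To make that concrete, here is a minimal sketch of the hand-coded if-then style these systems relied on, written in modern Python purely for illustration. The findings, organisms, and certainty values are invented; they are not taken from MYCIN, INTERNIST-1, or any real system.

```python
# Toy illustration of the 1970s expert-system style: hand-written if-then
# rules rather than patterns learned from data. All findings, conclusions,
# and certainty values below are invented for illustration only.

RULES = [
    # (required findings, (suggested organism, rough certainty))
    ({"gram_negative", "rod_shaped", "anaerobic"}, ("bacteroides", 0.6)),
    ({"gram_positive", "cocci", "clusters"}, ("staphylococcus", 0.7)),
]

def diagnose(findings):
    """Fire every rule whose required findings are all present."""
    return [conclusion for required, conclusion in RULES if required <= findings]

print(diagnose({"gram_negative", "rod_shaped", "anaerobic", "fever"}))
# -> [('bacteroides', 0.6)]
```

Real systems of the era chained thousands of such rules and handled uncertainty far more carefully, but the core idea was the same: every piece of medical knowledge had to be written in by hand.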
Two landmark systems from this era stand out. MYCIN, developed at Stanford around 1972, was designed to identify bacterial infections and recommend antibiotics. It never saw clinical use, but it proved the concept worked. INTERNIST-1, built at the University of Pittsburgh, took on a far broader challenge: diagnosing diseases across all of internal medicine. On test cases, its diagnostic performance matched that of staff physicians at a university hospital. Its successor, CADUCEUS, aimed to fix known limitations like the inability to reason about how diseases progress over time or account for how severe a patient’s symptoms were. Neither system was ever released for general medical use, but they laid the intellectual groundwork for everything that followed.
The 1980s and 1990s: Slow Progress
Expert systems hit a ceiling. They required enormous manual effort to build and maintain, and they struggled with the messiness of real clinical data. The 1980s saw continued academic research but limited real-world adoption. Hospitals weren’t equipped with the digital infrastructure to run these tools, and the systems themselves were brittle, often failing when a patient’s case didn’t fit neatly into preprogrammed rules.
During the 1990s, a quieter shift began. Machine learning techniques started replacing hand-coded rules. Instead of telling a program every possible rule, researchers fed it examples and let it find patterns. Early applications focused on relatively structured problems: predicting patient outcomes in intensive care, flagging abnormal lab results, and analyzing heart rhythms. These tools were modest in scope but represented a fundamental change in approach, one that would pay off dramatically once computing power caught up.
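For contrast with the rule-based sketch above, here is a minimal illustration of the learn-from-examples approach, using scikit-learn’s LogisticRegression on invented toy numbers (heart rate and blood lactate standing in for real clinical features). It assumes scikit-learn is installed and is a sketch of the idea, not any specific system from the period.

```python
# Minimal sketch of the 1990s shift: instead of hand-coding rules,
# fit a model to labeled examples and let it find the pattern.
# Feature values and labels are invented toy data, not real patients.
from sklearn.linear_model import LogisticRegression

X = [[88, 1.1], [132, 4.0], [95, 1.4], [121, 3.2], [76, 0.9], [140, 5.1]]
y = [0, 1, 0, 1, 0, 1]  # 1 = the patient later deteriorated in the ICU

model = LogisticRegression().fit(X, y)
print(model.predict([[128, 3.8]]))        # predicted class for a new case
print(model.predict_proba([[128, 3.8]]))  # estimated risk, learned from examples
```

No rule about heart rate or lactate is ever written down; the relationship comes entirely from the examples, which is exactly the change in approach described above.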
The 2010s: Deep Learning Changes Everything
The real explosion came in the mid-2010s, driven by deep learning, a technique that uses layered neural networks to process complex data like medical images. In 2017, researchers at Johns Hopkins demonstrated that deep learning could match the performance of human experts in diagnosing age-related macular degeneration, a leading cause of vision loss. Similar breakthroughs followed rapidly in radiology, pathology, and dermatology.
Medical imaging became the proving ground for modern healthcare AI. The combination of large digital image databases, powerful graphics processors, and refined algorithms meant that software could suddenly detect tumors, fractures, and retinal disease with accuracy comparable to trained specialists. This wasn’t a laboratory curiosity anymore. In 2018, the FDA authorized the first fully autonomous AI diagnostic system, one that could screen for diabetic eye disease without a physician needing to interpret the results. That authorization marked a turning point: AI went from assisting doctors to, in narrow cases, making independent clinical decisions.
Where Things Stand Now
The scale of AI in healthcare today would be unrecognizable to the researchers who built INTERNIST-1. As of early 2026, the FDA has authorized over 1,430 AI-enabled medical devices. The vast majority focus on radiology (detecting findings on X-rays, CT scans, and MRIs), but the technology now spans cardiology, gastroenterology, neurology, and many other specialties.
Beyond imaging, AI tools are used to predict which hospitalized patients are most likely to deteriorate, to identify early signs of sepsis, to accelerate drug discovery, and to help pathologists analyze tissue samples. Electronic health records are mined by algorithms that flag patients at risk for conditions like heart failure or diabetes before symptoms become severe. Natural language processing, the same technology behind chatbots, helps extract useful information from the unstructured text in clinical notes.
The growth has been exponential. The majority of those 1,430 FDA authorizations have come in just the last five or six years. What took decades to get off the ground is now expanding faster than regulators, hospitals, and medical schools can fully absorb.
A 50-Year Arc in Perspective
The history breaks into three rough eras. From the 1970s through the early 2000s, AI in healthcare was largely an academic exercise: impressive prototypes that rarely left the lab. From roughly 2012 to 2018, deep learning made the technology genuinely competitive with human experts on specific, well-defined tasks. From 2018 onward, regulatory clearances, hospital adoption, and commercial investment turned AI from a research topic into a routine part of clinical infrastructure.
So while the technology has technically been in development for about 50 years, the version of AI that most people picture when they hear the term (software that reads scans, predicts risks, and assists with diagnosis at scale) is really a product of the last decade. The foundations go back much further, but the practical impact is remarkably recent.

