What Is Natural History in Science and Medicine?

Natural history has two distinct meanings depending on context. In science, it refers to the observation and study of the natural world, including plants, animals, geology, and ecosystems. In medicine, it describes how a disease progresses over time without treatment. Both meanings share a core idea: watching how something unfolds on its own, without intervening.

Natural History as a Scientific Discipline

Natural history is one of the oldest forms of science. It involves observing, describing, and classifying the natural world, from the behavior of birds to the formation of rock layers. Before biology, geology, and ecology existed as separate fields, they were all part of natural history. Naturalists like Charles Darwin and Alexander von Humboldt built their groundbreaking theories on years of careful observation in the field.

Many branches of modern science grew directly out of this tradition: ecology, evolutionary biology, geology, taxonomy, paleontology, and conservation biology all have their roots in natural history. Today, natural history data still serves as the baseline knowledge of natural systems that researchers need to build broader theories, test hypotheses, and uncover general principles. A biologist studying how a changing climate affects migration patterns, for instance, relies on decades of observational records collected by naturalists before them.

Despite this foundational role, natural history is losing ground in universities. Support for natural history courses is declining at many top institutions, and there’s a growing bias against observational, organism-focused biology in favor of theoretical and simulation-based research. This shift has created an ironic gap: the early giants of ecology built their careers by grounding theory in real-world observation, while their academic descendants increasingly skip the observational step altogether.

Natural History in Medicine

In medicine, natural history refers to the progression of a disease in an individual over time, in the absence of treatment. It traces the full arc from the moment of exposure or biological onset through to one of three endpoints: recovery, disability, or death. Understanding this arc helps doctors recognize diseases earlier, design better screening programs, and evaluate whether treatments actually change outcomes.

The process follows a general sequence. It begins when a person is exposed to a cause or accumulates enough risk factors for the disease process to start. Pathological changes then occur silently, with no symptoms. This hidden phase is called the incubation period for infectious diseases and the latency period for chronic ones. Symptoms eventually appear, marking the shift from subclinical to clinical disease, and the condition then progresses toward its natural endpoint.
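
The sequence above can be sketched in code. This is only an illustrative sketch: the stage names follow the text, but the `stage_at` helper and the idea of indexing stages by days are invented here for clarity, not taken from any clinical model.

```python
from dataclasses import dataclass

# Illustrative sketch of the general natural-history sequence described above.
# Stage names follow the text; the function and its inputs are invented here.

@dataclass
class Stage:
    name: str
    symptomatic: bool

SEQUENCE = [
    Stage("exposure / biological onset", symptomatic=False),
    Stage("subclinical (incubation or latency)", symptomatic=False),
    Stage("clinical disease (symptoms present)", symptomatic=True),
    Stage("endpoint: recovery, disability, or death", symptomatic=True),
]

def stage_at(days_since_onset: float, subclinical_days: float) -> Stage:
    """Return the stage a hypothetical untreated case is in,
    given how long its silent (subclinical) phase lasts."""
    if days_since_onset <= 0:
        return SEQUENCE[0]
    if days_since_onset < subclinical_days:
        return SEQUENCE[1]
    return SEQUENCE[2]

# A flu-like illness with a short incubation of about 2 days:
print(stage_at(1, subclinical_days=2).name)  # subclinical (incubation or latency)
print(stage_at(5, subclinical_days=2).name)  # clinical disease (symptoms present)
```

The same skeleton covers both infectious and chronic diseases; only the value of `subclinical_days` changes, from days for influenza to decades for Alzheimer's.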

The length of these phases varies enormously. For the flu, incubation lasts a day or two. For Alzheimer’s disease, abnormal protein buildup in the brain can begin up to 20 years before a clinical diagnosis of dementia. Alzheimer’s is often divided into three stages based on cognitive decline: a preclinical stage where thinking is still normal, a prodromal stage with mild impairment, and the dementia stage with significant functional loss. Knowing these timelines helps researchers figure out when intervention might be most effective.

Natural History vs. Clinical Course

These two terms are easy to confuse. Natural history describes what happens without intervention. Clinical course describes what happens with treatment. In practice, the line blurs because most natural history studies include patients who are receiving at least some standard care or emergency treatment, which can alter the disease’s trajectory. A purely untreated natural history is rarely observed today for ethical reasons, so researchers often work with the closest approximation available.

Why Natural History Studies Matter

In public health, mapping a disease’s natural history enables smarter screening. Two concepts in particular depend on it: lead time and sojourn time. Lead time is the amount of time a screening test advances a diagnosis before symptoms would have appeared on their own. Sojourn time is how long a disease stays in a detectable but not yet symptomatic phase. If the sojourn time is very short, screening is less likely to catch the disease early enough to help. If it’s long, as with many cancers, screening programs can save lives by finding disease during that window.
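
A toy calculation makes the relationship between the two concepts concrete. All the numbers below are invented for illustration; they are not data from any real screening program.

```python
# Toy illustration of lead time vs. sojourn time (all numbers invented).
# Suppose a cancer becomes screen-detectable at year 0, would have caused
# symptoms at year 4, and a screening test actually finds it at year 1.

detectable_at = 0.0  # disease enters the detectable, presymptomatic phase
symptoms_at = 4.0    # symptoms would have appeared on their own
detected_at = 1.0    # screening finds the disease

sojourn_time = symptoms_at - detectable_at  # the window screening can exploit
lead_time = symptoms_at - detected_at       # how much the diagnosis was advanced

print(f"sojourn time: {sojourn_time} years")  # 4.0 years
print(f"lead time: {lead_time} years")        # 3.0 years

# If the sojourn time were only 0.1 years, an annual screening test would
# almost never land inside the detectable window, so lead time would
# usually be zero and the program would catch little early disease.
```

The closing comment captures why sojourn time governs screening policy: the test interval has to be shorter than the detectable window for screening to reliably buy any lead time.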

Natural history data is especially critical for rare diseases. When a condition affects very few people, running a traditional clinical trial with a placebo group can be impractical or unethical. Instead, researchers use natural history studies to establish what would happen without the new treatment, creating an external comparison group. The FDA issued draft guidance in 2019 specifically encouraging natural history studies as a foundation for rare disease drug development.

These studies come in two main forms. Prospective studies recruit patients and follow them forward in time, collecting new data as the disease progresses. This approach is thorough but expensive and slow. Retrospective studies analyze existing medical records to piece together the disease’s trajectory after the fact. They’re faster and cheaper but limited by whatever data happens to be in those records, which may not include every detail a researcher needs.

AI and Disease Trajectory Modeling

Researchers are now using artificial intelligence to model natural history at a scale that was previously impossible. A transformer model called Delphi-2M, published in Nature in 2025, was trained on electronic health records from the UK Biobank and validated against Danish population registries. It can predict the rates of more than 1,000 diseases simultaneously based on an individual’s past diagnoses, lifestyle factors, and other health data.

The model can simulate how different health outcomes unfold over a decade or more, accounting for variables like smoking, alcohol consumption, and body mass index. It can also generate synthetic future health trajectories, estimating potential disease burden for up to 20 years. Practical applications include identifying people who would benefit most from diagnostic testing or flagging individuals whose disease risk is high enough to warrant screening before they meet conventional age-based criteria. The technology also reveals clusters of conditions that tend to occur together and how one diagnosis changes the likelihood of future ones, though researchers note the models can inherit biases present in their training data.
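
Delphi-2M itself is a transformer trained on real health records, but the core idea of sampling synthetic future trajectories can be shown with a much simpler stand-in. The Markov-chain sketch below is not the actual model; every state and transition probability is invented for illustration.

```python
import random

# Toy Markov-chain sketch of "generating synthetic health trajectories".
# This is NOT Delphi-2M; it only illustrates sampling a future disease
# sequence year by year. All states and probabilities are invented.

TRANSITIONS = {
    "healthy":       [("healthy", 0.90), ("hypertension", 0.08), ("diabetes", 0.02)],
    "hypertension":  [("hypertension", 0.85), ("heart disease", 0.15)],
    "diabetes":      [("diabetes", 0.90), ("heart disease", 0.10)],
    "heart disease": [("heart disease", 1.00)],
}

def simulate(start: str, years: int, rng: random.Random) -> list[str]:
    """Sample one synthetic trajectory: one health state per year."""
    path = [start]
    for _ in range(years):
        states, weights = zip(*TRANSITIONS[path[-1]])
        path.append(rng.choices(states, weights=weights)[0])
    return path

rng = random.Random(0)
trajectory = simulate("healthy", years=20, rng=rng)
print(" -> ".join(trajectory))
```

Even this toy version shows the clustering effect the text describes: once a trajectory enters the "hypertension" or "diabetes" state, its probability of reaching "heart disease" rises, so one diagnosis changes the likelihood of future ones.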