The principle of parsimony states that the simplest explanation fitting the available evidence is the one you should prefer. Often called Occam’s razor, it can be summed up in one line: “entities should not be multiplied beyond necessity.” In practice, this means that when two competing explanations account for the same observations equally well, the one making fewer assumptions wins.
The idea traces back to the 14th-century Franciscan friar William of Ockham (c. 1287–1347), born in the village of Ockham in Surrey, England. While studying and teaching at Oxford, he used this razor-sharp logic to dismantle much of medieval metaphysics, stripping away invisible forces and abstract entities that weren’t needed to explain observable reality. His radical theology eventually saw him summoned to the papal court at Avignon to answer charges of heresy. Despite the controversy, the principle became a cornerstone of modern science.
How Parsimony Works in Science
The parsimony principle is basic to all science: choose the simplest scientific explanation that fits the evidence. It doesn’t claim simpler theories are always true. It says that when you have no reason to pick the more complicated explanation, the simpler one is the better starting point. Adding complexity should be justified by data, not by speculation.
Think of it this way. If you hear hoofbeats outside your window, you could hypothesize horses or you could hypothesize zebras that escaped from a zoo. Both explanations account for the sound, but the second one requires an extra, unsupported assumption. Parsimony tells you to go with horses unless you have actual evidence of escaped zebras.
This logic guides how scientists choose between competing hypotheses across every discipline, from physics to psychology to ecology. It doesn’t forbid complex explanations. It simply raises the bar: if you want to add moving parts to your model, you need evidence for each one.
Building Evolutionary Trees
One of the clearest applications of parsimony is in evolutionary biology, where researchers use it to figure out how species are related. When building a family tree of organisms, scientists compare physical traits or DNA sequences and ask: which arrangement of branches requires the fewest evolutionary changes?
This approach, called maximum parsimony, works by searching through all possible tree shapes and counting the minimum number of genetic substitutions each one demands. The tree with the smallest total number of changes is considered the best hypothesis for how those species evolved. For example, if one tree requires a bony skeleton to evolve just once while another requires it to evolve independently in two separate lineages, parsimony favors the first tree. Both fit the data, but the second one hypothesizes an unnecessarily complicated history.
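The counting step can be sketched in a few lines. Below is a minimal version of Fitch’s small-parsimony algorithm, which, for one fixed tree shape and one character (say, a single DNA site), computes the minimum number of state changes that tree demands. The tree encoding and the species states are invented purely for illustration; real tools also search over tree shapes, which this sketch does not do.

```python
def fitch_changes(tree):
    """Return (possible_states, min_changes) for a tree given as either
    a leaf state like "A" or a pair (left_subtree, right_subtree)."""
    if isinstance(tree, str):               # leaf: observed state, no changes
        return {tree}, 0
    left, right = tree
    lstates, lcost = fitch_changes(left)
    rstates, rcost = fitch_changes(right)
    common = lstates & rstates
    if common:                              # subtrees agree: no new change
        return common, lcost + rcost
    return lstates | rstates, lcost + rcost + 1   # disagree: one change

# Two tree shapes relating four species whose states are A, A, G, G:
tree1 = (("A", "A"), ("G", "G"))   # groups like states together
tree2 = (("A", "G"), ("A", "G"))   # splits them apart

print(fitch_changes(tree1)[1])  # 1 change
print(fitch_changes(tree2)[1])  # 2 changes -> parsimony prefers tree1
```

The first tree explains the data with a single A-to-G change; the second needs two independent changes, so maximum parsimony favors the first.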
Parsimony in Statistical Models
In statistics, parsimony takes a more mathematical form. When researchers build models to explain data, they face a constant tension: a model with more variables will almost always fit your current data better, but it may be capturing noise rather than real patterns. This is called overfitting, and it makes models terrible at predicting anything new.
Tools like the Akaike Information Criterion (AIC) formalize parsimony by scoring each model on two things simultaneously: how well it fits the data and how many variables it uses. Every additional variable costs points. The model with the lowest AIC score strikes the best balance between accuracy and simplicity. Other scoring tools, including the Bayesian Information Criterion, work on the same basic logic with slightly different penalty weights. The core idea is identical to Ockham’s original insight: don’t add complexity unless the data demands it.
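In code, the trade-off is a one-line formula: AIC = 2k − 2·ln(L), where k is the number of fitted parameters and L is the model’s maximized likelihood (BIC swaps the penalty for k·ln(n), with n the sample size). The log-likelihood values below are invented numbers, chosen only to show how a slightly better fit can lose to a simpler model.

```python
import math

def aic(k, log_likelihood):
    """Akaike Information Criterion: lower is better."""
    return 2 * k - 2 * log_likelihood

def bic(k, n, log_likelihood):
    """Bayesian Information Criterion: same idea, heavier penalty."""
    return k * math.log(n) - 2 * log_likelihood

# Suppose a 2-parameter and a 5-parameter model were fit to the same
# n = 100 observations, and the bigger model fits slightly better:
simple_aic  = aic(k=2, log_likelihood=-120.0)   # 244.0
complex_aic = aic(k=5, log_likelihood=-119.0)   # 248.0

# The simple model wins: two extra log-likelihood points did not
# pay for three extra parameters.
print(min(("simple", simple_aic), ("complex", complex_aic),
          key=lambda m: m[1])[0])  # "simple"
```

Here the complex model’s better fit (a log-likelihood of −119 versus −120) is outweighed by its parameter penalty, which is exactly the behavior the criterion is designed to enforce.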
Parsimony in Medicine
Doctors apply a version of parsimony called diagnostic parsimony. When a patient walks in with a collection of symptoms, the instinct is to find one diagnosis that explains everything rather than assuming the person has three or four separate conditions at once. A single unifying diagnosis is simpler and, in many cases, more likely.
But medicine also has a well-known counter-rule called Hickam’s dictum: “patients can have as many diseases as they damn well please.” This acknowledges that real human bodies are messy. An elderly patient with fatigue, joint pain, and skin changes might have a single autoimmune condition, or they might genuinely have three unrelated problems. Experienced clinicians learn when to lean on parsimony and when to step back from it, especially in older patients or those with complex medical histories where multiple overlapping conditions are common.
Interpreting Animal Behavior
In animal psychology, the principle of parsimony takes a specific form known as Morgan’s Canon, proposed by the British psychologist C. Lloyd Morgan in the late 1800s. His rule: never explain an animal’s behavior as the result of a higher mental process if it can be explained by a simpler one. If a dog learns to open a gate, you should first consider trial-and-error learning before concluding the dog understands how latches work.
Morgan wasn’t saying animals lack complex thought. He was saying you need compelling evidence before you attribute it to them. If the existence of a higher cognitive ability in an animal is unknown, the most appropriate explanation uses a simpler process whose existence in that animal has already been established. This keeps researchers honest and prevents them from projecting human-like reasoning onto creatures whose inner lives they can’t directly observe.
Where Parsimony Falls Short
Parsimony is a guide, not a guarantee. Nature has no obligation to be simple, and there are real scenarios where the more complicated explanation turns out to be correct. Some disciplines are increasingly moving away from strict parsimony. In physics, systems biology, and medicine, researchers have found that extremely complex models can outperform simpler ones, predicting protein structures, improving climate forecasts, and revealing mechanisms of language acquisition that stripped-down models miss entirely.
There’s also a risk of oversimplifying. When parsimony leads to a model that leaves out important variables, the result is a biased picture of reality. The model might be easy to understand, but it communicates the wrong structure of the world. In fields like finance, engineering, and computer science, complex models often outperform simpler ones in accuracy and robustness.
The takeaway is that parsimony works best as a starting principle rather than an absolute law. It keeps you from inventing unnecessary complexity, but you should always be willing to add complexity when the evidence calls for it. The razor shaves away what isn’t needed. It was never meant to cut away what is.