What Was Believed About Mental Illness in the 1800s?

In the 1800s, mental illness was widely understood as a failure of character, a punishment from God, or a disease of the brain, depending on who you asked and when in the century you were asking. These beliefs shifted dramatically over the course of the century, moving from ancient humoral theories and religious explanations toward early neuroscience, but older ideas about sin and moral weakness persisted alongside newer medical thinking the entire time.

Humoral Theory and Its Decline

At the start of the 1800s, many physicians still operated under some version of humoral medicine, an ancient framework that attributed illness to imbalances in the body’s fluids. Treatments rooted in this model included bloodletting, purging, and emetics designed to restore balance. But this approach was losing ground. By the early decades of the century, doctors increasingly rejected humoral explanations in favor of viewing the brain and nervous system as the seat of madness. That shift was foundational: it moved mental illness from a vague, whole-body problem to something that could potentially be studied, located, and treated through physical methods.

Sin, Possession, and Religious Insanity

Religious explanations for mental illness carried enormous weight throughout the 1800s, especially in the United States. One of the more common admission diagnoses at American asylums was “religious insanity,” a now-extinct category that reflected a widespread theory: that religious belief and practice could actually cause madness. This diagnosis emerged at the intersection of Protestant revival movements and the growing asylum system, and it served a dual purpose. It gave psychiatrists a framework for patients who exhibited intense spiritual experiences, and it reinforced cultural anxieties about religious extremism.

Beyond formal diagnoses, many ordinary people still saw mental illness as a sign of moral corruption or divine punishment. The idea that madness resulted from sinful living, weak will, or demonic influence had deep roots in Western culture and didn’t simply vanish when doctors started examining brains. These beliefs shaped how families treated mentally ill relatives, how communities decided who needed confinement, and how patients understood their own suffering.

The Moral Treatment Movement

One of the most significant shifts in the first half of the century was the rise of moral treatment. This approach emphasized kindness, structure, and spiritual development rather than physical punishment or restraint. It called for humane interactions between staff and patients and operated on the belief that a calm, orderly environment could restore a disordered mind. Moral treatment flourished in American mental hospitals during the first half of the 1800s and represented a genuine departure from the chains, isolation cells, and beatings that had characterized earlier care.

The movement had real advocates with real impact. Dorothea Dix, a schoolteacher turned reformer, toured jails and poorhouses where mentally ill people were kept in horrific conditions and lobbied state legislatures for change. She played an instrumental role in the founding or expansion of more than 30 hospitals for the treatment of the mentally ill across the United States. Her work helped establish the principle that society had an obligation to care for people with mental illness rather than simply warehouse them.

How Women Were Diagnosed Differently

Gender profoundly shaped what counted as mental illness and who received a diagnosis. Hysteria, one of the most common labels applied to women, came with a staggering list of symptoms: seizures, paralysis, tremors, amnesia, anxiety attacks, a sensation of suffocation, loss of speech, deep sleep, and motor problems that followed no known pattern of nerve anatomy. The diagnosis was rooted in centuries-old beliefs that women were physically and morally inferior, “weak and easily influenced.” Earlier versions attributed the condition to a “wandering womb” caused by a lack of sexual activity or failure to bear children.

By the 1800s, doctors had largely moved past the wandering womb idea, but the social logic underneath it persisted. Women could be diagnosed based on irritability, laziness, “unruly social life,” or what physicians called the “secondary advantage” of using symptoms to manipulate their environment. In practice, this meant that behaviors considered assertive or inconvenient in women could be reframed as psychiatric symptoms. Elderly women, unmarried women, and women who didn’t conform to expected social roles were particularly vulnerable to diagnosis and institutionalization.

What Diagnoses Looked Like

No single standardized diagnostic system existed in the 1800s, but by the 1880 census the United States formally distinguished seven categories of mental illness: mania, melancholia, monomania, paresis, dementia, dipsomania, and epilepsy. These categories were broad and often overlapping. Mania covered states of agitation and excitement. Melancholia roughly corresponded to what we’d now call severe depression. Monomania described obsessive fixation on a single idea or delusion. Dipsomania referred to uncontrollable craving for alcohol. Paresis, a progressive condition caused by untreated syphilis infection reaching the brain, was one of the few mental illnesses with a clearly identifiable physical cause.

These categories tell you something important about how the era thought about mental illness: it was sorted by observable behavior rather than underlying mechanism. Two patients with wildly different problems might receive the same label if their outward symptoms looked similar, because doctors simply didn’t have the tools to distinguish what was happening inside the brain.

Physical Treatments and Their Logic

The treatments patients received reflected the era’s theories. Early in the century, when humoral thinking still had influence, bleeding and purging were standard. As the brain became the accepted seat of mental illness, new physical interventions emerged. In the early 1800s, physicians designed the first manufactured showers specifically to treat the insane. Sustained falls of cold water were prescribed to cool what doctors believed were hot, inflamed brains and to instill fear that would “tame impetuous wills.” That second rationale reveals how closely treatment was tied to ideas about discipline: even physical interventions were partly about breaking the patient’s resistance.

Rotating chairs, prolonged baths, and various forms of mechanical restraint were also common. These weren’t seen as cruel by the physicians who used them. They followed logically from the prevailing belief that mental illness was a physical disturbance in the brain that could be corrected through physical means, or that it was a failure of willpower that could be shocked back into order.

The Asylum System Takes Shape

The 1800s saw the construction of a massive institutional infrastructure for the mentally ill, particularly in England and the United States. In England, a series of laws built the system piece by piece. The 1774 Regulation of Madhouses Act required private facilities to be licensed but left public institutions unregulated. The 1808 County Asylum Act gave local authorities the power to build asylums for poor patients, though take-up was slow: only nine had been built by 1827. The 1834 Poor Law Amendment Act made it illegal to hold a dangerous mentally ill person in a workhouse for more than fourteen days.

The real turning point came in 1845, when paired Acts of Parliament, the Lunacy Act and the County Asylums Act, required every county and borough in England to provide asylum accommodation funded by local taxes. The legislation also established the Commissioners in Lunacy, a national inspectorate with real authority, and required two medical certificates before a person could be confined. This was simultaneously a humanitarian advance and the beginning of mass institutionalization. Asylums that were designed to house dozens of patients in a therapeutic environment eventually swelled to hold hundreds, then thousands, and the quality of care deteriorated accordingly.

The Brain Becomes Central

By the second half of the century, the most advanced medical thinking had shifted decisively toward viewing mental illness as a disease of the brain. In 1861, the French physician Paul Broca presented landmark evidence linking damage to a specific brain region with loss of speech, helping to establish the principle that different parts of the brain controlled different functions. Asylum doctors, particularly in England, began performing post-mortem examinations on patients who had died, looking for visible damage in the brain that could explain their symptoms. James Crichton-Browne, a prominent British asylum physician, made this kind of pathological anatomy the centerpiece of asylum research.

The approach was cautiously optimistic. If insanity was a brain disease, it could potentially be studied the same way other diseases were studied, and perhaps cured through physical methods. One area of particular interest was general paralysis, a condition now known to be caused by syphilis infection reaching the brain. Researchers observed specialized cells in the brains of these patients that appeared to be fighting an infection, work that was happening at the same time that scientists elsewhere were discovering how white blood cells attack bacteria. This line of research pointed toward what would eventually be confirmed in the early 1900s: that at least some forms of mental illness had infectious origins.

This late-century shift didn’t replace moral and religious explanations overnight. For most of the population, the idea that mental illness reflected personal weakness, divine judgment, or bad heredity remained deeply ingrained. But within the medical profession, the 1800s marked the decisive turn toward understanding the mind through the brain, a framework that, for better and worse, still shapes psychiatry today.