How Were Mental Illnesses Treated in the 1900s?

Mental illness treatment in the 1900s ranged from prolonged baths and forced institutionalization to lobotomies, insulin-induced comas, and eventually the first psychiatric medications. The century saw dramatic shifts: from warehousing patients in overcrowded asylums with almost no effective treatment, to a pharmacological revolution that emptied those same institutions. Some of these approaches helped patients. Many caused serious harm.

Commitment Was Easy, and Rights Were Few

For much of the early 1900s, having someone committed to a psychiatric institution required little more than a doctor’s recommendation and a finding of mental illness. There were no meaningful procedural barriers between a person and the doors of an asylum. Patients were presumed incapable of making their own decisions, and commitment was justified under a legal doctrine called parens patriae, the idea that the government had an obligation to care for people who couldn’t care for themselves. In practice, this meant families, neighbors, or local officials could have someone institutionalized with minimal oversight and no hearing.

Once inside, patients had no guaranteed right to refuse treatment. The system operated on the assumption that doctors knew best, and that confinement itself was therapeutic. This framework stayed largely intact until civil rights reforms began reshaping commitment laws in the 1960s and 1970s.

Life Inside the Asylum

State mental hospitals were the primary setting for psychiatric care through most of the century. By 1955, the U.S. had 558,239 severely mentally ill patients housed in public psychiatric hospitals, the highest number ever recorded. These institutions were often self-contained worlds. Many operated their own farms where convalescing patients worked the land, tended livestock, and supplied the hospital kitchens. At one Welsh hospital, 40 patients and nine staff cultivated over 80 acres of gardens and farmland in its first year alone.

Daily life wasn’t purely grim. By the mid-1930s, mental hospitals across England and Wales had cinemas, hosted weekly dances, and organized sports clubs as part of an effort to make occupation and entertainment central to rehabilitation. Hospitals ran their own sports teams, education departments, and art and music classes. Wards sometimes had fresh flowers, pet canaries, and a rotating supply of library books. Saturday evening film screenings, Christmas pantomimes, and annual New Year’s Eve dances were regular features at some facilities.

But these brighter moments coexisted with overcrowding, understaffing, and treatments that would later be considered cruel. The gap between the best and worst institutions was enormous, and many patients spent years or even decades confined with little prospect of discharge.

Hydrotherapy and Restraint

In the first decade of the 1900s, prolonged bath treatment became a standard therapy for agitated patients. The idea was simple: immerse a restless patient in lukewarm water and keep them there until they calmed down. Immersion for “hours or days at a stretch” was not unusual. Patients were sometimes wrapped in wet sheets or placed in continuous-flow baths that kept the water at a constant temperature. This was considered a humane alternative to mechanical restraints like straitjackets and manacles, which had dominated the previous century. Whether it felt humane to the patients is another question.

Insulin Coma Therapy

In the 1930s, a Viennese physician named Manfred Sakel introduced one of the era’s most dramatic treatments: deliberately injecting patients with enough insulin to send them into a hypoglycemic coma. The target was schizophrenia. A typical course involved inducing a coma every day for roughly 30 days, with Sundays off. Each session meant pushing the patient’s blood sugar dangerously low, waiting while they were unconscious, then reviving them with glucose.

Sakel’s early reports were strikingly optimistic. He claimed 88 percent of patients with recent-onset schizophrenia improved, with 70 percent making a full recovery. Other studies seemed to confirm the pattern: one Canadian report found 82 percent of insulin-treated patients were discharged compared to 47 percent of untreated patients from an earlier period. A British study reported an 88 percent discharge rate among insulin-treated patients versus 48 percent before insulin was available.

These numbers look impressive, but the studies behind them were deeply flawed. They lacked proper controls, and the comparison groups came from different time periods with different standards for discharge. When more rigorous follow-up was conducted later, the results collapsed. One American study of 393 patients found only 34 percent were discharged, and just 6 percent showed full recovery after an average of 3.3 years. Insulin coma therapy carried real risks, including brain damage and death, and was eventually abandoned as evidence mounted that it was no better than doing nothing.

Electroconvulsive Therapy

In April 1938, Italian physicians Ugo Cerletti and Lucio Bini applied electrical current to a human brain for the first time as a psychiatric treatment. Their method involved placing two electrodes on a patient’s temples, measuring the head’s electrical resistance, then delivering a calibrated shock to trigger a seizure. They settled on 125 volts for one second as a safe starting point.

Early electroconvulsive therapy was a rough experience. Patients received no anesthesia or muscle relaxants, which meant the seizures were violent enough to sometimes fracture bones or dislocate joints. Despite this, ECT spread rapidly through psychiatric hospitals worldwide because it appeared to help patients with severe depression and psychosis when nothing else did. Over the following decades, the addition of anesthesia and muscle relaxants made the procedure far safer. Unlike insulin coma therapy or lobotomy, ECT survived the century. It remains in clinical use today in a substantially modified form.

The Lobotomy Era

No treatment from the 1900s carries a darker reputation than the lobotomy. The procedure involved severing the connections between the brain’s frontal lobes, the area responsible for personality, planning, and emotional regulation, and the rest of the brain. The logic was that cutting these connections would calm patients who were severely agitated or psychotic.

The most aggressive promoter was Walter Freeman, an American neurologist who developed the transorbital lobotomy in 1945. His streamlined technique used a pick-like instrument inserted above the eyeball and driven through the thin bone separating the eye socket from the frontal lobes; the instrument’s point was then used to sever neural connections within the brain. Freeman performed or supervised more than 3,500 lobotomies by the late 1960s, sometimes conducting them in his office without a surgeon present.

Lobotomies were performed on a large scale during the 1940s and 1950s. Some patients did become calmer and were discharged from hospitals. But many were left with permanent personality changes, emotional flatness, intellectual impairment, or worse. The procedure fell out of favor as psychiatric medications became available in the mid-1950s, and it is now considered one of the most harmful chapters in medical history.

Psychoanalysis and Talk Therapy

While asylums relied on physical interventions, a parallel tradition of talk-based treatment was growing outside hospital walls. Sigmund Freud’s psychoanalytic ideas gained enormous influence in the early decades of the century, particularly in private practice. Freud himself advocated for making therapy accessible beyond the wealthy, writing in 1918 that free clinics should be established so that people “who would otherwise give way to drink” or children “for whom there is no choice but running wild or neurosis” could receive analysis.

Between 1920 and 1938, psychoanalysts created outpatient centers offering free mental health care in ten different cities across Europe. These clinics developed innovations including child analysis, short-term therapy, and crisis intervention. But psychoanalysis remained largely a treatment for the educated and affluent. In public institutions, where the most severely ill patients were housed, talk therapy played little role. The divide between what happened in a private analyst’s office and what happened behind asylum walls was vast.

The First Psychiatric Medications

The single biggest turning point of the century came in the early 1950s with the discovery of chlorpromazine, the first effective antipsychotic drug. For the first time, doctors had a medication that could reduce hallucinations, delusions, and severe agitation without rendering patients unconscious or destroying brain tissue.

The impact on psychiatric hospitals was immediate and enormous. In 1955, the U.S. state hospital population hit its peak at roughly 560,000 patients. By 1975, that number had dropped by two-thirds to about 193,000. Chlorpromazine didn’t just treat symptoms; it made discharge possible for hundreds of thousands of people who had been considered permanently institutionalized. It also triggered the development of other psychiatric medications, including antidepressants and anti-anxiety drugs, that reshaped mental health care for the rest of the century.

Classifying Mental Illness

For much of the early 1900s, there was no standardized system for diagnosing mental illness. Different hospitals used different terms, and two doctors examining the same patient might reach entirely different conclusions. The first attempt at a shared framework came in 1952 with the publication of the Diagnostic and Statistical Manual of Mental Disorders, developed using a classification system the U.S. Army had created during World War II. That first edition listed 106 diagnoses across 130 pages. It was a starting point, not a definitive guide, but it marked the beginning of efforts to make psychiatric diagnosis more consistent and less dependent on individual physicians’ personal theories.

Deinstitutionalization

The combination of new medications, changing legal standards, and growing public outrage over asylum conditions fueled a massive shift in the second half of the century. States began closing or downsizing their psychiatric hospitals, a process known as deinstitutionalization. The goal was to move patients into community-based care: outpatient clinics, halfway houses, and local mental health centers.

The results were mixed. Many patients who left hospitals did well on medication and with outpatient support. But the community services that were supposed to replace institutions were chronically underfunded. Large numbers of discharged patients ended up homeless, incarcerated, or cycling through emergency rooms. One PBS report described deinstitutionalization as a psychiatric “Titanic.” The 1900s ended with mental health care in a state of transition: the old asylum system had been largely dismantled and effective medications existed, but the infrastructure to support people living with mental illness in the community remained inadequate.