What Is a Black Box in Planes, Medicine, and AI?

“Black box” has three common meanings depending on context: the FDA’s most serious safety warning on prescription medications, the crash-survivable recorders on aircraft, and a term for AI systems whose decision-making can’t be explained. In every case, the phrase points to the same core idea: something important is happening out of view, and you need to pay attention to it.

The FDA Black Box Warning on Medications

In medicine, a “black box warning” (officially called a “boxed warning”) is the strongest safety alert the FDA can place on a prescription drug. It gets its nickname from the bold black border printed around the warning text on the drug’s label and prescribing information. A medication earns this label when clinical evidence shows it carries a risk of serious injury or death that doctors and patients need to weigh carefully before use.

Not every side effect qualifies. The FDA reserves boxed warnings for risks that are life-threatening, that could cause permanent disability, or that require specific monitoring or restricted use to keep patients safe. A drug can receive a boxed warning at any point in its lifecycle, sometimes years after it first hits the market, if new safety data emerges. The warning doesn’t mean the drug is banned or pulled from shelves. It means the benefits may still outweigh the risks for some patients, but everyone involved in the prescribing decision needs to be fully aware of the danger.

Common Drugs With Boxed Warnings

Two of the most widely recognized boxed warnings apply to medications millions of people take. In 2004, the FDA required antidepressants, particularly SSRIs, to carry a boxed warning about an increased risk of suicidal thoughts and behavior in children and adolescents; the warning was later extended to cover young adults as well. That decision remains controversial, with some researchers arguing the warning discouraged prescribing so much that it may have caused more harm than the risk it flagged.

Non-aspirin NSAIDs, the class of painkillers that includes ibuprofen and naproxen, carry a boxed warning for cardiovascular risk. The warning states that these drugs increase the risk of heart attack and stroke, either of which can be fatal. The risk can appear as early as the first weeks of use, climbs with longer use and higher doses, and applies even to people with no history of heart disease. Studies estimate the increased risk ranges from 10 to 50 percent or more, depending on the specific drug and dose. Patients who take NSAIDs after a first heart attack are more likely to die in the following year than those who don’t.

The Aviation Black Box

In aviation, a “black box” refers to the flight recorders carried on commercial aircraft. Despite the name, they’re typically bright orange to make them easier to find in wreckage. Every commercial plane carries two: a flight data recorder that logs hundreds of parameters like altitude, airspeed, and engine performance, and a cockpit voice recorder that captures audio from microphones in the flight deck.

These devices are built to survive extreme conditions. They’re engineered to withstand high-impact crashes, intense fire, and deep ocean pressure. Even so, they have limits. In at least one notable investigation, a flight data recorder’s magnetic recording medium was destroyed in a post-crash fire, despite meeting the applicable durability standards. Those standards are designed for fires of medium or low intensity that burn over a long period, but exceptionally severe fires can still overwhelm them.

Cockpit voice recordings come with their own challenges. Conversations picked up by omnidirectional microphones can be barely intelligible, requiring extensive analysis to decode. Aviation safety bodies have recommended that crews use individual microphones, particularly during takeoff and landing, to improve recording clarity. There have also been calls to add video recording of instrument panels and the flight deck, synchronized with the audio and flight data, though this remains a subject of debate around crew privacy.

Black Box AI and Why It Matters

In technology, “black box” describes any system where you can see what goes in and what comes out, but you can’t understand how it arrived at its answer. The term has become especially important in medicine, where AI tools now help diagnose diseases, recommend treatments, and flag abnormalities in medical imaging.
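The idea is easy to see in miniature. The sketch below is a hypothetical toy, not any real medical AI: a tiny neural network with randomly initialized weights. Its input and output are both fully visible, yet nothing in its internal numbers corresponds to a human-readable reason for the answer, which is the essence of the black-box problem at any scale.

```python
import math
import random

# A toy "black box": a tiny neural network with arbitrary weights.
# We can inspect every number in w1 and w2, but none of them maps to a
# human-readable rule -- there is no clause we can point to that says
# *why* the score came out the way it did.
random.seed(0)

INPUT_SIZE, HIDDEN_SIZE = 3, 4
w1 = [[random.uniform(-1, 1) for _ in range(INPUT_SIZE)]
      for _ in range(HIDDEN_SIZE)]
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN_SIZE)]

def predict(features):
    """Map a feature vector to a score between 0 and 1."""
    # Hidden layer: weighted sums passed through a ReLU.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)))
              for row in w1]
    # Output: weighted sum squashed by a sigmoid.
    z = sum(w * h for w, h in zip(w2, hidden))
    return 1 / (1 + math.exp(-z))

score = predict([0.2, 0.9, 0.4])  # the input is visible...
print(round(score, 3))            # ...and so is the output...
# ...but the path between them explains nothing in human terms.
```

Real diagnostic models differ only in scale: millions or billions of weights instead of sixteen, which makes the gap between “we can see the output” and “we can explain the output” far wider.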

The problem is straightforward: if an AI recommends a treatment and no one, not the patient, not the doctor, not even the engineers who built it, can explain why, that creates real ethical and practical concerns. A patient can’t meaningfully consent to a course of action if the reasoning behind it is invisible. A doctor can’t exercise professional judgment if the tool’s logic is opaque. And if the AI makes a mistake, it’s harder to catch, harder to correct, and harder to assign responsibility.

The FDA has begun addressing this directly. Its guidance on machine learning in medical devices treats transparency as a core principle, defining it as the degree to which a device’s intended use, development, performance, and logic are clearly communicated to everyone involved. “Logic” in this context means information about how the device reached its output. “Explainability” is the degree to which that logic can be described in a way a person actually understands. The FDA’s framework calls for developers to share information about a device’s benefits, risks, known biases, confidence intervals, data gaps, and limitations. It also emphasizes that transparency isn’t just for doctors. It’s relevant to anyone involved in a patient’s care, including patients themselves.

Beyond clinical accuracy, researchers have raised concerns about the psychological and financial burdens that unexplainable AI systems can create. A patient told they need an expensive or invasive procedure based on a recommendation no one can explain faces a uniquely stressful situation, one that older ethical frameworks around medical technology didn’t anticipate.

The Common Thread

Across all three uses, “black box” signals a gap between what’s happening and what people can see or understand. On a medication label, it’s a warning that a serious risk exists beneath the surface of an otherwise helpful drug. On an aircraft, it’s a sealed record of what happened when no one else could observe. In AI, it’s a system making consequential decisions through a process no human can follow. The phrase endures because the tension it describes, between hidden processes and the need for transparency, keeps showing up in new forms.