Drug names are put through handwriting tests to catch dangerous similarities before a medication reaches the market. One in four medication errors reported in the United States involves confusion between drug names, and sloppy or ambiguous handwriting is a major contributor. Before the FDA approves a new drug’s brand name, the proposed name goes through a series of tests designed to find out whether it could be mistaken for an existing drug when scribbled on a prescription pad, read aloud over the phone, or selected from a dropdown menu.
How Drug Names Get Confused
Many drug names share letter patterns, similar lengths, or overlapping syllables. When a prescriber writes quickly, the difference between two names can collapse into illegibility. A pharmacist reading a rushed prescription for one medication might reasonably interpret it as another. If those two drugs treat completely different conditions or carry different dosing requirements, the patient receives the wrong treatment. These are called look-alike, sound-alike (LASA) drug name pairs, and safety organizations maintain evolving lists of the most problematic combinations.
The risk isn’t hypothetical. The Institute for Safe Medication Practices publishes a regularly updated list of confused drug name pairs drawn from real-world error reports. These pairs have caused actual mix-ups in hospitals, pharmacies, and clinics, sometimes with serious consequences.
What the FDA Requires Before Approval
The FDA evaluates every proposed brand name as part of the drug approval process. Under federal law, a drug is considered “misbranded” if its labeling is false or misleading, and a name that’s too easily confused with another product falls under that umbrella. The agency can refuse to approve an application on those grounds alone.
The review involves several layers of analysis. First, the FDA runs a preliminary screening to flag obvious conflicts. Then the proposed name goes through an orthographic and phonological similarity assessment, which examines how closely the name resembles existing drug names both in spelling and in pronunciation. The agency also searches drug databases and runs computational analyses to quantify how similar the new name is to everything already on the market.
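The FDA’s exact computational methods aren’t spelled out here, but the core idea behind an orthographic similarity assessment can be sketched with a standard edit-distance measure. The sketch below is illustrative only: the `levenshtein` and `orthographic_similarity` functions are my own names, and the normalization is one common convention, not the agency’s actual metric.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def orthographic_similarity(a: str, b: str) -> float:
    """1.0 means identical spelling; 0.0 means nothing in common."""
    a, b = a.lower(), b.lower()
    return 1 - levenshtein(a, b) / max(len(a), len(b))

# A classic confused pair from real-world error reports:
print(round(orthographic_similarity("hydroxyzine", "hydralazine"), 2))  # → 0.73
```

Only three of eleven letters differ between these two names, which is exactly the kind of overlap a spelling-based screen is meant to surface.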
Beyond the name itself, the FDA evaluates proposed container labels and packaging. If the product name is obscured by a logo, printed in an illegible font, or rendered in a color that reduces readability, those design choices could contribute to selection errors in real-world settings like a busy pharmacy shelf or a hospital supply cart.
How Handwriting Simulations Work
The most direct form of testing is the prescription simulation study. In these studies, participants are asked to use a proposed drug name in tasks that mirror what actually happens in clinical practice: handwriting a prescription, reading a handwritten order, calling in a verbal prescription, or selecting from an electronic list. The goal is to approximate real-world conditions as closely as possible, because a name that looks perfectly distinct in a typed document might become ambiguous when written by hand under time pressure.
For example, when the heartburn drug Aciphex was tested, simulation studies included handwritten medication orders, verbal prescriptions read aloud, and electronic orders displayed in standard hospital fonts. Each format can introduce different types of confusion. A handwritten “Aciphex” might look like a different drug entirely depending on the prescriber’s penmanship, while a verbal order might be misheard as a phonetically similar name.
These simulation tasks are designed to reflect the full chain of people who handle a drug name: the prescriber who writes or types it, the pharmacist or nurse who reads or hears it, and the technician who pulls the product from a shelf. Errors can happen at any link in that chain, so testing covers all of them.
Computer Tools That Score Name Similarity
The FDA also uses a software tool called the Phonetic and Orthographic Computer Analysis (POCA) program. This algorithm calculates how similar two drug names are based on both their spelling and their pronunciation. It generates a similarity score that helps reviewers identify names with a high likelihood of confusion, even if the overlap isn’t immediately obvious to a human reader scanning the names side by side.
POCA is particularly useful because the universe of approved drug names is enormous. A human reviewer might catch that a proposed name looks like one or two existing drugs, but the software can systematically compare it against every name in the database and flag matches that might otherwise slip through.
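POCA’s actual scoring model isn’t reproduced here, but a database-wide screen that combines a spelling component with a phonetic one can be sketched as follows. This is a loose stand-in, not POCA itself: `difflib`’s ratio plays the orthographic role, a minimal classic Soundex code plays the phonetic role, and the 0.7 cutoff is an arbitrary illustration, not an FDA threshold.

```python
import difflib

def soundex(name: str) -> str:
    """Classic 4-character Soundex code, a rough phonetic fingerprint."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    out, prev = name[0].upper(), codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":   # h and w do not separate repeated consonants
            prev = code
    return (out + "000")[:4]

def screen(proposed, database, cutoff=0.7):
    """Flag every existing name that looks or sounds like the proposed one."""
    flags = []
    for existing in database:
        spelling = difflib.SequenceMatcher(None, proposed.lower(),
                                           existing.lower()).ratio()
        sounds_alike = soundex(proposed) == soundex(existing)
        if spelling >= cutoff or sounds_alike:
            flags.append((existing, round(spelling, 2), sounds_alike))
    return flags

print(screen("prednisone", ["prednisolone", "Celexa", "Lipitor"]))
# → [('prednisolone', 0.91, True)]
```

The point of running both components is visible in the output: prednisone and prednisolone, a well-known confused pair, collide on spelling and phonetics at once, while unrelated names drop out entirely.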
Human Factors Testing in Realistic Settings
Beyond simple handwriting legibility, the FDA requires what’s known as human factors testing for drug products, especially those with packaging that could be confused with similar items. These studies place healthcare workers in simulated clinical environments and observe whether they can reliably distinguish one product from another. The testing accounts for real-world pressures: noise, distractions, rapidly changing circumstances, and the kind of cognitive load that comes with managing multiple patients at once.
In one common scenario, nurses or pharmacists are asked to select the correct prefilled syringe from a group of similar-looking products. If participants pick the wrong syringe at a significant rate, the labeling or naming needs to be redesigned before the product can be approved. The FDA considers “select incorrect product” a critical task failure because it leads directly to a patient receiving the wrong drug.
Tall Man Lettering as a Safety Fix
When two drug names are already on the market and can’t be changed, one widely used intervention is Tall Man Lettering. This technique capitalizes the portions of each name that differ, making the visual distinction more obvious. For instance, if two names share the same first and last few letters but differ in the middle, those middle letters are printed in uppercase.
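The mechanics of that middle-letter emphasis can be sketched in a few lines. Official Tall Man pairs are curated by the FDA and ISMP rather than generated mechanically, so treat this as a toy model of the idea: keep the shared prefix and suffix lowercase and capitalize whatever differs between them.

```python
def tall_man(a: str, b: str) -> tuple[str, str]:
    """Capitalize the letters where two drug names diverge."""
    a, b = a.lower(), b.lower()
    # Length of the shared prefix
    p = 0
    while p < min(len(a), len(b)) and a[p] == b[p]:
        p += 1
    # Length of the shared suffix, not overlapping the prefix
    s = 0
    while s < min(len(a), len(b)) - p and a[-1 - s] == b[-1 - s]:
        s += 1

    def emphasize(name: str) -> str:
        end = len(name) - s
        return name[:p] + name[p:end].upper() + name[end:]

    return emphasize(a), emphasize(b)

print(tall_man("hydroxyzine", "hydralazine"))
# → ('hydrOXYzine', 'hydrALAzine')
```

For this particular pair the toy version happens to reproduce the rendering on ISMP’s published list, because the names share an exact four-letter prefix and suffix; curated pairs with messier overlaps need human judgment.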
The approach works. In a simulation study with critical care nurses identifying syringe labels, the error rate dropped from 5.3% without Tall Man Lettering to 0.7% with it. That’s a shift from 8 wrong selections out of 150 attempts down to just 1. The study also found that Tall Man Lettering changed how nurses visually scanned the labels, drawing their attention to the distinguishing characters rather than letting them rely on a quick overall impression of the name’s shape.
Why This Still Matters in the Digital Age
Electronic prescribing has dramatically reduced handwriting-related errors. In a comparative study of 398 prescriptions, 35.7% of handwritten prescriptions contained errors compared to just 2.5% of electronic ones. Handwritten prescriptions were especially prone to missing dose information (12.1% vs. 1.0%) and missing the route of administration (15.1% vs. 0%).
But electronic systems haven’t eliminated the problem. Dropdown menus and autocomplete features introduce their own risks. A prescriber typing the first few letters of a drug name might accidentally select the wrong option from a list of similar-looking results. Sound-alike confusion persists whenever prescriptions are communicated verbally, whether over the phone or in a loud hospital unit. And handwritten prescriptions, while declining, are still used in some clinical settings. The fundamental challenge of distinguishing between similar drug names remains, which is why the FDA continues to require rigorous name testing regardless of how prescriptions are transmitted.
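The dropdown risk is easy to demonstrate. In the sketch below, the formulary list is a hypothetical excerpt (though each name is a real drug), and the point is simply that a short typed prefix leaves several clinically unrelated medications one misclick apart.

```python
# Hypothetical formulary excerpt: five real drugs sharing the prefix "met".
FORMULARY = ["metformin", "methotrexate", "methylprednisolone",
             "metoprolol", "metronidazole"]

def autocomplete(prefix: str, names=FORMULARY) -> list[str]:
    """Return every formulary name starting with the typed prefix."""
    prefix = prefix.lower()
    return [n for n in names if n.startswith(prefix)]

print(autocomplete("met"))   # five candidates, one misclick apart
print(autocomplete("metf"))  # → ['metformin']
```

A diabetes drug, a chemotherapy agent, a corticosteroid, a beta blocker, and an antibiotic all match the same three keystrokes; one extra letter collapses the list to a single result, which is why many safety guidelines recommend typing more characters before selecting.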