A breach in healthcare is the unauthorized access, use, or disclosure of protected health information (PHI) in a way that violates federal privacy rules. Under HIPAA, any time someone’s medical records, billing details, or other identifiable health data is exposed to people who shouldn’t have it, that exposure is legally presumed to be a breach unless the organization can prove otherwise. This applies to hospitals, doctor’s offices, insurance companies, and any third-party vendor that handles patient data.
The Legal Definition
Federal law defines a breach as “the acquisition, access, use, or disclosure of protected health information in a manner not permitted [under the HIPAA Privacy Rule] which compromises the security or privacy of the protected health information.” In practical terms, this covers a wide range of incidents: a hacker stealing a database of patient records, an employee snooping through a celebrity’s medical chart, a misdirected fax containing lab results, or a laptop full of patient data left in a car.
Protected health information includes anything that identifies a patient and relates to their health status, treatment, or payment for care. Names, dates of birth, Social Security numbers, diagnoses, prescription histories, and even IP addresses tied to a patient portal all count.
Three Exceptions That Don’t Count as Breaches
Not every accidental exposure qualifies. Federal regulations carve out three specific exceptions:
- Unintentional access by an authorized worker. If a nurse accidentally opens the wrong patient’s chart while doing their job, and they don’t share or misuse what they saw, it’s not a breach. The access has to be made in good faith, within the scope of their role.
- Inadvertent sharing between authorized colleagues. If one authorized employee accidentally discloses PHI to another authorized employee at the same organization, and the information goes no further, it falls outside the breach definition.
- Disclosure where retention is unlikely. If PHI is disclosed to an unauthorized person but the organization has a good faith belief that the person couldn’t reasonably have retained the information, it’s excluded. Think of discharge papers handed to the wrong patient and retrieved before anyone could read them.
How Organizations Assess Whether a Breach Occurred
When patient data is improperly accessed or disclosed, the law presumes it’s a breach. To overcome that presumption, the healthcare organization must conduct a risk assessment using four specific factors:
- What type of information was involved. A file containing names, Social Security numbers, and diagnoses is far more sensitive than one with only appointment dates. The more identifiers exposed, the higher the risk.
- Who received or accessed the data. A disclosure to another healthcare provider is less concerning than one to a stranger or a known bad actor.
- Whether the data was actually viewed. If a laptop was lost but recovered with no evidence anyone accessed the files, that lowers the probability of compromise.
- What was done to reduce the damage. Steps like recovering the data, getting a confidentiality agreement from the recipient, or confirming the information was destroyed all factor in.
Only if this assessment demonstrates a low probability that the data was compromised can the organization avoid treating the incident as a reportable breach. In all other cases, notification is required.
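The four-factor assessment above can be sketched as a simple checklist. This is an illustrative sketch only, not a compliance tool: the field names and the all-factors-must-pass rule are hypothetical simplifications of what is, in practice, a documented legal judgment.

```python
# Illustrative sketch of the four-factor breach risk assessment.
# Field names and the "low probability" rule are hypothetical
# simplifications, not a legal standard.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    sensitive_identifiers_exposed: bool  # Factor 1: type of information involved
    recipient_is_provider: bool          # Factor 2: who received or accessed it
    data_actually_viewed: bool           # Factor 3: was the data actually viewed
    harm_mitigated: bool                 # Factor 4: steps taken to reduce damage

    def low_probability_of_compromise(self) -> bool:
        # HIPAA presumes a breach; in this simplified model, every factor
        # must point toward low risk to overcome that presumption.
        return (not self.sensitive_identifiers_exposed
                and self.recipient_is_provider
                and not self.data_actually_viewed
                and self.harm_mitigated)

# Example: a laptop recovered with forensic proof the files were never opened.
incident = RiskAssessment(
    sensitive_identifiers_exposed=False,
    recipient_is_provider=True,
    data_actually_viewed=False,
    harm_mitigated=True,
)
print(incident.low_probability_of_compromise())  # True -> may avoid notification
```

If any single factor points the other way, the presumption of a breach stands and notification is required.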
Who Gets Notified and When
Once a breach is confirmed, the healthcare organization must notify every affected individual, typically by letter, without unreasonable delay and no later than 60 days after discovering the breach. These notifications must describe what happened, what types of information were involved, what steps individuals can take to protect themselves, and what the organization is doing in response.
The size of the breach determines what happens next. When 500 or more people in a single state or jurisdiction are affected, the organization must also notify prominent media outlets serving that area. Breaches of this scale are simultaneously reported to the Department of Health and Human Services (HHS), which publishes them on a public database sometimes called the “Wall of Shame.” Smaller breaches affecting fewer than 500 individuals are logged with HHS annually rather than immediately.
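The reporting tiers above can be summarized in a small decision function. This is a simplified sketch under the assumptions described in this section (it treats the 500-person count as applying to a single state or jurisdiction); the function and its return strings are illustrative, not legal advice.

```python
def notification_obligations(affected_count: int) -> list[str]:
    """Return the notification steps triggered by a confirmed breach.

    Simplified model of the thresholds described above; assumes the
    affected individuals are in a single state or jurisdiction.
    """
    steps = ["notify each affected individual (typically by letter)"]
    if affected_count >= 500:
        steps.append("notify prominent media serving the affected area")
        steps.append("report to HHS at the same time (published publicly)")
    else:
        steps.append("log the incident for the annual report to HHS")
    return steps

for count in (1200, 40):
    print(count, "->", notification_obligations(count))
```

A 1,200-person breach triggers individual, media, and immediate HHS notification; a 40-person breach triggers individual notification plus the annual HHS log.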
Third-party vendors that handle patient data on behalf of a healthcare provider, known as business associates, have their own obligations. If a billing company, cloud storage provider, or IT contractor discovers a breach, they must notify the healthcare organization so the notification process can begin.
The Encryption Safe Harbor
One important protection exists for organizations that encrypt their data. If stolen or lost patient information was properly encrypted and the encryption key wasn’t also compromised, the data is not considered “unsecured” under HIPAA’s framework, and the incident doesn’t trigger breach notification requirements. This is sometimes called the encryption safe harbor, and it’s one of the strongest incentives for healthcare organizations to encrypt data both in storage and in transit. A stolen laptop with an encrypted hard drive is a very different situation from one with unprotected files.
Financial Penalties for Violations
Penalties for breaches vary based on how much the organization knew and whether it tried to fix the problem. The federal penalty structure has four tiers:
- Unknowing violations: $100 to $50,000 per violation, capped at $25,000 per year for identical violations
- Reasonable cause (should have known): $1,000 to $50,000 per violation, up to $100,000 per year
- Willful neglect, corrected in time: $10,000 to $50,000 per violation, up to $250,000 per year
- Willful neglect, not corrected: $50,000 per violation, up to $1.5 million per year
Because each affected patient record can count as a separate violation, a single incident involving thousands of records can result in penalties reaching into the millions. Beyond federal fines, state attorneys general can bring additional enforcement actions, and affected patients may pursue civil lawsuits.
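The per-record math above can be illustrated with a quick calculation. The tier figures mirror the four tiers listed; the function itself is a hypothetical simplification (real penalties are set case by case by regulators).

```python
# Simplified illustration of how per-record penalties run into annual caps.
# Tier amounts mirror the four tiers listed above; the calculation logic
# is an assumption for illustration, not the actual enforcement formula.
TIERS = {
    "unknowing":                 {"min": 100,    "max": 50_000, "annual_cap": 25_000},
    "reasonable_cause":          {"min": 1_000,  "max": 50_000, "annual_cap": 100_000},
    "willful_neglect_corrected": {"min": 10_000, "max": 50_000, "annual_cap": 250_000},
    "willful_neglect":           {"min": 50_000, "max": 50_000, "annual_cap": 1_500_000},
}

def estimated_penalty(tier: str, records: int, per_violation: int) -> int:
    t = TIERS[tier]
    # Clamp the per-violation amount to the tier's range.
    per_violation = max(t["min"], min(per_violation, t["max"]))
    # Each exposed record can count as a separate violation,
    # but the total is capped per calendar year.
    return min(records * per_violation, t["annual_cap"])

# Even 3,000 records at the Tier 1 minimum blows past the annual cap:
print(estimated_penalty("unknowing", 3_000, 100))  # 25000
```

Note how quickly the caps bind: at the willful-neglect tier, a breach of just 30 records already reaches the $1.5 million annual maximum.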
What Happens if Your Data Is Breached
If you receive a breach notification letter from a healthcare provider or insurer, it means your personal health information was exposed. While there’s no federal law requiring healthcare organizations to offer credit monitoring after a breach, most large organizations do so voluntarily, typically providing 12 to 24 months of free monitoring as a goodwill measure and to reduce legal exposure.
Regardless of what the organization offers, you can take steps on your own. Place a fraud alert or credit freeze with the three major credit bureaus. Review your explanation of benefits statements from your insurer for any services you didn’t receive, which could indicate medical identity theft. Monitor your credit reports closely. All U.S. consumers can access free credit reports through annualcreditreport.com.
Medical identity theft is particularly insidious because it can insert incorrect information into your health records, potentially affecting your future care. If you suspect someone has used your identity to obtain medical services, you have the right to request copies of your medical records and ask for corrections to any inaccurate entries.

