An attack rate is the percentage of a defined group of people who get sick during a specific time period, usually an outbreak. If 20 out of 100 people at a wedding reception develop food poisoning, the attack rate is 20%. Despite the name, it’s technically not a “rate” in the strict mathematical sense. It’s a proportion, which is why epidemiologists also call it incidence proportion, risk, or cumulative incidence.
How It’s Calculated
The formula is straightforward: divide the number of new cases by the size of the population at the start of the time period, then multiply by 100 to get a percentage.
The key detail is that the denominator is the population “at risk,” meaning people who were disease-free at the start and had a realistic chance of being exposed. At a company picnic, that’s everyone who attended. In a school outbreak, it’s all the enrolled students. You wouldn’t count someone who was already sick before the outbreak began, and you wouldn’t count people who were never present.
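As a minimal sketch of the arithmetic (the function name and the reuse of the wedding numbers from above are illustrative, not from any standard library):

```python
def attack_rate(new_cases: int, population_at_risk: int) -> float:
    """Attack rate as a percentage: new cases / population at risk x 100."""
    return new_cases / population_at_risk * 100

# The wedding example from above: 20 ill out of 100 attendees at risk.
print(attack_rate(new_cases=20, population_at_risk=100))  # 20.0
```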
Why It Matters in Outbreak Investigations
Attack rates are one of the most practical tools for tracing the source of an outbreak, especially a foodborne one. Investigators calculate a separate attack rate for every food item served, comparing the rate among people who ate a particular food with the rate among people who didn't. A strong suspect has three characteristics: a high attack rate among those who ate it, a low attack rate among those who didn't, and the capacity to account for most of the cases.
A real-world example makes this concrete. In one foodborne outbreak analysis, investigators calculated attack rates for every dish at a meal. Most foods showed similar illness rates whether people ate them or not. Baked ham, for instance, had a 63% attack rate among those who ate it and 59% among those who didn’t, a negligible difference. Vanilla ice cream told a completely different story: 79.6% of people who ate it got sick, compared to just 14.3% of those who didn’t. That enormous gap, a risk ratio of 5.57, identified the ice cream as the culprit.
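That comparison is easy to express in code. Here is a short sketch using the attack rates quoted above (the function and dictionary names are illustrative):

```python
def risk_ratio(rate_ate: float, rate_not_ate: float) -> float:
    """Risk ratio: attack rate among eaters / attack rate among non-eaters."""
    return rate_ate / rate_not_ate

# Attack rates (in %) from the outbreak described above.
foods = {
    "baked ham":         (63.0, 59.0),
    "vanilla ice cream": (79.6, 14.3),
}
for food, (ate, not_ate) in foods.items():
    print(f"{food}: risk ratio = {risk_ratio(ate, not_ate):.2f}")
# baked ham: risk ratio = 1.07
# vanilla ice cream: risk ratio = 5.57
```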
This comparison technique is the backbone of cohort studies during outbreak investigations. Without it, investigators would be guessing.
Food-Specific and Group-Specific Rates
Attack rates can be sliced in different ways depending on what you’re investigating. A crude attack rate covers everyone in the affected population, giving you the big picture. Food-specific attack rates, like the ice cream example above, narrow the calculation to people who ate or didn’t eat a particular item. You can also calculate group-specific attack rates by age, sex, dormitory wing, vaccination status, or any other characteristic that might explain why some people got sick and others didn’t.
In a tuberculosis outbreak at a South Carolina prison, for example, investigators compared attack rates by dormitory location. Of 157 inmates on the East wing, 28 developed tuberculosis (an attack rate of about 18%), compared with 4 of 137 on the West wing (about 3%). That difference pointed investigators toward the East wing as the focal point of transmission.
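A group-specific comparison is the same formula applied per subgroup. A minimal sketch using the prison counts above (the dictionary layout is just one convenient way to organize the data):

```python
# Group-specific attack rates from the prison TB outbreak described above.
groups = {
    "East wing": (28, 157),  # (cases, inmates at risk)
    "West wing": (4, 137),
}
rates = {g: cases / at_risk * 100 for g, (cases, at_risk) in groups.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.1f}%")  # East wing: 17.8%, West wing: 2.9%

print(f"risk ratio: {rates['East wing'] / rates['West wing']:.1f}")  # 6.1
```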
Secondary Attack Rate
A secondary attack rate measures how easily a disease spreads from person to person within a close group, typically a household. Instead of looking at a whole event or community, it zooms in on the people who were in close contact with a known case and asks: what proportion of them also got sick?
The denominator here excludes the first (index) case and only counts the susceptible contacts. If someone brings the flu home and three of four other family members catch it, the household secondary attack rate is 75%.
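A sketch of that calculation, assuming the index case has already been removed from the contact count:

```python
def secondary_attack_rate(secondary_cases: int, susceptible_contacts: int) -> float:
    """SAR as a percentage; the index case is excluded from both counts."""
    return secondary_cases / susceptible_contacts * 100

# Household flu example from above: an index case plus four other family
# members, three of whom became ill.
print(secondary_attack_rate(secondary_cases=3, susceptible_contacts=4))  # 75.0
```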
This measure became especially visible during the COVID-19 pandemic. Research published in the CDC’s Emerging Infectious Diseases journal found that household secondary attack rates for SARS-CoV-2 were 58.2% during the period when the Delta variant dominated and jumped to 80.9% when Omicron took over. Numbers like these help public health officials gauge how contagious a new variant really is in everyday settings and shape decisions about isolation guidelines.
Attack Rate vs. Incidence Rate
These two terms sound similar but serve different purposes. An attack rate applies to a specific, well-defined group over a short, contained time period, like attendees at a conference or residents of a single neighborhood during a two-week outbreak. An incidence rate measures new cases in a broader population (a city, a country) over a longer time frame, such as months or years.
The practical difference: you’d use an attack rate to describe what happened at a particular event or during a particular outbreak. You’d use an incidence rate to track how common a disease is across a region over time. Attack rates are snapshots. Incidence rates are trend lines.
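The arithmetic difference is that an attack rate divides by people, while an incidence rate divides by person-time. A minimal sketch (the conference and person-time figures here are invented for illustration):

```python
# Attack rate: proportion of a fixed group over a short, contained period.
conference_cases, conference_attendees = 30, 200
attack = conference_cases / conference_attendees * 100   # 15.0 (%)

# Incidence rate: new cases per unit of person-time in a broader population.
# Hypothetical: 480 new cases observed over 120,000 person-years.
new_cases, person_years = 480, 120_000
incidence = new_cases / person_years * 100_000           # 400 per 100,000 py

print(f"attack rate: {attack:.1f}% of attendees")
print(f"incidence rate: {incidence:.0f} per 100,000 person-years")
```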
How Attack Rates Guide Decisions
Beyond pinpointing the source of an outbreak, attack rates help public health teams decide how to respond. A high attack rate signals that a large fraction of exposed people are getting sick, which might call for aggressive containment measures. Comparing attack rates between vaccinated and unvaccinated groups reveals how well a vaccine protects in real-world conditions. In a 2002 chickenpox outbreak in Oregon, 18 of 152 vaccinated children developed the disease (about 12%) compared with 3 of 7 unvaccinated children (about 43%), a risk ratio of roughly 0.28 that demonstrated the vaccine's protective effect even though breakthrough infections occurred.
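One common follow-on calculation is vaccine effectiveness, estimated with the standard cohort formula VE = 1 − RR. A sketch using the Oregon counts above:

```python
# Attack rates among vaccinated and unvaccinated children (Oregon, 2002).
ar_vaccinated = 18 / 152 * 100    # ~11.8%
ar_unvaccinated = 3 / 7 * 100     # ~42.9%

risk_ratio = ar_vaccinated / ar_unvaccinated      # ~0.28
vaccine_effectiveness = (1 - risk_ratio) * 100    # ~72%

print(f"risk ratio: {risk_ratio:.2f}")
print(f"vaccine effectiveness: {vaccine_effectiveness:.0f}%")
```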
Attack rates also communicate risk in a way the public can immediately grasp. Telling people that “80% of household contacts got infected” is far more intuitive than quoting an R-naught value or an odds ratio. That clarity is why the term persists in public health communication, even though statisticians would prefer the more precise label “incidence proportion.”