Risk communication is the real-time exchange of information, advice, and opinions between experts or officials and people facing a threat to their health, safety, or wellbeing. The World Health Organization defines its core purpose simply: enabling people at risk to make informed decisions and take protective action. It applies to everything from disease outbreaks and chemical spills to food safety warnings and natural disasters. Unlike one-way public announcements, effective risk communication is a two-way process where the audience’s concerns, values, and emotions shape the message just as much as the science does.
Why People Don’t Perceive Risk the Way Experts Do
The gap between how experts measure a risk and how the public feels about it is the central challenge of the field. Risk communication scholar Peter Sandman captured this with a simple formula: risk, as the public experiences it, equals the actual hazard plus “outrage,” meaning the emotional and social reactions surrounding it. Hazard is what scientists measure: the probability of harm multiplied by its severity. Outrage is everything else, including whether the risk feels voluntary or imposed, familiar or exotic, fair or unjust.
This explains why people will calmly accept risks that kill far more people (like driving) while fiercely resisting risks that are statistically smaller (like a chemical plant in their neighborhood). The outrage isn’t irrational. It reflects real values: control, fairness, trust. Sandman’s model showed that public intolerance of certain risks often had little to do with the actual exposure levels and everything to do with whether people felt the risk was being forced on them or managed by institutions they didn’t trust. Any communicator who ignores the outrage side of the equation and leads only with data will fail to connect.
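Sandman's formula can be made concrete with a small sketch. The numeric scales and the equal weighting of outrage factors below are illustrative assumptions of mine, not part of Sandman's original, largely qualitative formulation; the point is only to show how a tiny hazard plus high outrage can outweigh a larger hazard with no outrage.

```python
# A minimal sketch of Sandman's model: perceived risk = hazard + outrage.
# Scales (0-1) and the 0.25-per-factor outrage weighting are illustrative
# assumptions, not Sandman's own numbers.

def hazard(probability: float, severity: float) -> float:
    """Technical hazard: probability of harm times its severity."""
    return probability * severity

def outrage(voluntary: bool, familiar: bool, fair: bool, trusted: bool) -> float:
    """Crude outrage score: each missing reassuring factor adds 0.25."""
    return sum(0.25 for ok in (voluntary, familiar, fair, trusted) if not ok)

def perceived_risk(probability: float, severity: float, **factors: bool) -> float:
    return hazard(probability, severity) + outrage(**factors)

# Driving: comparatively high hazard, but voluntary, familiar, fair, self-controlled.
driving = perceived_risk(0.01, 0.9, voluntary=True, familiar=True, fair=True, trusted=True)

# Nearby chemical plant: far smaller hazard, but imposed, exotic, unfair, distrusted.
plant = perceived_risk(0.0001, 0.9, voluntary=False, familiar=False, fair=False, trusted=False)

print(driving < plant)  # prints True: the smaller hazard carries the larger perceived risk
```

The design choice worth noticing is that outrage enters additively, not as a multiplier on hazard: even when the measured hazard approaches zero, perceived risk stays high if every outrage factor is present, which is exactly the pattern communicators see in practice.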
What Shapes How People Judge a Threat
Several psychological and social factors reliably shift how risky something feels. Understanding these helps explain why two people can look at the same data and reach opposite conclusions about whether they should be worried.
- Perceived benefit: When people believe a technology or activity benefits them personally or benefits society, they rate it as less risky. Research on nuclear power found that people who saw it as beneficial perceived significantly less danger than those who didn’t.
- Perceived cost: The higher someone estimates the personal or societal cost of something going wrong, the more dangerous it feels, even if the probability hasn’t changed.
- Knowledge and familiarity: The more someone understands about how a risk actually works, the lower their perception of danger tends to be. Unfamiliar risks feel scarier than familiar ones.
- Trust in institutions: When people trust the organization managing a risk, their sense of danger drops. When trust is broken, risk perception climbs regardless of what the data says.
- Social influence: People are heavily shaped by what their social networks think. If friends, family, or community leaders treat a risk as serious, individuals are more likely to do the same.
- Personality and openness: People with more innovative, novelty-seeking personalities tend to perceive new technologies as less threatening and are quicker to adopt them.
Demographics play a role too. Younger people generally perceive emerging technologies as less risky. Gender can moderate the relationship between perceived benefit and risk, with some studies finding the benefit-reduces-risk effect weakens in male-dominated samples. Cultural context matters as well: in societies where people defer more to authority, government endorsements carry more weight in shaping how a risk is perceived.
The Six Principles of Effective Risk Communication
The CDC’s Crisis and Emergency Risk Communication (CERC) framework lays out six principles that guide how organizations should communicate during emergencies. These aren’t abstract ideals. They reflect decades of real-world lessons from botched and successful public health responses.
Be first. In a crisis, the first source of information often becomes the preferred source. Delays create a vacuum that rumors and misinformation fill. Speed matters more than perfection in the early hours.
Be right. Accuracy builds credibility. This doesn’t mean waiting until every detail is confirmed. It means clearly stating what you know, what you don’t know, and what you’re doing to find out. Admitting uncertainty honestly is more credible than projecting false confidence.
Be credible. Honesty cannot be compromised during a crisis. One misleading statement can destroy trust that took years to build, and trust is nearly impossible to rebuild once lost.
Express empathy. People who are scared or suffering need to hear that their feelings are acknowledged before they can absorb practical advice. Skipping straight to data without recognizing harm makes communicators seem detached and untrustworthy.
Promote action. Giving people concrete, meaningful things to do reduces anxiety and restores a sense of control. Telling someone to “stay informed” is vague. Telling them to fill a prescription, check on an elderly neighbor, or seal their windows is specific and calming.
Show respect. People who feel vulnerable are especially sensitive to being talked down to or dismissed. Respectful communication promotes cooperation. Condescending communication breeds resistance.
Why Trust Is the Foundation
No risk message works if the audience doesn’t trust the messenger. Research on trust in risk communication identifies two distinct components that both need to be present. The first is trust in competence: does this person or organization actually know what they’re talking about? People judge competence based on track record. If past advice from a source turned out to be accurate and useful, trust in competence is high.
The second is trust in motives: does this person or organization share my values and have my interests at heart? People assess motives by comparing their own values against what they perceive the communicator’s values to be. A pharmaceutical company may have enormous technical competence, but if the public believes profit is the primary motive, trust in motives collapses, and the message fails regardless of how accurate it is.
This two-factor model explains a common frustration for experts. Being technically right is not enough. A local community leader with no scientific training can sometimes be more persuasive than a credentialed researcher, simply because the community trusts that leader’s motives. Effective risk communication strategies work with trusted voices rather than trying to override them.
How Messages Are Built for High-Stress Audiences
One of the most practical tools in risk communication is message mapping, a structured technique developed by the U.S. Environmental Protection Agency. The process starts by identifying every group that has a stake in the situation: the general public, affected communities, media, elected officials, industry groups. For each group, communicators list the specific questions and concerns they’re likely to have, then cluster those concerns into common themes.
The core rule of message mapping is the “rule of three”: each issue gets no more than three key messages, and each key message is backed by supporting facts. The entire package is tested first with subject matter experts for accuracy, then with focus groups that represent the actual target audience. Finally, an overarching message map pulls the organization’s most important points together into the statement that opens a press conference or public announcement.
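The structure of a message map is simple enough to sketch as a data structure. The field names below are illustrative, not taken from the EPA's own templates; the sketch just enforces the rule of three mechanically.

```python
# A minimal sketch of a message map, enforcing the "rule of three":
# at most three key messages per stakeholder concern, each backed by
# supporting facts. Field names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class KeyMessage:
    text: str
    supporting_facts: list  # each key message is backed by facts

@dataclass
class MessageMap:
    stakeholder: str
    concern: str
    key_messages: list = field(default_factory=list)

    def add(self, message: KeyMessage) -> None:
        if len(self.key_messages) >= 3:
            raise ValueError("rule of three: at most three key messages per concern")
        self.key_messages.append(message)

# One map per stakeholder-concern pair, built from the clustered concerns.
m = MessageMap("affected residents", "Is the tap water safe to drink?")
m.add(KeyMessage("Boil tap water for one minute before drinking.",
                 ["Boiling kills the bacteria found in testing."]))
```

In practice an organization would maintain many such maps, one per stakeholder-concern pair, and the overarching map would sit on top of them.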
The constraints on message construction tighten dramatically when people are under stress. The EPA’s guidance calls for messages no longer than 27 words, deliverable in about 9 seconds. Language should be written at a sixth to eighth grade reading level. The reasoning is physiological: under stress, people’s reading comprehension drops by roughly four grade levels from their baseline. Someone who normally reads at an eighth grade level processes information at a fourth grade level when frightened. Long sentences, technical jargon, and complex conditionals don’t just fail to persuade; under stress they aren’t understood at all.
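These constraints are concrete enough to check automatically. The sketch below is a hedged pre-flight check applying the word-count and reading-level limits described above; the grade estimate uses the standard Flesch-Kincaid formula, and the syllable counter is a rough vowel-run heuristic, not a dictionary lookup.

```python
# A sketch of a pre-flight check for crisis messages: at most 27 words,
# reading level at or below eighth grade. The Flesch-Kincaid grade formula
# is standard; the syllable counter is a crude approximation.

import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: runs of vowels, minimum one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

def check_message(text: str, max_words: int = 27, max_grade: float = 8.0) -> dict:
    words = len(re.findall(r"[A-Za-z']+", text))
    grade = flesch_kincaid_grade(text)
    return {"words": words, "grade": round(grade, 1),
            "ok": words <= max_words and grade <= max_grade}

msg = "Boil tap water for one minute before you drink it. Bottled water is also safe."
print(check_message(msg))
```

A check like this catches the obvious failures (a 60-word paragraph, a sentence full of polysyllabic jargon), but it is no substitute for the expert review and focus-group testing the message-mapping process requires.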
Risk Communication in a Social Media Environment
Social media has fundamentally changed the speed and complexity of risk communication. Information now spreads in minutes, and so does misinformation. Health agencies distinguish between three types of false information: misinformation (wrong but shared without malicious intent), disinformation (deliberately created and spread to deceive), and malinformation (based on real facts but stripped of context to mislead).
The term “infodemic” describes what happens when too much information, especially misleading information, floods the public during a health crisis. The COVID-19 pandemic made this concept impossible to ignore, as vaccine misinformation spread across platforms and directly contributed to hesitancy. Current guidance for health organizations emphasizes consistency across channels and genuine two-way engagement rather than simply broadcasting corrections. Partnering with trusted voices, including social media influencers who already have credibility with specific audiences, has become a standard recommendation for reaching communities that distrust official institutions.
The CERC principle of “be first” is even more critical in this environment. A vacuum of official information that might have lasted hours in the pre-internet era now lasts minutes before speculation fills it. Organizations that monitor social media in real time and respond quickly with clear, honest messaging have a measurable advantage over those that wait for formal approval processes to run their course.
Measuring Whether It Worked
Evaluating risk communication remains one of the field’s persistent challenges. Unlike marketing, where you can track clicks and conversions, the goal of risk communication is often behavioral: did people evacuate, did they get vaccinated, did they stop eating the contaminated food? The U.S. Department of Health and Human Services has identified the core elements of effective evaluation as tracking how messages moved through media, monitoring which audiences were actually exposed to them, and surveying those audiences about what they understood and what they did as a result.
Building this evaluation capacity before a crisis hits is essential. Organizations that set up message tracking, media monitoring, and audience surveys as part of their preparedness planning can identify what worked and what didn’t in near-real time, adjusting their approach while the crisis is still unfolding rather than only in retrospect.