The PRECEDE-PROCEED model is a structured framework used in public health to plan, implement, and evaluate health programs. Proposed by Lawrence Green and Marshall Kreuter, it guides practitioners through a series of assessments that start with the end goal and work backward to determine what needs to change, why, and how. The model is split into two halves: PRECEDE (Predisposing, Reinforcing, and Enabling Constructs in Educational Diagnosis and Evaluation) covers planning through four phases of diagnosis and assessment, while PROCEED (Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development) covers execution and evaluation through four phases of implementation and measurement.
What makes this model distinctive is its logic. Instead of starting with a program idea and hoping it works, you start by identifying the quality-of-life outcomes a community actually wants, then trace backward through the health problems, behaviors, environments, and root causes that stand in the way. Only after that diagnostic work do you design an intervention. The PROCEED phases then test whether that intervention did what it was supposed to do.
How the Backward Logic Works
Most people approach a health problem by jumping straight to a solution: launch a campaign, distribute materials, run a workshop. The PRECEDE-PROCEED model forces you to slow down and ask a chain of “why” questions before you commit to any action. You begin at the desired outcome (better health, better quality of life) and work backward through layers of contributing factors until you arrive at the specific levers your program can realistically pull.
Think of it like diagnosing a car that won’t start. You don’t immediately replace the engine. You check the battery, the starter, the fuel line, working backward from the symptom to the cause. PRECEDE does the same thing for health problems: it maps the chain from quality of life all the way down to the knowledge gaps, missing resources, and policy barriers that a well-designed program can address.
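The backward chain described above can be made concrete as a small data structure. The following is a minimal sketch, not part of the model itself; the layer questions and the asthma-related findings are illustrative, drawn from the example used later in this article.

```python
from dataclasses import dataclass

@dataclass
class DiagnosticLayer:
    """One link in the backward chain from desired outcome to root cause."""
    question: str
    findings: list[str]

# Hypothetical example: start at the community's desired outcome and
# work backward through contributing factors, one "why" at a time.
chain = [
    DiagnosticLayer("What does the community want?",
                    ["Fewer missed school days"]),
    DiagnosticLayer("Which health problem stands in the way?",
                    ["Untreated childhood asthma"]),
    DiagnosticLayer("Which behaviors and environments drive it?",
                    ["Inconsistent preventive medication use",
                     "Mold exposure in housing"]),
    DiagnosticLayer("Which modifiable factors can a program target?",
                    ["Knowledge of daily dosing",
                     "Medication cost and access",
                     "Provider follow-up"]),
]

# Walking the chain in order reproduces the PRECEDE "why" questions.
for layer in chain:
    print(layer.question, "->", "; ".join(layer.findings))
```

Only the last layer contains levers a program can realistically pull; the earlier layers exist to justify why those levers matter.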
The Four PRECEDE Phases
Phase 1: Social Assessment
The first phase asks a deceptively simple question: what do the people in this community actually want? Rather than assuming you know the problem, you engage with the population to identify their priorities and their vision of a good quality of life. This might involve surveys, focus groups, or conversations with community leaders. The goal is to define the ultimate desired result from the community’s own perspective, not from an outside expert’s assumptions.
Phase 2: Epidemiological, Behavioral, and Environmental Assessment
Once you know the desired outcome, Phase 2 identifies the health issues and conditions standing in the way. You examine epidemiological data (disease rates, mortality, morbidity), then pinpoint the specific behaviors and environmental factors linked to those problems. For example, if a community’s top concern is children missing school, Phase 2 might reveal that untreated asthma is a primary driver, that families aren’t using preventive medications consistently (a behavioral factor), and that housing conditions include mold exposure (an environmental factor). This phase produces your program’s mediating outcomes: the things you need to change in order to achieve the bigger goal identified in Phase 1.
Phase 3: Educational and Ecological Assessment
This is where the model gets granular. Phase 3 asks: what factors, if modified, would most likely produce and sustain the behavior changes identified in Phase 2? It sorts these factors into three categories:
- Predisposing factors are the internal conditions that exist before a behavior happens. These include knowledge, beliefs, values, attitudes, and self-efficacy. A person who doesn’t know that preventive asthma medication needs to be taken daily, or who doesn’t believe it works, has a predisposing barrier.
- Enabling factors are the external conditions that make a behavior possible once someone is motivated. These include access to resources, new skills, available services, and programs. If the medication is too expensive or the nearest pharmacy is 30 miles away, that’s an enabling barrier.
- Reinforcing factors are the rewards and social responses that come after a behavior, making it more likely to continue. Social support from family, praise from a healthcare provider, or visible improvement in a child’s health all reinforce the decision to keep using the medication.
This three-part breakdown is one of the model’s most practical contributions. It prevents programs from relying solely on education (predisposing factors) while ignoring access problems (enabling factors) or the social environment (reinforcing factors). A well-designed intervention targets all three.
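The checklist quality of this breakdown can be sketched in code. This is a hypothetical illustration, reusing the asthma barriers from the bullets above; the category descriptions paraphrase the definitions in this section.

```python
from enum import Enum

class Factor(Enum):
    PREDISPOSING = "before the behavior: knowledge, beliefs, self-efficacy"
    ENABLING = "makes the behavior possible: access, skills, services"
    REINFORCING = "after the behavior: rewards and social responses"

# Hypothetical barriers from the asthma example, tagged by category.
barriers = {
    "Doesn't know the medication must be taken daily": Factor.PREDISPOSING,
    "Doesn't believe the medication works": Factor.PREDISPOSING,
    "Nearest pharmacy is 30 miles away": Factor.ENABLING,
    "Medication is too expensive": Factor.ENABLING,
    "No praise or follow-up from the provider": Factor.REINFORCING,
}

# Completeness check: a well-designed intervention should target all
# three categories, not just the educational (predisposing) one.
missing = set(Factor) - set(barriers.values())
if missing:
    print("Unaddressed categories:", [f.name for f in missing])
else:
    print("All three factor categories are addressed.")
```

Dropping the reinforcing barrier from the dictionary would trip the check, which is exactly the failure mode the model is designed to catch: an education-only program that changes what people know but not what happens around them.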
Phase 4: Administrative and Policy Assessment
Phase 4 shifts from “what should change” to “what can we actually do given our constraints.” Here you assess the organizational capacity, budget, staffing, timeline, and existing policies that will shape your intervention. You also identify policy changes that might be needed to support the program. A school-based asthma program, for instance, might require a policy allowing nurses to administer medication on-site. This phase is the bridge between planning and action, aligning the ideal intervention with real-world limitations.
The Four PROCEED Phases
Once the intervention launches, PROCEED provides a structured way to evaluate whether it’s working, and at what level.
Phase 5: Implementation
This is the actual rollout of the program designed through the PRECEDE phases. The emphasis here is on fidelity: is the program being delivered as planned? Data collection systems should already be in place before the program starts, so you can track what’s happening from day one.
Phase 6: Process Evaluation
Process evaluation examines the mechanics of delivery. Did the intended audience actually participate? Were materials distributed on schedule? Did staff follow the protocol? This phase catches problems early. If attendance at a workshop series drops by half after the second session, process evaluation flags that before you waste months wondering why the program didn’t produce results.
Phase 7: Impact Evaluation
Impact evaluation measures whether the predisposing, enabling, and reinforcing factors from Phase 3 actually changed, along with the behavioral and environmental factors from Phase 2. Did participants’ knowledge improve? Did they gain access to new resources? Did the target behavior shift? This is the intermediate check: the program may not have changed long-term health outcomes yet, but if it changed the factors known to drive those outcomes, it’s on the right track.
Phase 8: Outcome Evaluation
Outcome evaluation looks at the big picture. Did the health indicators identified in Phase 2 improve? Did the quality-of-life goals from Phase 1 move in the right direction? This typically requires the longest follow-up period, since changes in disease rates and community well-being take time to materialize. A diabetes prevention program, for example, might show behavior change (impact) within six months but need two or three years to demonstrate reductions in new diagnoses (outcome).
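The three evaluation levels form a sequence, each reporting on a longer horizon than the last. The sketch below is illustrative only: the follow-up durations are hypothetical placeholders echoing the diabetes-prevention example, not prescriptions from the model.

```python
from dataclasses import dataclass

@dataclass
class EvaluationLevel:
    phase: int
    name: str
    measures: str
    horizon_months: int  # illustrative follow-up period, not prescriptive

# PROCEED's three evaluation levels, ordered by reporting horizon.
levels = [
    EvaluationLevel(6, "Process",
                    "delivery fidelity, participation, protocol adherence", 1),
    EvaluationLevel(7, "Impact",
                    "predisposing/enabling/reinforcing factors, behavior", 6),
    EvaluationLevel(8, "Outcome",
                    "health indicators, quality of life", 24),
]

# The sequencing matters: each level should report later than the
# one before it, so early signals arrive before long-term verdicts.
for earlier, later in zip(levels, levels[1:]):
    assert earlier.horizon_months < later.horizon_months
```

Reading the list top to bottom mirrors how a program manager would actually see results arrive: delivery problems first, factor and behavior shifts next, and changes in disease rates last.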
Why It’s Widely Used
The PRECEDE-PROCEED model has been applied across a broad range of public health contexts, from oral health strategies for people with disabilities to chronic disease management and health technology adoption. A systematic review in the Iranian Journal of Public Health confirmed its continued global use for planning, implementing, and evaluating health interventions.
Its staying power comes from a few practical strengths. First, the backward-planning logic forces program designers to justify every component of their intervention with evidence, rather than defaulting to familiar approaches. Second, the three-category breakdown of predisposing, enabling, and reinforcing factors gives teams a concrete checklist for ensuring their program addresses behavior change from multiple angles. Third, the built-in evaluation phases mean that measurement isn’t an afterthought bolted on at the end. It’s baked into the framework from the start.
The model does require significant time and resources for the diagnostic phases, which can be a barrier for organizations under pressure to launch programs quickly. But the trade-off is that programs designed with this level of rigor are far less likely to target the wrong problem or miss a critical barrier, saving time and money in the long run.

