What Is Process Evaluation in Public Health?

Process evaluation is a type of program evaluation that examines whether a public health intervention is being delivered as planned. Rather than asking “Did the program work?” it asks “Is the program actually running the way it’s supposed to?” This distinction matters enormously, because a program that looks like a failure might simply have never been implemented correctly in the first place.

How Process Evaluation Differs From Other Evaluations

Public health programs typically face three big evaluation questions, each answered by a different type of evaluation. Process evaluation asks whether the program is operating as expected. Outcome evaluation asks whether the program is producing the results it aimed for. Impact evaluation asks whether those results are better than what would have happened without the program at all.

These evaluations also happen at different stages of a program’s life. Process evaluation comes first, often beginning during or even before implementation. Outcome evaluation follows once the program is mature enough to measure results. Impact evaluation requires the most mature program and the most rigorous study design, since it tries to isolate the program’s effect from everything else happening in participants’ lives.

The practical difference is straightforward. If a youth mentoring program aims to reduce foster care stays, a process evaluation would ask: “Did we actually serve the population we intended to serve? Why or why not?” An outcome evaluation would ask: “Did participants leave foster care within two years?” An impact evaluation would ask: “Did our program reduce time in foster care compared to what would have happened without it?”

Why It Prevents a Common Evaluation Mistake

One of the most important functions of process evaluation is preventing what researchers call a Type III error: rigorously answering the wrong question, typically by evaluating the outcomes of a program that was never adequately implemented. Imagine spending years and significant funding to determine whether a nutrition program improved health outcomes, only to conclude it didn’t work. Without process evaluation, you might scrap the entire approach. But what if the program was never fully delivered? What if half the sessions were skipped, the target population never showed up, or staff deviated from the curriculum?

Process evaluation catches these problems in real time. It tells you whether a negative result means the program concept failed or whether the implementation failed. That distinction can save effective programs from being abandoned and redirect resources toward fixing delivery rather than redesigning from scratch.

The Five Core Dimensions

Process evaluation typically measures five core dimensions of implementation.

  • Fidelity: The extent to which the program was delivered according to its original design. This includes whether staff followed the protocol, used the right materials, and maintained the intended structure. If a smoking cessation program calls for six counseling sessions with motivational interviewing techniques, fidelity measures whether facilitators actually used those techniques in all six sessions.
  • Dose delivered: The amount of program content that was actually provided to participants. A workplace health program might plan for 12 hours of face-to-face contact over 12 months, broken into health assessments and education sessions. Dose delivered tracks whether all of that content was offered.
  • Dose received: What participants actually absorbed or engaged with. This is the flip side of dose delivered. In one trucking industry health study, for example, participants received an average of about 20 text messages each but responded to fewer than 4, yielding a response rate under 19%. The program delivered the content, but engagement was low.
  • Reach: The proportion of the intended audience that actually participated. This includes how many people enrolled, how many completed each phase, and whether the program attracted the specific population it was designed for.
  • Recruitment: The strategies used to attract and retain participants, and how well those strategies worked across different sites or populations.

A newer framework adds two more dimensions: quality of delivery (how well facilitators conveyed the content, beyond just following the script) and program differentiation (whether the program was distinct enough from other services participants might have received to attribute any changes to it specifically).
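
To make these dimensions concrete, here is a minimal sketch, in Python, of how the first three might be computed from program records. All of the counts and variable names are hypothetical, invented for illustration; a real evaluation would pull them from enrollment data, attendance logs, and messaging platforms.

    # Minimal sketch of three core process indicators.
    # All counts below are hypothetical, for illustration only.
    eligible_population = 500   # people the program intended to reach
    enrolled = 320              # people who actually participated
    sessions_planned = 6        # per-participant sessions in the protocol
    sessions_offered = 5        # sessions the program actually ran
    messages_sent = 20          # e.g., text messages delivered per participant
    messages_answered = 4       # messages the participant engaged with

    reach = enrolled / eligible_population                 # 64%
    dose_delivered = sessions_offered / sessions_planned   # 83%
    dose_received = messages_answered / messages_sent      # 20%

    print(f"Reach: {reach:.0%}, dose delivered: {dose_delivered:.0%}, "
          f"dose received: {dose_received:.0%}")

Fidelity and recruitment resist this kind of simple counting, which is why they lean on observation and qualitative methods, described in the next section.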

How Data Is Collected

Process evaluation draws on both quantitative and qualitative methods, often combining several at once.

On the quantitative side, the workhorses are attendance records, participation logs, and program records. These generate the counts that form your output indicators: number of sessions delivered, number of people served, participation rates. A logic model, which is a visual map of how a program’s resources and activities are supposed to lead to results, helps evaluators identify exactly which outputs to track. Typical outputs include the number of activities completed, new materials developed, and the number of children, families, or staff involved.

Qualitative methods fill in what numbers cannot. Interviews with staff and participants reveal how the program was experienced, what barriers people encountered, and why certain components worked or didn’t. Focus groups, direct observation of program sessions, and document review are all common. Observation is particularly valuable for measuring fidelity, since self-reported checklists completed by program staff can overestimate how closely they followed the protocol.

Fidelity measurement itself can be direct (someone watches the session and scores it against a checklist) or indirect (facilitators fill out their own reports). Direct observation is more accurate but harder to scale, so many community-based programs rely on a combination. There is no single standardized fidelity tool that works across all public health programs. Most evaluators develop custom measures tailored to their program’s core components, guided by established frameworks for what fidelity should capture.
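
As a rough sketch of what direct fidelity scoring can look like: an observer marks each core component of the protocol as delivered or not, and the session’s score is the share delivered. The checklist items below are invented for a hypothetical smoking cessation session; real instruments are tailored to each program’s core components.

    # Hypothetical direct-observation fidelity checklist for one session.
    PROTOCOL_COMPONENTS = [
        "used motivational interviewing techniques",
        "covered all curriculum topics for the session",
        "used the approved participant materials",
        "kept to the intended session length",
    ]

    def fidelity_score(observed: dict) -> float:
        """Share of core components the observer saw delivered (0.0-1.0)."""
        delivered = sum(observed.get(c, False) for c in PROTOCOL_COMPONENTS)
        return delivered / len(PROTOCOL_COMPONENTS)

    session = {
        "used motivational interviewing techniques": True,
        "covered all curriculum topics for the session": True,
        "used the approved participant materials": False,
        "kept to the intended session length": True,
    }
    print(f"Session fidelity: {fidelity_score(session):.0%}")  # 75%

A program relying on a combination of methods might compare an observer’s score like this against the facilitator’s own self-report to gauge how much the self-reports overestimate fidelity.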

When Process Evaluation Happens

The CDC’s framework for program evaluation identifies three stages where evaluation plays a role: planning, implementation, and effects. Process evaluation is most closely associated with implementation, but starting during the planning phase makes it far more effective.

During planning, evaluation activities help refine the program design before anything is delivered. You identify your stakeholders, describe the program clearly, decide which evaluation questions matter most, and determine what evidence you’ll need to collect. During implementation, evaluation shifts to characterizing what’s actually happening on the ground versus what was planned, and using that information to improve operations in real time.

The key principle is that evaluation should not be saved for the end of a program’s funding cycle. Programs that build process evaluation into their operations from day one can course-correct as problems emerge rather than discovering delivery failures after the money has been spent.

What It Looks Like in Practice

A national process evaluation of COVID-19 vaccination centers in Lebanon illustrates how granular this work can get. Evaluators visited 33 vaccination sites and assessed each step of the immunization process against international standards, generating specific compliance indicators for vaccine transportation, storage, preparation, and administration.

The findings were highly actionable. Vaccine transportation scored 100% compliance across all sites. But storage revealed problems: 9% of centers stored vaccines outside the recommended temperature range, 27% monitored refrigerator temperatures only once daily instead of continuously, and 45% stored other items in the same refrigerator as vaccine vials. During preparation, 30% of centers did not verify vial labels and expiration dates before drawing doses, and nearly 40% did not document the time of vaccine preparation. In pre-vaccination areas, 39% of centers had congested waiting rooms where physical distancing was not maintained.

None of these findings tell you whether the vaccines “worked.” That’s an outcome question. But they tell you exactly where the delivery chain is breaking down and where to focus quality improvement, which is precisely the point of process evaluation.
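
The arithmetic behind compliance indicators like these is straightforward: for each standard, count the sites that met it and divide by the sites assessed. A small sketch, with per-site results invented to mirror the percentages reported above:

    # Hypothetical per-site audit results (True = site met the standard),
    # invented to mirror the Lebanon figures reported above.
    site_results = {
        "transport cold chain maintained":          [True] * 33,
        "vaccines stored within temperature range": [True] * 30 + [False] * 3,
        "no other items stored with vaccine vials": [True] * 18 + [False] * 15,
    }

    for standard, results in site_results.items():
        compliance = sum(results) / len(results)
        print(f"{standard}: {compliance:.0%} of {len(results)} sites compliant")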

Process Evaluation for Digital Health Tools

As public health increasingly delivers interventions through apps, websites, and telehealth platforms, process evaluation has expanded to include digital-specific metrics. These include actual system usage data (how often people log in, which features they use, how long they stay), task completion rates, user experience scores, and technical performance measures like load times and error rates.

The challenge is that there is no consensus yet on standard methods or indicators for evaluating digital health interventions. Evaluators use a wide variety of approaches, from analyzing system logs to conducting think-aloud sessions where users narrate their experience navigating a tool. The variety of methods reflects the reality that digital tools create new dimensions of “dose received” and “fidelity” that traditional frameworks were not designed to capture. Whether someone opens an app is not the same as whether they meaningfully engage with its content.
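
To illustrate, the sketch below derives a few of these engagement metrics from a toy event log. The event names and log format are invented; real platforms would export comparable data from their own analytics backends.

    from datetime import datetime

    # Hypothetical app event log: (user_id, event, timestamp).
    events = [
        ("u1", "login",          datetime(2024, 5, 1, 9, 0)),
        ("u1", "task_started",   datetime(2024, 5, 1, 9, 2)),
        ("u1", "task_completed", datetime(2024, 5, 1, 9, 10)),
        ("u2", "login",          datetime(2024, 5, 1, 14, 0)),
        ("u2", "task_started",   datetime(2024, 5, 1, 14, 3)),
    ]

    active_users = len({u for u, e, _ in events if e == "login"})
    started = sum(1 for _, e, _ in events if e == "task_started")
    completed = sum(1 for _, e, _ in events if e == "task_completed")

    print(f"Active users: {active_users}")                     # 2
    print(f"Task completion rate: {completed / started:.0%}")  # 50%

Even simple metrics like these separate delivery (the app was opened) from engagement (tasks were actually completed), which is the digital analogue of dose delivered versus dose received.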