What Is PDSA in Quality Improvement and How Does It Work?

PDSA stands for Plan-Do-Study-Act, a four-stage cycle used to test and refine changes in a structured, repeatable way. It is the most widely used framework in healthcare quality improvement, though it originated in manufacturing. The core idea is simple: instead of rolling out a big change all at once, you test a small change, learn from the results, and adjust before trying again.

The Institute for Healthcare Improvement (IHI) positions PDSA as the engine of its broader Model for Improvement, which pairs the cycle with three guiding questions: What are we trying to accomplish? How will we know that a change is an improvement? What change can we make that will result in improvement? Those questions set the direction. PDSA cycles do the actual work of figuring out whether a change delivers.

Where PDSA Came From

The cycle traces back to Walter Shewhart at Bell Laboratories, who developed an early version focused on manufacturing processes. His student, W. Edwards Deming, refined and popularized it. Deming deliberately chose the word “Study” over “Check,” which appears in the older PDCA (Plan-Do-Check-Act) variant. The distinction matters: “Check” implies a pass/fail judgment on whether a plan worked. “Study” emphasizes learning. Deming wanted teams to predict what would happen, observe what actually happened, and compare the two so they could revise their theory of how the process works. That learning orientation is what makes PDSA function more like a scientific method than a simple checklist.

The Four Stages

Plan

In the Plan stage, you define what you’re testing and what you expect to happen. This means stating a specific, testable question, deciding what data you’ll collect and who will collect it, and predicting the outcome. The prediction is important because it forces the team to articulate their assumptions. For example, a clinic might hypothesize that a reminder phone call will increase the return rate of home screening kits. The plan should also document the who, what, where, when, and how of data collection so that the results are meaningful later.
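To make those ingredients concrete, here is a minimal sketch of a Plan-stage worksheet captured as a structured record. The field names and the example values, including the predicted return rates, are hypothetical, filled in from the clinic scenario above. Real teams typically use a paper or spreadsheet template rather than code, so treat this as an illustration of what a complete plan records, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class PDSAPlan:
    """Illustrative Plan-stage record; field names are hypothetical."""
    question: str    # the specific, testable question
    prediction: str  # expected result, stated before the test runs
    measure: str     # what data will be collected
    collector: str   # who collects it
    logistics: str   # the where, when, and how of collection

# Filled in from the clinic scenario above; the rates are invented examples.
plan = PDSAPlan(
    question="Does a reminder phone call increase return of home screening kits?",
    prediction="Return rate rises from about 40% to 55% within two weeks.",
    measure="Kit return rate among called patients vs. current baseline",
    collector="Front-desk staff member on the pilot shift",
    logistics="One clinic, weekday mornings, tallied daily on a paper log",
)
print(plan.prediction)
```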

Do

The Do stage is where you carry out the test, preferably on a small scale. Deming’s original guidance was explicit about keeping the scope narrow. You might test a change with one provider, one shift, or one patient population before expanding. During this stage, you execute the plan exactly as designed, collect the data you committed to collecting, and document any problems or unexpected observations. If something surprising happens during the test, write it down. Those surprises often contain the most useful information.

A practical tip from IHI: avoid letting technical delays stall the cycle. If new software isn’t ready, track measurements with paper and pencil. The point is to keep learning, not to build a perfect system on the first pass.

Study

This is the stage Deming cared most about. You compare what actually happened against what you predicted. The goal is not to assign a binary pass or fail. Even when results don’t meet your target, the change may still show improvement over baseline. One quality improvement team, for instance, found that reallocating a quarter of a staff member’s time to a different workstation improved their success rate for meeting a turnaround-time target from 51% to 69%. That didn’t hit their goal, but it revealed a meaningful trend worth building on.

Run charts, which plot data points over time, are the most commonly used visual tool in this stage. They make it easy to spot trends and compare results across successive cycles. Control charts (for detecting whether a process is stable) and Pareto charts (for identifying the most frequent problems) are also useful depending on what you’re measuring. The key is to look for patterns, not just outcomes.
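To show what a run chart involves, the sketch below plots a made-up weekly series with a median reference line, the usual baseline for spotting shifts and trends. The data points are invented; only the chart structure is the point.

```python
import matplotlib.pyplot as plt
from statistics import median

# Invented weekly percentages spanning several PDSA cycles, for illustration only
weeks = list(range(1, 13))
on_time_pct = [48, 51, 50, 53, 55, 58, 57, 61, 63, 62, 66, 69]

plt.plot(weeks, on_time_pct, marker="o")
plt.axhline(median(on_time_pct), linestyle="--",
            label=f"median = {median(on_time_pct)}%")
plt.xlabel("Week")
plt.ylabel("% meeting turnaround-time target")
plt.title("Run chart across successive PDSA cycles (illustrative data)")
plt.legend()
plt.show()
```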

Act

Based on what you learned, you make one of three decisions: adopt the change, adapt it, or abandon it. Adopting means the change worked well enough to implement more broadly. Adapting means the change showed promise but needs modification, which sends you into another PDSA cycle with a revised plan. Abandoning means the change didn’t work and you need a different approach entirely. In practice, most cycles lead to adaptation rather than wholesale adoption or abandonment.
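The decision itself is a team judgment, but its branching logic can be sketched as a toy function. The 80% target below is an invented placeholder, since the turnaround-time example gives only the 51% baseline and the 69% result, so read this as a schematic of adopt/adapt/abandon, not a real decision rule.

```python
def act_decision(baseline: float, target: float, observed: float) -> str:
    """Toy adopt/adapt/abandon branching. Real teams decide by judgment
    and discussion; the comparisons here are only a schematic."""
    if observed >= target:
        return "adopt: target met, spread the change more broadly"
    if observed > baseline:
        return "adapt: better than baseline but short of target, revise and re-test"
    return "abandon: no improvement, try a different change"

# Using the turnaround-time example above; the 80% target is an assumed
# placeholder, since the source gives only the 51% baseline and 69% result.
print(act_decision(baseline=0.51, target=0.80, observed=0.69))
# adapt: better than baseline but short of target, revise and re-test
```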

Why It Works in Cycles

A single PDSA cycle rarely solves a complex problem. The method is designed to be iterative. You start small, learn something, adjust, and test again. Each cycle builds knowledge that the previous one didn’t have. A team might run three or four cycles over a few weeks, each one refining the intervention based on what the last cycle revealed. This is sometimes called “rapid-cycle testing,” and it’s the reason PDSA can adapt to messy, real-world environments where conditions change and people behave unpredictably.
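The loop structure of rapid-cycle testing can be summarized in schematic Python. Everything here is illustrative: run_test is a stand-in for the real Do and Study work, and the simulated numbers exist only so the example runs end to end.

```python
def run_test(plan: str, cycle: int) -> tuple[float, str]:
    """Stand-in for the real Do/Study work. Here it just simulates a
    result that improves a little each cycle (invented numbers)."""
    return 0.50 + 0.06 * cycle, f"lesson learned in cycle {cycle}"

def rapid_cycles(initial_plan: str, target: float, max_cycles: int = 4) -> str:
    plan = initial_plan
    for cycle in range(1, max_cycles + 1):
        result, lesson = run_test(plan, cycle)     # Do + Study
        if result >= target:                       # Act: adopt
            return f"cycle {cycle}: adopt ({result:.0%} met target)"
        plan = f"{plan}; revised using: {lesson}"  # Act: adapt, re-enter Plan
    return "target never met: abandon and try a different change"

print(rapid_cycles("reminder phone call", target=0.70))
# cycle 4: adopt (74% met target)
```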

The sequential nature also lowers risk. Testing a change with five patients before rolling it out to five hundred means failures are small and reversible. By the time you scale up, you’ve already worked out most of the problems.

PDSA vs. PDCA

You’ll see both acronyms used, sometimes interchangeably. They share the same structure, but Deming drew a clear line between them. PDCA’s “Check” step asks whether the plan succeeded. PDSA’s “Study” step asks what you learned and why. In PDCA, a failed plan leads to corrections. In PDSA, a failed prediction leads to revised understanding. The practical difference is that PDSA encourages teams to update their mental model of how a process works, not just fix what went wrong. Deming felt strongly enough about this distinction that he spent years publicly clarifying it.

In Lean Six Sigma and manufacturing, PDCA remains common. In healthcare quality improvement, PDSA is the standard, largely because of IHI’s influence.

Common Pitfalls

Despite its simplicity, PDSA is frequently done poorly. A systematic review published in BMJ Quality & Safety found widespread issues with how teams apply the method in healthcare settings. The most common problems include skipping the prediction step in the Plan phase, running only a single cycle instead of iterating, and failing to collect or analyze data during the Study phase.

Time pressure is a recurring barrier. Teams report that quality improvement work often gets done “off the side of their desks,” making it difficult to commit to rigorous data collection and analysis. Without data, teams fall back on gut instincts, which defeats the purpose of using a structured method. Another frequent issue is losing sight of the original aim when scaling up. A change that worked well in a pilot can lose its impact when rolled out across multiple sites if the team hasn’t documented what made it work in the first place.

PDSA can also oversimplify genuinely complex problems. When the issue involves multiple interacting systems, vendor relationships, or conflicting organizational priorities, the neat four-step cycle may need to be supplemented with additional tools and more extensive stakeholder engagement. The cycle works best when teams have adequate mentorship, protected time, and a clear data collection plan before they begin.