The PDSA cycle is a four-stage method for testing and refining changes in a structured, low-risk way. It stands for Plan, Do, Study, Act, and it works by running small experiments, learning from the results, and then deciding whether to adopt, adjust, or abandon a change before scaling it up. The framework grew out of the improvement cycle Walter Shewhart developed at Bell Labs and described in 1939, which W. Edwards Deming later refined into the PDSA form. It mirrors the scientific method: form a hypothesis, test it, examine the results, and act on what you learned.
The Four Stages of a PDSA Cycle
Each stage has a specific purpose, and skipping or rushing any one of them undermines the whole process.
Plan: Identify what you’re trying to improve and predict what will happen if you make a specific change. This means defining your aim, choosing what you’ll measure, and mapping out who will be involved, what resources are needed, how long the test will run, and what data you’ll collect. The prediction matters because it gives you something concrete to compare your results against later.
Do: Carry out the change on a small scale. The emphasis here is on “small.” You might test with one staff member, one patient group, or one shift. During this phase, you document everything: what actually happened, any problems that came up, and anything unexpected. This isn’t the time to roll out a sweeping organizational change.
Study: Analyze the data you collected and compare it to your prediction. Did the change produce the outcome you expected? Were there unintended consequences, surprises, or failures? This is where most of the learning happens, and it’s the stage teams most often shortchange.
Act: Based on what you learned, choose one of three paths. Adopt the change and begin expanding it. Adapt it by modifying your approach and running another cycle. Or abandon it entirely and try a different idea. The Act phase always points forward to the next cycle.
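To make the four stages concrete, here is a minimal sketch of one cycle as a structured record, written in Python. The field names and the example values (a hypothetical discharge-call test) are illustrative assumptions, not part of any standard PDSA tooling.

```python
from dataclasses import dataclass, field

@dataclass
class PDSACycle:
    """One pass through Plan, Do, Study, Act for a single small test."""
    # Plan: the aim, the specific change, and a concrete prediction to compare against.
    aim: str
    change: str
    prediction: str
    # Do: what actually happened during the small-scale test.
    observations: list[str] = field(default_factory=list)
    # Study: how the measured result compared to the prediction.
    result: str = ""
    # Act: "adopt", "adapt", or "abandon" -- the three paths forward.
    decision: str = "undecided"

# A hypothetical first cycle for a discharge follow-up test.
cycle_1 = PDSACycle(
    aim="Reduce post-discharge reattendance",
    change="Call 5 patients two days after discharge",
    prediction="At least 3 of 5 calls will surface an issue we can resolve by phone",
)
cycle_1.observations.append("2 patients unreachable; calls took longer than expected")
cycle_1.result = "2 of 5 calls surfaced resolvable issues -- below prediction"
cycle_1.decision = "adapt"  # modify the approach and run another cycle
```

Writing the prediction down before the test, as a required field rather than an afterthought, is what gives the Study stage something concrete to compare against.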
How PDSA Fits Into the Model for Improvement
PDSA cycles don’t operate in a vacuum. They’re the testing engine inside a larger structure called the Model for Improvement, developed by Associates in Process Improvement and popularized by the Institute for Healthcare Improvement. That model asks three questions before any testing begins: What are we trying to accomplish? How will we know that a change is an improvement? What change can we make that will result in improvement? These questions set the aim, define the measures, and generate the change ideas that PDSA cycles then put to the test. Without clear answers to all three, the cycles lack direction.
The Ramp: Scaling From Small to Large
One of the most practical features of PDSA is the “ramp” concept. You start with the smallest possible test, which limits disruption to workflow and keeps the stakes low if the idea fails. If that small test works, it generates proof of concept, which builds organizational consensus and stakeholder buy-in. Each subsequent cycle scales up: more patients, more staff, more units. Early success empowers the improvement team and earns support from colleagues to carry out progressively larger interventions. This stepwise approach is what separates PDSA from top-down mandates that often meet resistance because they skip the evidence-building phase entirely.
Tracking Change With Three Types of Measures
Effective PDSA cycles rely on measurement, and improvement teams typically track 4 to 10 measures across three categories.
- Outcome measures tell you how the system is performing for patients. These connect directly to your aim. Examples include readmission rates, mortality rates, or adverse drug events per 1,000 doses.
- Process measures tell you whether the steps in your system are working as planned. If your goal is better diabetes management, a process measure might be the percentage of patients who had their blood sugar levels checked twice in the past year.
- Balancing measures check whether your improvement in one area is creating problems somewhere else. For instance, if you’re reducing how long patients stay in the hospital, you need to make sure readmission rates aren’t climbing as a result.
Balancing measures are easy to overlook but essential. Improving one metric at the expense of another isn’t real improvement.
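As a rough sketch of what tracking the three categories side by side might look like, here is a small Python example; the measure names, values, and baseline are hypothetical, not drawn from any standard measure set:

```python
# Hypothetical monthly snapshot for a length-of-stay improvement project.
measures = {
    "outcome":   {"avg_length_of_stay_days": 4.2},        # how the system performs for patients
    "process":   {"pct_discharge_checklist_done": 88.0},  # are the planned steps happening?
    "balancing": {"readmission_rate_pct": 11.5},          # is the gain causing harm elsewhere?
}

# A balancing measure only helps if someone actually checks it against a baseline.
BASELINE_READMISSION_PCT = 10.0  # hypothetical pre-change baseline
if measures["balancing"]["readmission_rate_pct"] > BASELINE_READMISSION_PCT:
    print("Warning: shorter stays may be pushing readmissions up -- not real improvement.")
```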
A Real-World Example: Reducing Hospital Readmissions
A urology department used three PDSA cycles to reduce readmissions after surgery. Their aim was specific: reduce emergency department and GP reattendance by 10% and improve patient satisfaction by 10%, both within three months.
In the first cycle, they made 60 follow-up phone calls five days after discharge to collect baseline data and understand the nature of the problem. Patients told them the calls would be most useful between 48 and 72 hours after discharge, so the team adjusted the timing for the next round.
In the second cycle (30 calls), they found that 10% of patients had already been readmitted before the team could even reach them, confirming the urgency of earlier contact. They also discovered that 16% of patients would have gone to the emergency department or their GP without the phone call. When junior doctors reported that manually adding patients to tracking spreadsheets was burdensome, management shifted that task to administrative staff, making the process sustainable.
The result: reattendance rates dropped by 13% and patient satisfaction improved by 19.6%. Both targets were met within the three-month window. Each cycle generated a specific lesson that shaped the next one, which is exactly how PDSA is supposed to work.
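A quick arithmetic check of those reported results against the stated targets; the numbers come straight from the example above, and the snippet just makes the comparison explicit:

```python
# Targets and results reported in the urology example.
targets = {"reattendance_reduction_pct": 10.0, "satisfaction_gain_pct": 10.0}
results = {"reattendance_reduction_pct": 13.0, "satisfaction_gain_pct": 19.6}

for name, target in targets.items():
    verdict = "met" if results[name] >= target else "missed"
    print(f"{name}: {results[name]}% vs. {target}% target -> {verdict}")
```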
How PDSA Differs From Six Sigma
Six Sigma uses a five-phase framework called DMAIC (Define, Measure, Analyze, Improve, Control) to improve existing processes. It tends to work best for complex, data-heavy problems where statistical analysis can identify root causes with precision. PDSA, by contrast, is built for speed and simplicity. Its scope is deliberately small, allowing teams to pivot quickly when something isn’t working. Organizations often use PDSA for rapid, iterative testing of frontline changes, while reserving DMAIC for larger systemic problems that require deeper statistical rigor.
Why PDSA Cycles Fail
The method is straightforward on paper, but several common barriers derail it in practice. The most frequently cited barrier is insufficient time to monitor and study results. Teams get pulled into daily operations and treat the Study phase as optional, which defeats the purpose. One quality improvement participant described it bluntly: research is something they don’t have time for, and when it happens, it’s done “off the side of their desks.”
Lack of data is another major obstacle. When teams don’t have reliable numbers to inform decisions, they fall back on gut instinct and experience, which eliminates the evidence-based advantage PDSA is designed to provide. Conflicting priorities also undermine cycles. In healthcare food environments, for example, teams described constant tension between health benchmarks and financial viability. When the organization’s incentives pull in different directions, improvement efforts stall.
Past negative experiences create a subtler problem. If previous improvement initiatives failed or fizzled out, staff approach new PDSA cycles with skepticism. And scaling too fast, before proof of concept is established, often means the original intent gets lost. As one administrator described it: “We roll it out to the other sites and somehow, we lose sight of what that was all about.”
Documenting Each Cycle
A PDSA worksheet is the standard documentation tool, and filling it out thoroughly is what makes cycles auditable and learnable. The Centers for Medicare & Medicaid Services (CMS) template captures the essentials: your aim, your predicted outcome, the specific change being tested, action steps with responsible persons and timelines, observations during the test, measured results compared to predictions, and a description of what modifications will be made for the next cycle. The worksheet forces discipline. Without it, cycles blur together and teams lose track of what they tested, what they learned, and why they changed course.
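As a loose sketch of those fields in code, here is one way a team might represent a completed worksheet as a plain record. The keys approximate the fields listed above; this is an assumption about structure, not the official CMS form, and the filled-in values echo the urology example.

```python
import json

# Keys approximate the worksheet fields described above -- a sketch, not the official form.
worksheet = {
    "aim": "Reduce post-surgical reattendance by 10% within three months",
    "predicted_outcome": "Earlier calls will catch problems before patients reattend",
    "change_being_tested": "Follow-up call 48-72 hours after discharge",
    "action_steps": [
        {"task": "Add discharged patients to call list",
         "responsible": "administrative staff", "timeline": "daily"},
    ],
    "observations": "",
    "results_vs_prediction": "",
    "next_cycle_modifications": "",
}

# Saving each completed worksheet to disk is one simple way to keep cycles auditable.
with open("pdsa_cycle_3.json", "w") as f:
    json.dump(worksheet, f, indent=2)
```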

