PDSA is a four-stage cycle used to test and improve processes, most commonly in healthcare and manufacturing. The letters stand for Plan, Do, Study, Act. Rather than overhauling an entire system at once, a PDSA cycle encourages you to make one small change, measure what happens, learn from it, and then decide whether to keep, adjust, or abandon the change before trying again.
The method originated in industry through the work of Walter Shewhart and W. Edwards Deming, who formalized iterative improvement into a repeatable framework. It has since become a cornerstone of quality improvement in hospitals, clinics, public health programs, and many other fields.
The Four Stages
Each stage has a distinct purpose, and skipping or rushing through any one of them is the most common reason PDSA efforts fail.
Plan. You start by identifying a specific goal and the change you want to test. This stage also requires making a prediction: what do you think will happen, and why? You decide who will be involved, what data you need to collect, and how long the test will run. The prediction matters because it forces you to articulate your reasoning before you have results, which makes it far easier to learn something useful afterward.
Do. You carry out the test on a small scale. This might mean trying a new workflow with one staff member during a single shift, or piloting a new checklist with one patient. During this stage, you document everything: what actually happened, any problems, and any surprises. The key word is small. A tight scope means you can catch issues early and adjust without wasting resources.
Study. You compare your actual results to the predictions you made in the Plan stage. Did the change produce the outcome you expected? Were there unintended consequences? This is where the real learning happens. You’re not just asking “did it work?” but “why did it work, or why didn’t it?”
Act. Based on what you learned, you choose one of three paths: adapt the change and run another cycle, adopt it and expand to a larger scale, or abandon it entirely and try a different approach. If the change looks promising, you prepare a plan for the next, slightly larger test.
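The adapt/adopt/abandon choice in the Act stage can be sketched as a simple decision rule. This is only an illustration: the function name, the numeric thresholds, and the idea of comparing success rates as fractions are assumptions for the example, not part of the PDSA method itself.

```python
def act_decision(predicted: float, observed: float,
                 abandon_ratio: float = 0.5) -> str:
    """Return 'adopt', 'adapt', or 'abandon' for a completed cycle.

    predicted/observed are success rates (0.0 to 1.0) from the Plan
    and Study stages. The abandon_ratio threshold is illustrative.
    """
    if observed >= predicted:
        return "adopt"    # change met the prediction: scale up the test
    if observed >= predicted * abandon_ratio:
        return "adapt"    # partial success: revise the change and rerun
    return "abandon"      # far below prediction: try a different approach

# A cycle that predicted a 70% success rate but observed 50%
# falls short of the prediction yet above the abandon threshold,
# so the rule suggests adapting and running another cycle.
print(act_decision(0.70, 0.50))
```

In practice this judgment is qualitative, made by the team while studying results against predictions, but encoding it makes the three paths explicit.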
The Three Questions That Frame Every Cycle
PDSA cycles are typically used within a broader structure called the Model for Improvement. Before you ever start planning a test, you answer three guiding questions:
- What are we trying to accomplish?
- How will we know whether a change is an improvement?
- What changes can we make that will result in improvement?
These questions prevent a common trap: jumping straight into action without clearly defining what success looks like or how you’ll measure it. The first question becomes your aim statement. The second forces you to choose specific, measurable indicators. The third generates the ideas you’ll actually test through PDSA cycles.
How Cycles Build on Each Other
A single PDSA cycle rarely solves a problem. The method is designed to be iterative, with each cycle building on what you learned in the last one. This sequence of linked cycles is called a PDSA ramp.
The idea is to start as small as possible, sometimes called the “power of one”: test with one person, one shift, or one case. Only when results look promising do you increase the scale. A practical example illustrates how this works. Say a home health program wants to improve breastfeeding rates among new mothers. In Cycle 1, one home visitor tests a new infant feeding plan with a single mother. In Cycle 2, the team revises the plan based on what they learned and has the same visitor use it with more mothers while a second visitor also tries it. By Cycle 3, the plan has been refined twice and is tested with more staff and families. Cycle 4 rolls it out across the entire program.
There’s no required number of cycles before you scale up. The rule is simple: only expand when your data show the change is working.
PDSA vs. PDCA
You’ll sometimes see the acronym PDCA, which stands for Plan, Do, Check, Act. The difference is in the third step. Deming himself preferred “Study” over “Check” because he felt that “Check” encourages a narrow pass/fail judgment: did the plan work or not? “Study,” by contrast, pushes you to compare your predictions against actual outcomes and build a deeper understanding of why things happened the way they did. The distinction sounds subtle, but it shifts the entire mindset from evaluating success to generating knowledge.
A Real-World Example
In one pathology lab, clinicians were frustrated that turnaround times for test results were unpredictable, with only 30% to 60% of cases signed out within five days depending on the week. Using a PDSA cycle, the team tested a simple resource change: reallocating a quarter of one employee’s time to the accessioning station during afternoon hours. After the first cycle, their weekly success rate for meeting the turnaround target jumped from 51% to 69%. That single small test gave the team concrete evidence to justify expanding the staffing change and running further cycles to refine it.
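The lab's weekly success metric is just the fraction of cases signed out within the five-day target. A minimal sketch of that measurement, using hypothetical turnaround data (the function name and sample values are invented for illustration):

```python
def weekly_signout_rate(days_to_signout: list[int],
                        target_days: int = 5) -> float:
    """Fraction of the week's cases signed out within the target."""
    met = sum(1 for d in days_to_signout if d <= target_days)
    return met / len(days_to_signout)

# Hypothetical turnaround times (in days) for one week of cases:
week = [3, 4, 6, 2, 5, 7, 4, 5, 8, 3]
print(weekly_signout_rate(week))  # 7 of 10 cases within 5 days -> 0.7
```

Tracking this number week by week, before and after the staffing change, is what let the team see the jump from 51% to 69% and justify the next cycle.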
Why PDSA Cycles Often Go Wrong
Despite its apparent simplicity, PDSA is frequently done poorly. A systematic review in BMJ Quality & Safety found that the biggest problem is oversimplification: teams treat the cycle as a loose suggestion rather than a disciplined method. Several specific failure patterns show up repeatedly.
The most common is rushing from Plan to Do without adequate preparation. Healthcare environments have a cultural pull toward “just get on with it,” which leads teams to skip the prediction step, collect vague data, or fail to define what they’re actually measuring. Once in the Do phase, many teams get stuck there and never progress to Study. They implement a change and move on without analyzing the results, which defeats the entire purpose.
Another frequent mistake is treating PDSA as a standalone tool. It works best when embedded within a larger improvement framework, with clear aims, defined measures, and organizational support. Teams also regularly fail to plan for what happens if the change works. Without a sustainability plan, performance often reverts to previous levels, and staff become frustrated and disengaged from future improvement efforts.
The resources and skills required to apply PDSA well are consistently underestimated. Running a rigorous cycle requires time to plan, discipline to collect data, and analytical ability to interpret results. Organizations that invest in training and dedicated improvement time get far better outcomes than those that bolt PDSA onto already-overwhelmed teams.
Documenting a PDSA Cycle
Most organizations use a simple worksheet to track each cycle. The Centers for Medicare & Medicaid Services publishes a widely used template that captures the essential information for each stage:
- Plan: What change are you testing? What do you predict will happen, and why? Who is involved? How long will it take? What data will you collect?
- Do: What actually happened? What problems or unexpected findings came up?
- Study: How did measured results compare to your predictions? What did you learn, including surprises and unintended consequences?
- Act: Will you adapt, adopt, or abandon? What modifications go into the next cycle?
Writing things down sounds obvious, but it’s essential. Without documentation, teams lose track of what they’ve already tested, repeat mistakes, and can’t communicate their findings to others. A completed worksheet also creates a record that justifies scaling up a successful change or explains why a different direction was chosen.
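The worksheet fields above map naturally onto a simple record, which some teams keep in a spreadsheet or lightweight database. A minimal sketch, with field names chosen for this example rather than taken from the CMS template verbatim:

```python
from dataclasses import dataclass

@dataclass
class PDSAWorksheet:
    """One row per cycle; field names here are illustrative."""
    # Plan
    change_tested: str
    prediction: str
    who_involved: str
    duration: str
    data_to_collect: str
    # Do (filled in during the test)
    what_happened: str = ""
    problems_found: str = ""
    # Study (filled in after the test)
    results_vs_prediction: str = ""
    lessons_learned: str = ""
    # Act
    decision: str = ""            # "adapt", "adopt", or "abandon"
    next_cycle_changes: str = ""

cycle1 = PDSAWorksheet(
    change_tested="New infant feeding plan",
    prediction="Mother continues breastfeeding at 2-week follow-up",
    who_involved="One home visitor, one mother",
    duration="Two weeks",
    data_to_collect="Feeding status at each visit",
)
```

Keeping each cycle as a separate record preserves the sequence of a PDSA ramp, so the team can see exactly what was tested, what was predicted, and why each adapt/adopt/abandon decision was made.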