Measuring quality improvement in healthcare requires tracking specific, standardized metrics over time and using analytical tools that can distinguish real improvement from normal fluctuation. The core framework, first articulated by the Institute of Medicine and adopted by the Centers for Medicare & Medicaid Services, organizes quality into six goals: effective, safe, efficient, patient-centered, equitable, and timely care. Each goal has corresponding measures that hospitals, clinics, and health plans use to quantify performance and demonstrate progress.
The Main Categories of Quality Measures
Quality measures fall into four broad types: process measures, outcome measures, patient experience measures, and structural measures. Process measures track whether the right steps were taken, like whether a patient with pneumonia received antibiotics within a certain window. Outcome measures track results, such as infection rates, readmission rates, or mortality. Patient experience measures capture what care felt like from the patient’s perspective. Structural measures assess whether an organization has the systems and resources in place to deliver good care, such as staffing ratios or electronic health record capabilities.
One of the most widely used sets of quality measures is HEDIS, maintained by the National Committee for Quality Assurance. It includes 87 measures spanning effectiveness of care, access and availability, experience of care, utilization, and health plan descriptive information. Health plans and providers use HEDIS to benchmark performance against national standards in areas like cancer screening rates, diabetes management, and immunization coverage.
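At its core, a HEDIS-style measure is a rate: the share of an eligible population that received the specified care, after documented exclusions are removed. The sketch below illustrates that arithmetic with entirely hypothetical member data and an illustrative age band; it is not a real HEDIS specification.

```python
# Sketch of a HEDIS-style rate: eligible members form the denominator,
# members meeting the care criterion form the numerator, and documented
# exclusions are removed first. All data and thresholds are hypothetical.

def measure_rate(members, eligible, met_criterion, excluded):
    """Return the compliance rate for one measure as a percentage."""
    denominator = [m for m in members if eligible(m) and not excluded(m)]
    numerator = [m for m in denominator if met_criterion(m)]
    if not denominator:
        return None
    return 100.0 * len(numerator) / len(denominator)

# Hypothetical plan population.
members = [
    {"id": 1, "age": 55, "screened": True,  "hospice": False},
    {"id": 2, "age": 62, "screened": False, "hospice": False},
    {"id": 3, "age": 58, "screened": True,  "hospice": True},   # excluded
    {"id": 4, "age": 45, "screened": False, "hospice": False},  # not eligible
]

rate = measure_rate(
    members,
    eligible=lambda m: 50 <= m["age"] <= 75,   # illustrative age band
    met_criterion=lambda m: m["screened"],
    excluded=lambda m: m["hospice"],
)
print(rate)  # 50.0 — one of two eligible, non-excluded members was screened
```

Benchmarking then amounts to comparing this rate against the national or regional distribution for the same measure.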
Where the Data Comes From
Reliable measurement depends on good data, and healthcare organizations pull from several sources. Administrative claims data, including Medicare claims, are among the most practical and cost-effective sources for tracking selected components of quality. They capture diagnoses, procedures, and billing codes across large populations without requiring manual chart review.
Electronic health records provide richer clinical detail, including lab results, medication lists, and clinician notes. Patient surveys add a dimension that clinical data can’t capture. Medical record abstraction, where trained staff pull specific data points from charts, remains necessary for measures that aren’t reliably coded in claims or electronic systems. Disease-specific registries, like the National Cancer Institute’s SEER Program, offer deep longitudinal data on outcomes for particular conditions.
Most organizations use a combination of these sources. Claims data might flag a readmission, the electronic health record provides clinical context, and a patient survey reveals whether discharge instructions were clear enough.
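That kind of combination is, in practice, a join on a shared patient identifier. The sketch below shows the idea with hypothetical field names and records, not a real claims or EHR schema.

```python
# Sketch: joining three hypothetical data sources on a shared patient ID to
# build one quality-review record. Field names are illustrative, not a real schema.

claims = {"P001": {"readmitted_30d": True}}
ehr = {"P001": {"discharge_dx": "CHF", "ef_percent": 30}}
surveys = {"P001": {"discharge_instructions_clear": False}}

def merged_record(patient_id):
    """Combine whatever each source knows about one patient."""
    record = {"patient_id": patient_id}
    for source in (claims, ehr, surveys):
        record.update(source.get(patient_id, {}))
    return record

case = merged_record("P001")
# The claim flags the readmission, the EHR supplies clinical context, and the
# survey points to a possible cause: unclear discharge instructions.
print(case["readmitted_30d"], case["discharge_instructions_clear"])
```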
Measuring Patient Experience
The HCAHPS survey is the national standard for measuring how patients perceive hospital care. It contains 29 questions (19 core) covering communication with nurses and doctors, staff responsiveness, hospital cleanliness and quietness, communication about medications, discharge information, care coordination, and information about symptoms after leaving. Patients also give an overall hospital rating and indicate whether they’d recommend the facility.
HCAHPS results are publicly reported, which means they influence both reputation and reimbursement. For quality improvement teams, this survey provides a direct line of sight into where care delivery breaks down from the patient’s point of view. A hospital might have excellent clinical outcomes but score poorly on communication about medications, signaling a specific, fixable gap.
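Publicly reported HCAHPS composites use "top-box" scoring: the percentage of patients who chose the most positive response, typically "Always." The responses below are hypothetical, and real reporting also applies patient-mix adjustment that this sketch omits.

```python
# Sketch of HCAHPS-style "top-box" scoring: the share of patients giving the
# most positive response. Responses are hypothetical; real public reporting
# also applies patient-mix adjustment not shown here.

def top_box(responses, top="Always"):
    """Percent of responses that are the most positive option."""
    if not responses:
        return None
    return 100.0 * sum(r == top for r in responses) / len(responses)

# Hypothetical answers to a medication-communication item.
med_communication = ["Always", "Usually", "Always", "Sometimes", "Always"]
score = round(top_box(med_communication), 1)
print(score)  # 60.0
```

A low top-box score on one composite, next to strong scores elsewhere, is exactly the kind of specific, fixable gap described above.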
Safety Standards and Accreditation Measures
The Joint Commission’s National Patient Safety Goals set baseline safety standards that function as prerequisites for accreditation. These goals are tailored to specific settings, including hospitals, surgical centers, nursing facilities, behavioral health, laboratories, and home care. They focus on accurate patient identification, medication and surgical safety, alarm safety, clinician communication, and prevention of hospital-acquired infections, falls, pressure ulcers, and inpatient suicide.
A recent addition highlights health care equity as a patient safety standard. Organizations are now expected to identify healthcare disparities in their patient populations and develop written plans to address them. This shifts equity measurement from an aspirational goal to an accreditation requirement.
Tools for Tracking Change Over Time
One of the most common mistakes in quality measurement is comparing a simple “before” and “after” snapshot and assuming any difference represents real improvement. Run charts and statistical process control (SPC) charts solve this problem by plotting data points over time and applying statistical rules to distinguish genuine change from random variation.
Run charts are the simpler of the two. They plot a measure, like hand hygiene compliance or wait times, against a median line and look for non-random patterns that signal a true shift. SPC charts go further by adding upper and lower control limits. Data points within those limits reflect common cause variation, meaning normal fluctuation baked into the system. Points outside those limits, or certain patterns within them, indicate special cause variation, meaning something meaningfully changed. This distinction matters because common cause variation requires system-level redesign, while special cause variation calls for investigating a specific event or intervention.
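One common SPC chart for this kind of data is the individuals (XmR) chart, where the control limits are the mean plus or minus 2.66 times the average moving range. The sketch below flags points outside those limits as special cause variation; the compliance figures are hypothetical.

```python
# Sketch of an individuals (XmR) control chart, a common SPC chart type.
# Control limits are the mean +/- 2.66 * (average moving range); points
# outside the limits signal special cause variation. Data are hypothetical
# monthly hand hygiene compliance percentages.

def xmr_limits(data):
    """Return (lower control limit, mean, upper control limit)."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

def special_cause_points(data):
    """Indices of points outside the control limits."""
    lcl, _, ucl = xmr_limits(data)
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

compliance = [81, 82, 80, 82, 81, 83, 80, 82, 98, 81]  # month 8 spikes
print(special_cause_points(compliance))  # [8]
```

The month-8 spike falls outside the upper control limit, so it warrants investigating a specific event; the ordinary month-to-month wobble does not.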
The PDSA Cycle for Testing Changes
The Plan-Do-Study-Act cycle is the most widely used framework for running quality improvement projects. It works by testing changes on a small scale before rolling them out broadly.
In the Plan phase, the team identifies a specific goal and predicts what will happen if they make a particular change. The Do phase implements that change on a small scale, deliberately limited so problems can be caught and corrected quickly. During the Study phase, the team compares predicted outcomes to actual outcomes, looking at the data honestly. The Act phase combines what was learned from all three prior stages into a refined plan, which then feeds into the next cycle.
The key principle is repetition. A single PDSA cycle rarely produces a finished solution. Each cycle sharpens the intervention, and over several rounds, the team converges on a change that reliably produces the desired result. Organizations often run multiple PDSA cycles in parallel for different aspects of the same problem.
Why Risk Adjustment Matters
Raw quality data can be misleading without accounting for the patients being treated. A hospital that serves older, sicker patients with multiple chronic conditions will naturally have higher mortality and readmission rates than one treating a younger, healthier population. Risk adjustment corrects for this by statistically accounting for factors that influence outcomes but are outside the provider’s control.
The variables used in risk adjustment include clinical factors like the number and severity of existing conditions, demographic factors like age and sex, functional factors like cognitive impairment, and social factors like income, education, and geography. Even medication patterns can serve as a proxy for illness severity, since the drugs a patient takes often reflect how advanced their condition is. Some models also include interaction terms, where two variables that are individually weak predictors combine to strongly predict an outcome, such as a prior mental health diagnosis interacting with another clinical variable.
Without risk adjustment, quality comparisons between providers are essentially meaningless. Two hospitals can deliver identical quality of care and produce very different raw outcome numbers simply because of differences in patient complexity.
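One simple form of risk adjustment is indirect standardization: apply reference outcome rates per risk stratum to each hospital's own patient mix to get an expected count, then compare observed to expected. The numbers below are hypothetical, but they show how two hospitals with very different raw mortality can have identical risk-adjusted performance.

```python
# Sketch of indirect standardization, a simple form of risk adjustment.
# Expected deaths come from reference rates per risk stratum applied to each
# hospital's own patient mix; the observed-to-expected (O/E) ratio then
# compares performance on a level footing. All numbers are hypothetical.

reference_mortality = {"low_risk": 0.01, "high_risk": 0.10}  # population rates

def oe_ratio(patient_counts, observed_deaths):
    """O/E ratio: < 1 is better than expected, > 1 worse."""
    expected = sum(reference_mortality[stratum] * n
                   for stratum, n in patient_counts.items())
    return observed_deaths / expected

# Hospital A treats mostly high-risk patients; Hospital B mostly low-risk.
hospital_a = oe_ratio({"low_risk": 200, "high_risk": 800}, observed_deaths=82)
hospital_b = oe_ratio({"low_risk": 800, "high_risk": 200}, observed_deaths=28)
print(round(hospital_a, 2), round(hospital_b, 2))  # 1.0 1.0
```

Hospital A's raw mortality is 8.2% against Hospital B's 2.8%, yet both have an O/E ratio of 1.0: each performed exactly as expected given its patient mix.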
Measuring Health Equity Gaps
Measuring equity means stratifying existing quality metrics by demographic and social factors to see whether care and outcomes differ across groups. A survey of U.S. healthcare systems found that race and ethnicity was the most commonly used filter for evaluating equity, applied in about 24% of all metric analyses. Sex followed at 17%, then age (14%), language (13%), payer type (11%), socioeconomic status (10%), and gender identity (8%).
The metrics most frequently examined through an equity lens relate to chronic disease management (24% of all reported metrics), with diabetes control and blood pressure management mentioned most often. Preventive care measures, including cancer screenings, mental health screenings, well-child visits, and immunizations, made up 16%. Acute care metrics, especially mortality, accounted for 15%. Patient experience (12%), social determinants of health (10%), and utilization and readmissions (9%) rounded out the picture.
This stratification turns a single quality measure into a diagnostic tool. A health system might find that its overall diabetes management rate looks strong, but when broken down by language preference or income level, a significant gap appears. That gap becomes the target for a focused improvement effort, and the same stratified metric tracks whether the gap narrows over time.
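The stratification itself is a straightforward group-by on the metric, sketched below with hypothetical patient records and group labels.

```python
# Sketch of stratifying one quality metric by a demographic field to surface
# an equity gap. Records and group labels are hypothetical.

from collections import defaultdict

def stratified_rates(records, group_key, outcome_key):
    """Return {group: percent of records where outcome_key is True}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        hits[r[group_key]] += bool(r[outcome_key])
    return {g: 100.0 * hits[g] / totals[g] for g in totals}

# Hypothetical diabetes-control outcomes by language preference.
patients = (
    [{"language": "English", "a1c_controlled": True}] * 70
    + [{"language": "English", "a1c_controlled": False}] * 30
    + [{"language": "Spanish", "a1c_controlled": True}] * 11
    + [{"language": "Spanish", "a1c_controlled": False}] * 9
)

rates = stratified_rates(patients, "language", "a1c_controlled")
gap = rates["English"] - rates["Spanish"]
print(rates, round(gap, 1))  # 70.0 vs 55.0: a 15-point gap
```

The overall rate here (67.5%) looks respectable; only the stratified view exposes the 15-point gap that an improvement effort would target and track.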