Hospital mortality rates generate more debate, and more misuse, than any other quality metric I work with. Done properly, they're one of the most powerful tools for comparing hospital quality. Done sloppily, they're actively misleading. This piece is about doing it properly.
Raw Mortality vs. Risk-Adjusted Mortality
Raw mortality rate = number of deaths / number of patients. This number tells you almost nothing useful about hospital quality, because it doesn't account for who the patients are.
A major academic medical center that accepts complex transfers, manages multi-organ failure, and performs high-risk surgeries on patients with three comorbidities will have a higher raw mortality rate than a community hospital that manages routine cases. That doesn't mean the academic center is less safe; it often means the opposite.
Risk-adjusted mortality statistically controls for the severity and case mix of the patient population. CMS uses claims-based risk adjustment models that account for age, comorbidities, and diagnosis severity. The question becomes: given this hospital's patient population, how does their death rate compare to what we'd predict for a similar mix of patients nationally?
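If you like seeing the arithmetic, here's a minimal Python sketch of that comparison. It assumes you already have a per-patient predicted risk of death from some model; CMS actually fits hierarchical logistic regression models on claims data, so treat this as the shape of the calculation, not the real methodology.

```python
# Simplified sketch of risk adjustment -- NOT CMS's actual model.
# We assume each patient already has a modeled probability of death
# (from age, comorbidities, diagnosis severity) and compare observed
# deaths to the expected count for that case mix.

def raw_mortality_rate(deaths: int, patients: int) -> float:
    """Deaths divided by patients -- ignores who the patients are."""
    return deaths / patients

def risk_standardized_rate(observed_deaths: int,
                           predicted_risks: list[float],
                           national_rate: float) -> float:
    """Observed/expected ratio, scaled to the national rate.

    predicted_risks: each patient's modeled probability of death
    (hypothetical inputs, for illustration only).
    """
    expected_deaths = sum(predicted_risks)
    return (observed_deaths / expected_deaths) * national_rate

# A hospital with sicker patients: 12 deaths in 100 admissions,
# but the case mix predicted ~13 deaths.
risks = [0.13] * 100                      # toy case mix: high-risk patients
print(raw_mortality_rate(12, 100))        # 0.12 -- looks bad in isolation
print(risk_standardized_rate(12, risks, national_rate=0.10))
# ~0.092 -- better than expected once case mix is accounted for
```

Same twelve deaths, opposite conclusion: the raw rate sits above the national 10%, while the risk-standardized rate sits below it.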
How CMS Reports Mortality
CMS reports 30-day risk-adjusted mortality for six conditions: heart attack (AMI), heart failure, pneumonia, COPD, stroke, and CABG surgery. Results are expressed as "better than the national rate," "no different from the national rate," or "worse than the national rate," based on 95% confidence intervals.
Reading Mortality Data Correctly
- "Better than national rate" โ Hospital's mortality is statistically significantly lower than the national benchmark. A genuine quality signal.
- "No different from national rate" โ This covers about 80% of hospitals and a wide performance range. It doesn't mean the hospital is average; it means we can't statistically distinguish it from average.
- "Worse than national rate" โ Statistically significantly higher mortality than expected. This is a meaningful red flag.
Why "No Different" Doesn't Mean Average
The "no different" category is where most of the meaningful variation in hospital quality actually lives. A hospital performing genuinely well but serving a complex patient population may land in "no different" because the confidence intervals are wide. A hospital with genuinely problematic care may also land in "no different" because they see too few cases for statistical significance to kick in.
This is why composite scores like our SafeHospitals USA Safety Score are valuable: they aggregate across many measures, allowing meaningful differentiation even where individual measures lack statistical power.
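As a generic illustration of why aggregation helps (this is not the actual Safety Score methodology), averaging standardized scores across several independent measures shrinks the noise roughly with the square root of the number of measures:

```python
# Pooling weak signals: each measure alone misses the significance
# bar, but their average clears it. Assumes independent measures
# with unit-variance noise -- an idealization for illustration.
import math

def composite_z(measure_z_scores: list[float]) -> tuple[float, float]:
    """Mean z across measures and its approximate standard error."""
    k = len(measure_z_scores)
    return sum(measure_z_scores) / k, 1 / math.sqrt(k)

# Each individual measure is a weak signal (|z| < 1.96)...
zs = [1.1, 0.9, 1.4, 0.8, 1.2, 1.0]
mean_z, se = composite_z(zs)
print(mean_z / se)   # ~2.61 -- the pooled signal clears the bar
```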
The 30-Day Measurement Window
CMS measures deaths within 30 days of admission, including deaths that occur after discharge. This is intentional. It captures deaths that result from complications, inadequate post-discharge care, and premature discharge. A patient discharged after a heart attack who dies at home on day 22 still counts against the hospital's mortality rate.
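In code, the attribution rule is a one-liner; the field names here are hypothetical:

```python
# 30-day attribution: a death counts if it occurs within 30 days of
# admission, whether in the hospital or after discharge.
from datetime import date

def counts_against_hospital(admitted: date, died: date | None) -> bool:
    """True if the patient died within 30 days of admission."""
    return died is not None and (died - admitted).days <= 30

# Discharged after a heart attack, dies at home on day 22: counts.
print(counts_against_hospital(date(2024, 3, 1), date(2024, 3, 23)))  # True
# Dies on day 45: outside the measurement window.
print(counts_against_hospital(date(2024, 3, 1), date(2024, 4, 15)))  # False
```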
This design feature is important for patients to understand: CMS is measuring the quality of the entire episode of care, not just in-hospital events.