Is MIPS missing the mark?

Study shows program not always accurate in identifying doctors who deliver high-quality care

How good is Medicare’s Merit-based Incentive Payment Program (MIPS) at measuring the quality of care doctors provide? Not very, according to results of a new study.

Introduced in 2017 as a replacement for three previous quality measurement programs, MIPS’s goal was to improve patient care by financially rewarding or penalizing doctors according to their performance on specific “process” and “outcome” metrics in four broad areas: cost, quality, improvement activities, and promoting interoperability.

Participating doctors choose six metrics to report, one of which must be an outcome measure such as hospital admission for a specific illness or condition. MIPS is now the nation’s largest value-based payment program.

For the study, researchers analyzed data culled from Medicare datasets and claims records for 3.4 million patients who received care from approximately 80,000 primary care doctors in 2019. They compared doctors’ overall MIPS scores with their scores on five process measures, such as breast cancer screening, tobacco screening, and diabetic eye examinations; and six outcome measures, such as emergency department (ED) visits and hospitalizations.

The results revealed no consistent association between performance on the measures and overall MIPS scores. For example, while doctors with low MIPS scores performed significantly worse on average than doctors with high MIPS scores on three of the five process measures studied, they performed slightly better on the other two.

On the outcome measures, low-scoring doctors did significantly better on the metric of ED visits per 1,000 patients, significantly worse on all-cause hospitalizations per 1,000 patients, and not significantly different on the other four measures.

Similarly, 19% of physicians with low MIPS scores performed in the top composite outcomes performance quintile, while 21% of those with high MIPS scores had outcomes in the bottom quintile.

“What these results suggest is that the MIPS program’s accuracy in identifying high- versus low-performing providers is really no better than chance,” Amy Bond, Ph.D., an assistant professor of population health sciences at Weill Cornell Medicine and the study’s lead author, said in an accompanying news release.

The authors offer several possible explanations for their findings. Among them: the difficulty of making meaningful comparisons when doctors are allowed to choose the measures they report on; the fact that—as other research has demonstrated—many program measures are either invalid or of uncertain validity and thus may not be linked to better outcomes; and that good scores may reflect the ability to collect, analyze and report data rather than actually providing better medical care.

The latter explanation, they say, is supported by the finding that participants with low MIPS scores were more likely to work in small and independent practices yet often had clinical outcomes similar to those of doctors in large, system-affiliated practices with high MIPS scores.

“MIPS scores may reflect doctors’ ability to keep up with MIPS paperwork more than they reflect their clinical performance,” Bond said.

The study, “Association Between Individual Primary Care Physician Merit-based Incentive Payment System Score and Measures of Process and Patient Outcomes,” was published December 6 in JAMA.
