Public Release of Clinical Outcomes Data — Online CABG Report Cards

This article (10.1056/NEJMp1009423) was published on September 7, 2010, at NEJM.org.

On September 7, 2010, Consumers Union (publisher of Consumer Reports) reported the results of coronary-artery bypass grafting (CABG) procedures at 221 U.S. cardiac surgery programs. The voluntary reporting of risk-adjusted outcomes in approximately 20% of U.S. cardiac surgery programs is a watershed event in health care accountability.

The reported ratings derive from a registry developed by the Society of Thoracic Surgeons (STS) in 1989. More than 90% of the approximately 1100 U.S. cardiac surgery programs participate in the registry. Registry data are collected from patients' charts and include key outcomes such as complications and death, the severity of preoperative illness, coexisting conditions, surgical technique, and medications. These data are maintained by the Duke Clinical Research Institute and are analyzed with the use of well-tested statistical methods. The data-collection and auditing methods, specifications of the measures, and statistical approaches have evolved over the course of two decades and reflect a substantial commitment by cardiac surgeons and their leadership.

For years, participants in the STS registry have been examining these data and using them to make improvements. What does the public now get to see? Each surgical program that has chosen to make its data public is assigned a rating of one, two, or three stars. Stars are assigned on the basis of results on 11 performance measures that have been endorsed by the National Quality Forum (see table below).

The rating depends on whether the risk-adjusted outcomes in a program fall below, within, or above the average performance range. The performance thresholds are designed to ensure a 99% probability that outlier programs — those rated significantly below or above the mean and therefore given one and three stars, respectively — are truly below or above average. With the use of this method, 23 to 27% of the programs have been identified as outliers over the past 3 years. In addition to the star rating for overall performance, consumers see the star rating and actual performance scores (on a scale from 0 to 100) in four subcategories: 30-day survival (“patients have a 98% chance of surviving at least 30 days after the procedure and of being discharged from the hospital”), complications (“patients have an 89% chance of avoiding all five of the major complications”), use of appropriate medications (“patients have a 90% chance of receiving all four of the recommended medications”), and surgical technique (“patients have a 98% chance of receiving at least one optimal surgical graft”).
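The threshold logic described above — flagging a program as an outlier only when there is high confidence that its performance truly differs from the average — can be sketched for illustration as an interval test. This is a toy normal-approximation sketch with hypothetical numbers, not the STS's actual Bayesian risk-adjustment methodology:

```python
import math

def classify_program(observed_events, n_patients, expected_rate, z=2.576):
    """Toy outlier classification for illustration only.

    A program is flagged as a low (one-star) or high (three-star)
    outlier only when a 99% confidence interval for its event rate
    lies entirely above or below the expected (average) rate.
    The real STS method uses Bayesian hierarchical risk adjustment;
    this normal approximation merely illustrates the threshold idea.
    """
    p_hat = observed_events / n_patients
    se = math.sqrt(p_hat * (1 - p_hat) / n_patients)
    lower, upper = p_hat - z * se, p_hat + z * se
    if lower > expected_rate:
        return 1  # confidently worse than average: one star
    if upper < expected_rate:
        return 3  # confidently better than average: three stars
    return 2      # indistinguishable from average: two stars

# A hypothetical program with 9 deaths in 300 cases vs. a 2% expected rate:
# the interval still overlaps 2%, so the program is rated two stars.
print(classify_program(9, 300, 0.02))  # → 2
```

Because the interval must clear the expected rate entirely, small programs with few cases are rarely flagged — which is exactly the misclassification-averse behavior the STS design favors.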

The move on the part of the STS to make results available to the public will certainly trigger a cascade of responses. Advocates of transparency will point to the shortcomings of the ratings — the voluntary and therefore selective participation of programs (50 of the programs that have chosen to report their data have received three stars, whereas only 5 have received one star), the lack of long-term outcomes (e.g., 10-year survival, graft patency, and functional improvement), and the lack of physician-specific ratings. Expect such advocates to push for more. Nonparticipating cardiac surgery programs will come under pressure to allow the outcomes in their programs to be reported. Physicians in other surgical specialties that are amenable to this type of approach, such as orthopedics or vascular surgery, may be expected to follow suit. And this event will fuel the debate regarding the risks and benefits of public reporting, including the question of whether it assists patients in discriminating among sites of care. While these issues play out, several aspects of this release of ratings deserve attention.

First, years of pressure from policymakers, health care purchasers, and patient-advocacy groups to provide greater accountability played a major role in bringing this publication to fruition. Public reporting of outcomes has widespread support, and cardiac surgeons have been among the principal targets of these efforts. The first statewide report card on cardiac surgical performance was mandated in New York in 1989. Early experiences with public reporting of the outcomes of cardiac surgery spurred efforts by the STS and others to improve cardiac surgery. Although some consumer advocates pushing for transparency may view this release as a glass four-fifths empty — given the selectivity and number of programs reporting — the external pressure has been critical in stimulating improvement efforts within the medical profession.

Second, the publication of definitive analyses derived from clinical data can be a double-edged sword for providers. When performance reports are based on administrative data, physicians often justifiably argue that the data are flawed and the conclusions suspect. In contrast, with these new ratings, not only have the participants endorsed the methods, but they have volunteered to display performance results that carry the imprimatur of the physicians' specialty society. Experience with performance reporting in Massachusetts has shown that when the data and analyses are as good as possible, a public report of suboptimal performance requires a substantive public response: state Department of Public Health officials suspended a Massachusetts cardiac surgery program to conduct an external review, amidst substantial media attention, when the program was identified as a high-mortality outlier.

Third, the process of moving clinical data from the STS registry into the public domain has been long, complex, and expensive. As a member-supported organization, the STS navigated treacherous waters to bring its members to the point of permitting the publication of their data. Some key decisions facilitated this process: the STS reported group-level rather than physician-level data, rigorously validated its data-collection and risk-adjustment models, and selected a performance-classification system that maximized specificity. Such choices helped to mitigate physicians' biggest fear: the risk of misclassification. Moreover, cardiac surgery programs have been looking at these data for years, so there shouldn't be any surprises. The success that the STS has had in leading a nontrivial fraction of its members to agree to participate suggests that public reporting can be done in a way that doesn't alienate the profession.

There is no question about the need for accountability on the part of health care providers or the central role of measurement in the improvement of health care. Nonetheless, questions remain about the role of public reporting in improving health care. Performance measurements audited by regulators are one alternative, especially in situations in which the information is too complex for patients to use in discriminating among care sites. Insofar as public reporting drives improvement of all outcomes, it benefits everyone; insofar as risk aversion leads to changes in the population receiving an indicated service, the net effect can be nil or even negative. Given the heterogeneity in the delivery of medical services, it should come as no surprise that we have developed multiple methods for assessing performance and encouraging accountability. Regardless of which approach proves most beneficial to patients, public reporting will increasingly be a fact of life for physicians.

By publishing ratings using the best available data, the STS has responded to the public in a way that attempts to both inform patients and mitigate physicians' fears. We hope that the experience of the STS can be applied to other initiatives that are aimed at bringing performance data derived from clinical sources to the public, thereby reducing the time and expense of this process. For example, this experience may contain lessons for the Centers for Medicare and Medicaid Services as it prepares to handle the wave of clinical data it will receive through the Physician Quality Reporting Initiative and the “meaningful use” program for electronic health records. At least some of these data will almost certainly be publicly reported. The STS's success suggests that reporting can be done in a way that physicians will support. Whether the STS approach is an anomaly or a precedent that other specialty groups will emulate remains to be seen.

Disclosure forms provided by the authors are available with the full text of this article at NEJM.org.
Source Information
From the Massachusetts General Physicians Organization, Massachusetts General Hospital, Boston.