Thursday, June 21, 2007
Hospital Physician Relationships - Hospital Quality Ratings
Useful Tools or a Flawed Exercise in Futility?
Perhaps I’m a contrarian. Perhaps I’m overly pragmatic. Perhaps I’m cynical because I always consider the source. Or maybe, just maybe, based on my experience on the Medical Advisory Board of America’s Top Doctors, I trust doctors’ judgments about who the best doctors are more than mere data. Reputation, in other words, in my mind often trumps data.
I know these attitudes must sound naïve, but that’s how the minds of many of us non-managerial types work. Reputation and related factors, word of mouth, personal knowledge, private referrals, and closeness to home, are more powerful than the often conflicting ratings and data emanating from Internet, government, or media sources.
In any event, three articles, one in Medscape’s General Journal, one in the AMA News, and a third in the New York Times, prompt this blog.
• The Medscape piece, posted 6/1/07, appeared first in the Journal of Hospital Medicine. It bears the title “Conflicting Measures of Hospital Quality from ‘Hospital Compare’ Vs. the U.S. News and World Report’s Best Hospitals.” The article notes that in April 2005 CMS launched “Hospital Compare,” the first government-sponsored hospital quality scorecard. The authors compared the government rankings against U.S. News and World Report’s Best Hospitals lists. Here is what the authors conclude.
The Best Hospitals lists and Hospital Compare core measure scores agree only a minority of the time on the best institutions for the care of cardiac and respiratory conditions in the United States. Prominent, publicly reported hospital quality scorecards that paint discordant pictures of institutional performance potentially present a conundrum for physicians, patients, and payers with growing incentives to compare institutional quality.
If the movement to improve health care quality is to succeed, the challenge will be to harness the growing professional and lay interest in quality measurement to create rating scales that reflect the best aspects of Hospital Compare and the Best Hospitals lists, with the broadest inclusion of institutions and scope of conditions. For example, it would be more helpful to the public if the Best Hospitals lists included available Hospital Compare measures. It would also benefit consumers if Hospital Compare included more metrics about preventive and elective procedures, domains in which consumers can maximally exercise their choice of health care institutions. Moreover, voluntary reporting may constrain the quality effort. Only with mandatory reporting on quality measures will consistent and sufficient institutional accountability be achieved.
• The second article, in the June 18, 2007 AMA News, has this title and subtitle: “Coping with Rankings: More Plans Are Rating Physicians, but Patients Aren’t Keeping Score. Doctors Still Have Time to Pressure Insurers for Accurate Data or None at All.” The article says demand for publicly available performance measurements is up, and corporations are demanding this data more and more. In response, the AMA-led Physician Consortium for Performance Improvement and CMS have created 184 measurements they believe provide accurate evidence-based measures and outcome data. The trend is inevitable, so doctors are learning to shape the new data through negotiation with health plans. Harris Interactive polling indicates that less than 1% of patients use ratings to select their doctor. Most patients still rely on word of mouth. But health plans and corporations may think differently. They believe ratings are an effective tool for identifying the best doctors.
• And then there’s the New York Times’ June 14 revelation, “In Health Care, Cost Isn’t Proof of High Quality.” Reed Abelson, the reporter, leads off with this commentary:
“Stark evidence that high medical payments do not necessarily buy high-quality patient care is presented in a hospital study set for release today. In a Pennsylvania government survey of the state’s 60 hospitals that perform heart bypass surgery, the best-paid hospital received nearly $100,000, on average, for the operation while the least-paid got less than $20,000. At both, patients had comparable lengths of stay and death rates.
And among the 20 hospitals serving metropolitan Philadelphia, two of the highest paid actually had higher-than-expected death rates, the survey found.
Hospitals say there are numerous reasons for some of the high payments, including the fact that a single very expensive case can push up the averages.”
Still, the Pennsylvania findings support a growing national consensus that as consumers, insurers and employers pay more for care, they are not necessarily getting better care. Expensive medicine may, in fact, be poor medicine.”
The Times has been running a series of articles on medicine and money, and it has reached the conclusions that doctors and hospitals vary in their pricing, that their results vary, that greed, fraud, and bill padding must be involved, and that surely, if everyone, including the public and payers, knew and understood the data and the underlying costs, more homogeneous, more favorable, and lower pricing would logically follow.
The reporter concludes:
“As eye-opening as the Pennsylvania report may be to the public, insurers have already been aware that their payment practices do not necessarily encourage hospitals to provide better care. Medicare, for example, pays essentially a flat fee, which varies depending on location and type of hospital, for the same surgery, regardless of outcome. Complications tend to simply mean additional payments.”
The basic idea behind the data-seekers, as I see it, is that we should pay more for better care and less for lesser care. That intuitively makes good sense. It may sound simple, but it is not, and the process can be distorted by a few cases with horrible outcomes and by hospitals’ locations, their varying constituencies, and their expectations.
It is fashionable in government and managerial circles to say that anything that can be measured can be managed. It is also fashionable, in conservative circles, to say that health care is so full of interactive permutations and combinations and personal considerations and choices that only the impersonal forces of the market can sort it all out. Finally, there are those who say hospital care is too “secretive,” and that therefore only data and total “transparency” can identify the best, reveal the outliers, and smooth out cost and quality variabilities.
All of these critics may be right, and yet all of them may be wrong. Unfortunately, variable costs, variable quality, and variable outcomes are a function of humanity, regional cultures, and their constituencies. Independent variables are part of the human condition, and some of these variations may be beyond managerial control. Health costs are 30% higher in Boston because the Boston public is enamored of the halo of quality surrounding academic medical centers. Health costs in Miami are higher because the seniors there may consider health care “free” because of Medicare and Medigap policies. In New York City, costs of last illnesses are 40% higher at Columbia Presbyterian than at the Mayo Clinic because people have different cultural expectations, e.g., access to the best independent specialists. Variation in cost, quality, and outcomes is not always about greed or fraud or mismanagement. It may be about cultural expectations.
I remember Peter F. Drucker’s comment, “Large health care organizations may be the most complex organizations in human history.” It’s going to take a while to establish criteria to judge and sort out the good, the bad, and the ugly. Public disclosure of outcome data and performance data on the processes of care may help, but they are only part of a complicated human equation.