Monday, April 13, 2009

Dangerous Care


The following article from the Wall Street Journal (Why 'Quality' Care Is Dangerous - WSJ.com) could not describe the problem of "cookbook" medicine any better. Strict adherence to rules that are themselves subject to change is detrimental to the health of patients and is not something we should strive for.

Good physicians will use all available clinical guidelines and current research, but they understand the importance of making decisions based on the individual patient. Placing barriers on them is not beneficial.

The Obama administration is working with Congress to mandate that all Medicare payments be tied to "quality metrics." But an analysis of this drive for better health care reveals a fundamental flaw in how quality is defined and metrics applied. In too many cases, the quality measures have been hastily adopted, only to be proven wrong and even potentially dangerous to patients.

Health-policy planners define quality as clinical practice that conforms to consensus guidelines written by experts. The guidelines present specific metrics for physicians to meet, thus "quality metrics." Since 2003, the federal government has piloted Medicare projects at more than 260 hospitals to reward physicians and institutions that meet quality metrics. The program is called "pay-for-performance." Many private insurers are following suit with similar incentive programs.

In Massachusetts, there are not only carrots but also sticks; physicians who fail to comply with quality guidelines from certain state-based insurers are publicly discredited, and their patients are required to pay up to three times as much out of pocket to see them. Unfortunately, many states are considering the Massachusetts model for their local insurance.

How did we get here? Initially, the quality improvement initiatives focused on patient safety and public-health measures. The hospital was seen as a large factory where systems needed to be standardized to prevent avoidable errors. A shocking degree of sloppiness existed with respect to hand washing, for example, and this largely has been remedied with implementation of standardized protocols. Similarly, the risk of infection when inserting an intravenous catheter has fallen sharply since doctors and nurses now abide by guidelines. Buoyed by these successes, governmental and private insurance regulators now have overreached. They've turned clinical guidelines for complex diseases into iron-clad rules, to deleterious effect.

One key quality measure in the ICU became the level of blood sugar in critically ill patients. Expert panels reviewed data on whether ICU patients should have insulin therapy adjusted to tightly control their blood sugar, keeping it within the normal range, or whether a more flexible approach, allowing some elevation of sugar, was permissible. Expert consensus endorsed tight control, and this approach was embedded in guidelines from the American Diabetes Association. The Joint Commission on Accreditation of Healthcare Organizations, which generates report cards on hospitals, and governmental and private insurers that pay for care, adopted as a suggested quality metric this tight control of blood sugar.

A colleague who works in an ICU in a medical center in our state told us how his care of the critically ill is closely monitored. If his patients have blood sugars that rise above the metric, he must attend what he calls "re-education sessions" where he is pointedly lectured on the need to adhere to the rule. If he does not strictly comply, his hospital will be downgraded on its quality rating and risks financial loss. His status on the faculty is also at risk should he be seen as delivering low-quality care.

But this coercive approach was turned on its head last month when the New England Journal of Medicine published a randomized study, by the Australian and New Zealand Intensive Care Society Clinical Trials Group and the Canadian Critical Care Trials Group, of more than 6,000 critically ill patients in the ICU. Half of the patients received insulin to tightly maintain their sugar in the normal range, and the other half were on a more flexible protocol, allowing higher sugar levels. More patients died in the tightly regulated group than those cared for with the flexible protocol.

Similarly, maintaining normal blood sugar in ambulatory diabetics with vascular problems has been a key quality metric in assessing physician performance. Yet largely due to two extensive studies published in the June 2008 issue of the New England Journal of Medicine, this is now in serious doubt. Indeed, in one study of more than 10,000 ambulatory diabetics with cardiovascular diseases conducted by a group of Canadian and American researchers (the "ACCORD" study), so many diabetics died in the group where sugar was tightly regulated that the researchers discontinued the trial 17 months before its scheduled end.

And just last month, another clinical trial contradicted the expert consensus guidelines that patients with kidney failure on dialysis should be given statin drugs to prevent heart attack and stroke.

These and other recent examples show why rigid and punitive rules to broadly standardize care for all patients often break down. Human beings are not uniform in their biology. A disease with many effects on multiple organs, like diabetes, acts differently in different people. Medicine is an imperfect science, and its study is also imperfect. Information evolves and changes. Rather than rigidity, flexibility is appropriate in applying evidence from clinical trials. To that end, a good doctor exercises sound clinical judgment by consulting expert guidelines and assessing ongoing research, but then decides what is quality care for the individual patient. And what is best sometimes deviates from the norms.

Yet too often quality metrics coerce doctors into rigid and ill-advised procedures. Orwell could have written about how the word "quality" became zealously defined by regulators, and then redefined with each change in consensus guidelines. And Kafka could detail the recent experience of a pediatrician featured in Vital Signs, the member publication of the Massachusetts Medical Society. Out of the blue, according to the article, Dr. Ann T. Nutt received a letter in February from the Massachusetts Group Insurance Commission on Clinical Performance Improvement informing her that she was no longer ranked as Tier 1 but had fallen to Tier 3. (Massachusetts and some private insurers use a three-tier ranking system to incentivize high-quality care.) She contacted the regulators and insisted that she be given details to explain her fall in rating.

After much effort, she discovered that in 127 opportunities to comply with quality metrics, she had met the standards 115 times. But the regulators refused to provide the names of patients who allegedly had received low quality care, so she had no way to assess their judgment for herself. The pediatrician fought back and ultimately learned which guidelines she had failed to follow. Despite her cogent rebuttal, the regulator denied the appeal and the doctor is still ranked as Tier 3. She continues to battle the state.

Doubts about the relevance of quality metrics to clinical reality are even emerging from the federal pilot programs launched in 2003. An analysis of Medicare pay-for-performance for hip and knee replacement by orthopedic surgeons at 260 hospitals in 38 states published in the most recent March/April issue of Health Affairs showed that conforming to or deviating from expert quality metrics had no relationship to the actual complications or clinical outcomes of the patients. Similarly, a study led by UCLA researchers of over 5,000 patients at 91 hospitals published in 2007 in the Journal of the American Medical Association found that the application of most federal quality process measures did not change mortality from heart failure.

State pay-for-performance programs also provide disturbing data on the unintended consequences of coercive regulation. Another report in the most recent Health Affairs evaluating some 35,000 physicians caring for 6.2 million patients in California revealed that doctors dropped noncompliant patients, or refused to treat people with complicated illnesses involving many organs, since their outcomes would make their statistics look bad. And research by the Brigham and Women's Hospital published last month in the Journal of the American College of Cardiology indicates that report cards may be pushing Massachusetts cardiologists to deny lifesaving procedures to very sick heart patients out of fear of receiving a low grade if the outcome is poor.

Dr. David Sackett, a pioneer of "evidence-based medicine," where results from clinical trials rather than anecdotes are used to guide physician practice, famously said, "Half of what you'll learn in medical school will be shown to be either dead wrong or out of date within five years of your graduation; the trouble is that nobody can tell you which half -- so the most important thing to learn is how to learn on your own." Science depends upon such a sentiment, and honors the doubter and iconoclast who overturns false paradigms.

Before a surgeon begins an operation, he must stop and call a "time-out" to verify that he has all the correct information and instruments to safely proceed. We need a national time-out in the rush to mandate what policy makers term quality care to prevent doing more harm than good.

Dr. Groopman, a staff writer for the New Yorker, and Dr. Hartzband are on the staff of Beth Israel Deaconess Medical Center in Boston and on the faculty of Harvard Medical School.
