Showing posts with label Quality indicators. Show all posts

Wednesday, March 16, 2016

Why hospital rankings are bogus

At the end of 2015, The Leapfrog Group announced its annual list of America’s top hospitals for quality and safety, with 98 hospitals receiving the honor.

Unlike some other hospital rating schemes, Leapfrog’s does not factor in reputation. You won’t find any of the usual suspects on Leapfrog’s list. Instead, Leapfrog uses surveys of hospitals and publicly available quality and safety data.

Leapfrog’s top 98 included 62 urban, 24 rural, and 12 children’s hospitals. Of the 86 urban and rural hospitals, only three were university hospitals—University of California Davis Medical Center, University of California Irvine Medical Center, and University of Tennessee Medical Center.

New York managed to place only one hospital on the Leapfrog list.

Other interesting anomalies are that for several states such as Connecticut, Indiana, and Maryland, no hospitals made the list, and of the 21 California hospitals that did, 17 are Kaiser-affiliated. Looks like Kaiser knows how to play the game.

Wednesday, August 12, 2015

Why in-hospital deaths are not a good quality measure

You may be tired of hearing about the Surgeon Scorecard—the surgeon rating system that was recently released by an organization called ProPublica. Like many others, I have pointed out some flaws in it. You can read my previous posts here and here.

I had decided to stop commenting about it because enough is enough, but a recent paper in the BMJ raises a question about one of the criteria ProPublica used to formulate its ratings.

ProPublica defined complications as 1) any patient readmission within 30 days and 2) "any patient deaths during the initial surgical stay."

The authors of the BMJ paper randomly selected 100 records of patients who died at each of 34 hospitals in the United Kingdom. The 3400 records were reviewed by experts to determine whether a death could have been avoided if the quality of care had been better.

The number of patient records in which a death was judged at least 50% likely to have been avoidable was 123, or 3.6%.

There was a very weak association between the number of preventable deaths and the overall number of deaths occurring at each hospital. By two measures of overall hospital deaths, the hospital standardized mortality ratio and the summary hospital-level mortality indicator, the correlation coefficient between avoidable deaths and all deaths was 0.3, which was not statistically significant.

From the paper: "The absence of even a moderately strong association is a reflection of the small proportion of deaths (3.6%) judged likely to be avoidable and of the relatively small variation in avoidable death proportions between trusts [hospitals]. This confirms what others have demonstrated theoretically—that is, no matter how large the study the signal (avoidable deaths) to noise (all deaths) ratio means that detection of significant differences between trusts is unlikely."
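The paper's statistical point can be verified with a quick calculation. A minimal sketch, assuming the standard two-tailed t-test on a Pearson correlation; the 0.3 correlation and the 34 hospitals come from the study described above:

```python
import math

def correlation_t_stat(r, n):
    """t-statistic for testing a Pearson correlation r against zero,
    using t = r * sqrt(n - 2) / sqrt(1 - r^2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

r, n = 0.3, 34  # correlation and hospital count from the BMJ paper
t = correlation_t_stat(r, n)
print(f"t = {t:.2f} with {n - 2} degrees of freedom")
# The two-tailed 5% critical value for 32 degrees of freedom is about 2.04,
# so t ≈ 1.78 falls short of significance, consistent with the paper's finding.
```

With so few hospitals, even a genuine correlation of 0.3 cannot be distinguished from noise, which is the signal-to-noise problem the authors describe.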

The Surgeon Scorecard was derived from administrative data. No individual analysis of patient deaths was undertaken. According to a ProPublica article discussing some key questions about their methodology, "As for deaths, we took a conservative approach and only included those that occurred in the hospital within the initial stay."

Maybe that wasn't such a conservative approach after all.

And maybe we need to rethink that 2013 paper claiming that medical error caused up to 440,000 deaths per year.

Tuesday, November 5, 2013

Should all surgeons have video assessments of their skills?

Last month, a superb study by the Michigan Bariatric Surgery Collaborative showed that the more skilled surgeons were, the better were their outcomes.

Surgeons submitted a video of their choice depicting their performance of a laparoscopic gastric bypass. Since it was self-selected, it was presumably their best work. At least 10 of their peers, blinded as to the name of the surgeon, rated skills on the video which had been edited to include only the key portions of the case.

Surgeons in the lowest quartile of ratings for surgical skill had significantly more postoperative complications, readmissions, reoperations, and deaths.

A New York Times article about the paper features a couple of short video clips—one from a not-so-skilled and one from a very skilled surgeon. The differences are obvious and dramatic.

According to the discussion section of the paper, the Michigan bariatric surgeons are now watching each other operate and will soon be receiving anonymous feedback about their technique from their peers.

It is not clear whether this will improve the skills of the lower-rated surgeons or have any effect on outcomes.

Many people rightfully praised the research. Some suggested that all surgeons should be scrutinized in this same fashion.

I agree that the study was well-done and shows that technically better surgeons have better outcomes.

But there are some problems with generalizing this to all surgeons.

The American Board of Surgery recently noted that there are almost 30,000 board-certified general surgeons in the US. This raises a number of logistical issues.

Let's say we focus on the most common major surgical procedure, laparoscopic cholecystectomy. Ten surgeon-raters would have to view at least 15 to 20 minutes of video for each of the 30,000 board-certified general surgeons. How long would that take? Who would collect and edit all the videos? Who would make sure that the ratings were consistent? Who would collate and distribute the results? How would follow-up be done? Who would pay for all of this?
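The scale of the problem is easy to quantify. A back-of-the-envelope sketch using the figures above; the 17.5-minute value is simply the midpoint of the 15-to-20-minute range, and the 5-hours-per-rater-per-year figure is a hypothetical illustration:

```python
# Back-of-the-envelope estimate of the rating workload described above.
surgeons = 30_000          # board-certified general surgeons (ABS figure)
raters_per_video = 10      # peer raters per submitted video
minutes_per_video = 17.5   # midpoint of the 15-20 minute edited clips

total_minutes = surgeons * raters_per_video * minutes_per_video
total_hours = total_minutes / 60
print(f"{total_hours:,.0f} rater-hours of viewing")

# If each volunteer rater could spare, say, 5 hours per year, you would
# need this many rater-years of effort for a single procedure:
print(f"{total_hours / 5:,.0f} rater-years at 5 hours per rater per year")
```

Roughly 87,500 hours of expert viewing for one operation alone, before any editing, collation, or follow-up.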

And that is just for the board-certified general surgeons. What about the general surgeons who are not board-certified and all the other surgical specialists? Maybe gastroenterologists should have their endoscopy procedures scrutinized. Maybe primary care docs should have selected office visits recorded too.

This is similar to the enthusiasm which surrounded the concept of using retired surgeons to coach other surgeons. The idea was based on the experience of one surgeon, who had access to an expert coach and wrote about it. I blogged about the logistical difficulties that would preclude coaching from becoming widespread. To my knowledge in the two years since I wrote that post, coaching has not caught on as a performance improvement measure.

It's too bad, because in an ideal world, video evaluation of operative procedures and coaching would be great. Unfortunately, we don't live in an ideal world.


Thursday, October 10, 2013

Reviewing three studies that question dogma



I like studies that question accepted practices. I also like to question studies that question accepted practices. [See this post about discrediting discredited practices.]

Here are three new studies with surprising and thought-provoking results.

A few years ago, the idea of rapid response teams surfaced. These teams were supposed to be called when patients on regular floors became unstable. It was thought that such teams would be able to intervene more rapidly than simply paging the patient's physician.

Every hospital established rapid response teams, and early studies tended to confirm that they were efficacious. So all is well.

But a paper from the journal Critical Care Medicine shows that rapid response teams increase costs and intensive care unit admissions without showing any improvement in risk-adjusted patient outcomes.

Naysayers will complain that it wasn't a randomized prospective double-blind study. But it was a large before-and-after cohort study from a respected institution, the Mayo Clinic.

The authors concluded that hospitals should at least evaluate their own experiences with rapid response teams.

Another study, this time in JAMA, questions the validity of using rates of venous thromboembolic events as markers of hospital quality.

It seems the more diligently one looks for VTEs, the more one finds them. Hospitals that did more imaging studies looking for VTEs had significantly higher rates of VTE. They also had significantly higher rates of adherence to prophylaxis guidelines.

So if a patient was looking for a hospital with high quality care in the area of venous thromboembolic events, the rate of VTE might be very misleading.
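The surveillance-bias effect behind this finding is easy to simulate: give two hypothetical hospitals the same true VTE rate but different imaging intensity, and the hospital that looks harder will report more events. A sketch with made-up illustrative rates, not numbers from the JAMA paper:

```python
import random

random.seed(42)

def observed_vte_rate(n_patients, true_rate, imaging_rate):
    """A VTE is only counted if it occurs AND the hospital images for it."""
    detected = 0
    for _ in range(n_patients):
        has_vte = random.random() < true_rate
        imaged = random.random() < imaging_rate
        if has_vte and imaged:
            detected += 1
    return detected / n_patients

# Two hypothetical hospitals with the SAME true VTE rate (2%); they
# differ only in how often they image patients to look for VTE.
low_imaging = observed_vte_rate(100_000, true_rate=0.02, imaging_rate=0.3)
high_imaging = observed_vte_rate(100_000, true_rate=0.02, imaging_rate=0.8)
print(f"low-imaging hospital:  {low_imaging:.4f}")
print(f"high-imaging hospital: {high_imaging:.4f}")
# The high-imaging hospital "finds" far more VTEs despite identical care.
```

The reported rates differ by more than a factor of two even though the underlying quality of care is identical, which is exactly why the raw VTE rate misleads.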

A third study, also from JAMA, looked at the use of universal precautions for all ICU patients in an effort to decrease the incidence of colonization or infection by antibiotic-resistant organisms.

This was a randomized trial in 20 American ICUs: in 10, health care workers donned gowns and gloves for all patient contact; in the other 10, gown and glove use was required only for patients with established MRSA or VRE colonization or infection. Over 26,000 patients were included.

Although the acquisition of MRSA or VRE declined from baseline in both groups, the difference was not statistically significant. [Digression. This may have been due to the famous "Hawthorne Effect," which is that behavior improves when subjects are aware that they are being watched.]

When only MRSA was looked at, a barely significant difference in acquisition was noted for the ICUs in which all personnel took precautions for all patients.

Other interesting findings were that personnel in the ICUs requiring gowns and gloves for all patients entered patient rooms significantly less frequently. The rate of adverse events did not differ between the two groups.

To review.

Rapid response teams may not be as useful as once thought. They may lead to increased costs and ICU admissions.

Hospitals with higher rates of VTE may actually be better quality hospitals than those with lower rates.

Observing gown and glove precautions for all ICU patients does not appear to affect the rate of acquisition of antibiotic-resistant organisms.

Wednesday, May 8, 2013

More problems with patient satisfaction surveys



Here are some updates on the patient satisfaction front.

A paper in last month's JAMA Surgery journal noted that patient satisfaction ratings have very little to do with the quality of care provided by a hospital.

The study analyzed data from 31 hospitals that participated in patient satisfaction surveys, the CMS Surgical Care Improvement Project (SCIP), and employee safety attitudes questionnaires.

They found that patient satisfaction did not correlate at all with the rates of hospital compliance with SCIP process measures or the opinions of employees about the culture of the institution for half of the categories questioned.

They concluded that "patient satisfaction may provide information about a hospital's ability to provide good service as a part of the patient experience; however, further study is needed before it is applied widely to surgeons as a quality indicator."

What about patient satisfaction and the quality of medical care provided by doctors? 

This is only an anecdote but it does say volumes about the subject.

A New York area cardiologist, Dr. Katz, admitted to defrauding government and private insurers of $19 million. This was described as the largest healthcare scam by a single physician ever recorded in New York or New Jersey.

Thousands of patients underwent unnecessary and possibly dangerous tests and treatments. He also employed unlicensed and unqualified personnel who treated patients.

As noted by Dan Diamond, managing editor of the Daily Briefing, the Healthgrades patient satisfaction scores for Dr. Katz all ranged from very good to excellent.

In fact, Dr. Katz has received not one, not two, but three Healthgrades Quality Awards, which are still in evidence on their website. I guess $19 million worth of fraud is not enough to impact one's Healthgrades ratings.

Although this next vignette is about customer satisfaction and has nothing to do with patients, it too illustrates the folly of basing one's opinion on satisfaction scores alone.

According to the Consumerist blog, a subsidiary of the magazine Consumer Reports, certain well-known companies have based employee pay raises and promotions on the results of customer satisfaction surveys.

Apparently, the companies considered anything less than a perfect "5" rating a failure. This resulted in employees telling patrons either to give them a "5" rating or, if they could not, to decline to take the survey.

I have seen this phenomenon in hospitals too. Staff were coached about what to say to patients to help persuade them to give higher scores. 

I think it's called "gaming the system."

For lots more on the subject, type "patient satisfaction" in my blog's search field (upper right corner).

ADDENDUM 5/9/2013

A friend emailed me this comment: "When I take my car to the dealer for service, they tell me they will be sending me a survey in the mail. Then they tell me that if I can't give them all '5's, I shouldn't fill out the survey; instead I should call them and speak to the manager so they can do better next time."


Wednesday, March 6, 2013

New rules for paying MDs proposed by hospital system in NY


New York City’s Health and Hospitals Corporation (HHC), which runs 11 hospitals in four of the five boroughs of New York, is negotiating a new deal with the union representing some 3,300 salaried physicians. The corporation wants to base MD pay raises on 13 quality indicators.

The New York Times article that broke the story does not list all of the indicators but mentioned the following: how well patients say their doctors communicated with them, rates of readmission within 30 days after discharge for heart failure and pneumonia, how quickly emergency department patients go from triage to beds, whether doctors get to the operating room on time and how quickly patients are discharged.

The union has countered with suggestions that more indicators be used such as “going to community meetings, giving lectures, getting training during work hours, screening patients for obesity, and counseling them to stop smoking.” And they may ask that more doctors and support staff be hired.

As is typical of the doctors' union, they had problems with the plan. They already get paid for giving lectures and training during work hours. Aren't screening patients for obesity and counseling them to stop smoking considered part of a physician's normal work? I do agree that doctors should receive combat or hardship pay for attending community meetings.

Another feature of the plan, which was glossed over in most secondary reports, is that the bonuses “would be given to physicians as a group at each hospital, rather than as individuals, so that even the worst doctor would benefit.” (More on this below)

The Times piece quotes officials from both sides and outside experts who offered opinions ranging from it’s a wonderful new world order to it will never work.

I tried to obtain a list of all 13 performance indicators, but it is nowhere to be found. However, looking at the ones in the Times article may be enough.

Patient assessments of how well their doctors communicated with them are going to be confounded by the fact that there are no private patients and few one-to-one doctor-patient relationships in the HHC system. Add in layers of medical students, physician assistants, residents, and fellows, combined with a patient population that, in many cases, faces a language barrier and may not even know who their doctors are, and it will be difficult to tell just who is a poor communicator.

I have discussed rates of readmission within 30 days after discharge for heart failure and pneumonia in a previous blog. This is a very poor indicator of quality and depends greatly on patient compliance with medications and instructions such as diet and activity.

How quickly emergency department (ED) patients go from triage to beds is a function of the census in the ED. This depends on many variables the MDs can’t control, such as availability of inpatient floor and ICU beds, nurse staffing, promptness in room cleaning, and many other factors.

Whether doctors get to the operating room on time is an interesting issue. As a former chairman of surgery, I have tackled this one in three different hospitals without success. First of all, what does this have to do with quality? Secondly, I truly believe that it will never be solved.

How quickly patients are discharged: Does this mean the time from admission to discharge, or is it the time from when the decision to discharge a patient is made until he actually leaves? If it’s the latter, again there are many forces at work. Does the patient want to go home? Can he get a ride? Is the bed ready at the nursing home or rehab center? If he’s being transferred by ambulette, will it arrive promptly? Is the nurse too busy to do the paperwork? Is the doctor, who may be a resident, too busy to do the paperwork?

The fact that bonuses will be tied to group, not individual, performance dooms the plan to failure. It reminds me of high school when someone threw a spitball and the teacher made everyone stay after school.

What do you think?