Wednesday, February 9, 2011

Hospital and Doctor Ratings: Junk Science? No, No Science at All

Pop statistician/philosopher Malcolm Gladwell takes down the ridiculous college rating process in this week's New Yorker magazine. While pointing out the many problems with the way U.S. News goes about ranking colleges, he mentions a 2010 study published in the Annals of Internal Medicine that similarly debunks the rankings of hospitals. That study found that reputation alone is the key to a hospital receiving a high ranking and that the ranking has nothing to do with quality. This is true of the college ranking system too. A key point regarding the power of reputation is that as the rankings are publicized every year, the top hospitals and colleges become even more prestigious, which, of course, enhances their reputations further.

The college ranking methodology takes a number of factors into account and weights them arbitrarily. Gladwell feels that cost is not given enough emphasis, as expensive private universities dominate the top of the list. When it comes to hospital rankings, U.S. News surveys only 250 physicians in each specialty and asks them to name the five best hospitals in their field. It is impossible for a hospital without an established national reputation ever to be ranked highly.
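To make the arithmetic concrete, here is a toy sketch in Python of how a weighted composite score works. The hospital names, weights, and scores below are entirely made up; this is not the actual U.S. News formula, just an illustration of how a heavily weighted reputation factor ends up dictating the final order.

    # Toy illustration of a weighted composite ranking.
    # All names, weights, and scores are hypothetical.
    hospitals = {
        "Hospital A": {"reputation": 9.5, "outcomes": 6.0, "structure": 7.0},
        "Hospital B": {"reputation": 4.0, "outcomes": 9.0, "structure": 9.0},
        "Hospital C": {"reputation": 7.5, "outcomes": 7.5, "structure": 8.0},
    }
    # Hypothetical weights; reputation is deliberately dominant.
    weights = {"reputation": 0.65, "outcomes": 0.20, "structure": 0.15}

    def composite(scores):
        # Weighted sum of the three factors.
        return sum(weights[k] * scores[k] for k in weights)

    ranked = sorted(hospitals, key=lambda h: composite(hospitals[h]), reverse=True)
    for rank, name in enumerate(ranked, 1):
        print(rank, name, round(composite(hospitals[name]), 2))

The composite order comes out A, C, B, exactly mirroring the reputation scores, even though Hospital B has the best outcomes and structure numbers.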

This whole charade is carried to an almost comical extreme by the folks at Castle Connolly, who bring you "America's Top Doctors" and regional offshoots of the same concept. These ratings are eagerly awaited every year and are the subject of lengthy articles in magazines. Advertising revenue is generated as hospitals tout their MDs who have been fortunate enough to make the Castle Connolly list.

Here is a little secret. Castle Connolly sends questionnaires to hospital department chairs and asks them to name the best doctors not only in their own fields but in every medical specialty. No other criteria are used. Having been a department chairman for over 23 years, I can tell you that it is impossible for me to know anything about the quality of the work of any physician at another hospital, or sometimes even in another specialty at my own hospital. The vote is strictly by reputation. Many fine doctors make the list because reputations are very often correct. But not always. A surgeon can be well known for research or involvement in organizations, but she may not necessarily be the best clinician around.

So what is a prospective patient to do? I’ve already blogged about the shortcomings of Healthgrades [here and here] and the CMS [Medicare] Physician Compare website is not ready for prime time. For now, I suggest you ask friends who may have had illnesses similar to yours for recommendations. Or you’ll have to trust the doctor who refers you to a specialist and your instincts when you meet her.

7 comments:

busysynch mac said...

I risk appearing to be an old frump, but I mourn the passing of the common gender. I realize that a physician today is perforce more likely to be a female in the US, at least during daylight hours. Still, the pharma commercial commonplace, "I asked my doctor, and she said...", strikes me as closely related to the he/she, his/her of the sixties.

Oh well. Never mind. This rant is just for you. No use kicking the hornets' nest.

Skeptical Scalpel said...

I feel your pain and love the accurate phrase "...at least during daylight hours." I used the pronoun "her" on purpose just to tweak my readers. Unfortunately, "them" is just wrong and there is no singular pronoun that is gender-neutral. I hate "his/her."

Vickie@Demand_Euphoria said...

I encourage you both to look at section 4.2.5 in the following Wikipedia entry: http://en.wikipedia.org/wiki/Gender-neutral_pronoun

It seems you actually have many options! Which one is your favorite?

Anonymous said...

This is a follow-up to the Annals of Internal Medicine study. If reputation score in USNWR drives the overall ranking, does it at least reflect thought leadership within a field? Not so much: http://www.europeanurology.com/article/S0302-2838(11)01131-6/fulltext

Skeptical Scalpel said...

Anon, thanks for the interesting link. It's a novel way to rate a program. But there's more to it than just academics, isn't there?

Anonymous said...

SS, indeed.

The effort was motivated by reviewing the annual rankings of urology departments by U.S. News and World Report (USNWR) magazine. As you know, the rankings are legitimized by heavy hospital advertising, but, as you note in your piece above, the methodology used for generating the lists is questionable at best. In fact, of the three "quality domains" the magazine uses to generate hospital rankings – structure, process, and outcomes – the process domain has had a dominating effect on the final rankings.

This process domain overwhelmingly measures the hospital's reputation score [very nicely shown in the Sehgal Ann Intern Med 2010 manuscript you mention]. As such, the rankings in U.S. News & World Report essentially reflect a hospital's supposed national reputation -- a characteristic that is only further reinforced by the magazine's rankings. Indeed, a nearly perfect correlation exists between the reputation score and the final hospital ranking for all specialties [Sehgal 2010]. For example, in urology, 100% of the top 10 hospitals and 95% of the top 20 are ordered identically by reputation score and overall ranking.
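To illustrate, here is a toy Spearman rank-correlation computation in Python. The numbers are made up for the sketch; they are not the actual USNWR or Sehgal data.

    # Toy Spearman correlation between a reputation score and a final ranking.
    # Hypothetical data: five hospitals, highest reputation score listed first.
    reputation = [32.1, 28.4, 19.7, 12.3, 8.8]  # hypothetical reputation scores
    final_rank = [1, 2, 3, 4, 5]                # hypothetical published ranks

    def to_ranks(values):
        # Rank 1 goes to the largest value.
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
        ranks = [0] * len(values)
        for position, i in enumerate(order, 1):
            ranks[i] = position
        return ranks

    rep_ranks = to_ranks(reputation)
    n = len(reputation)
    d2 = sum((a - b) ** 2 for a, b in zip(rep_ranks, final_rank))
    rho = 1 - 6 * d2 / (n * (n * n - 1))
    print(rho)  # 1.0 when the two orderings are identical

When the two orderings match exactly, as in this toy case, the coefficient is 1.0; the values Sehgal reported were close to that for every specialty.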

Since the reputation score has been the linchpin of the U.S. News and World Report rankings, one would assume that rigorous methodology underlies this metric. Yet, when one drills down into the details of how the reputation score is obtained, it becomes apparent that the measure is entirely subjective and largely a reflection of historical biases. In fact, the metric is derived by asking 250 physicians to list the five "best" hospitals, with only 40% to 50% of those surveyed responding.

By contrast, a department's recent academic contribution reflects the thought leadership of its existing faculty and thus arguably measures that department's prominence better than a subjective, historical reputation score does.

The way I see it, we in the medical community can either accept the assessments of these third-party rankings or attempt to develop new metrics that are objective, reproducible, current, and free of subjective bias. The manuscript we put together is just one step in that direction.

Clearly, thought leadership is not a perfect surrogate for clinical excellence, and the proposed metric should be a component of a larger, more comprehensive ranking system that incorporates other objective measures such as standardized reporting of specific patient outcomes and morbidity, adherence to practice guidelines and clinical pathways, process performance, and peer-reviewed departmental funding (just to name a few).

Enjoy your blogs/tweets very much.

Best,

@uretericbud

Skeptical Scalpel said...

@uretericbud, thanks for the interesting comment. I agree there has to be a better way to rate hospitals and doctors. Unfortunately, I am not sure how that can be done to everyone's satisfaction.
