Despite heated debate about the pros and cons of online physician ratings, very little systematic work examines the correlation between physicians’ online ratings and their actual medical performance. Using patients’ ratings of physicians at RateMDs website and the Florida Hospital Discharge data, we investigate whether online ratings reflect physicians’ medical skill by means of a two-stage model that takes into account patients’ ratings-based selection of cardiac surgeons. Estimation results suggest that five-star surgeons perform significantly better and are more likely to be selected by sicker patients than lower-rated surgeons. Our findings suggest that we can trust online physician reviews, at least of cardiac surgeons.
You won't be surprised to learn that I don't believe it. As is my custom, I decided to read the entire paper, the full text of which can be found here. At 37 pages, the raw manuscript is rather lengthy. As a public service, I waded through it.
The authors, non-MD faculty from the William E. Simon Graduate School of Business Administration at the University of Rochester, in New York, combed the ratings for Florida cardiac surgeons on the website RateMDs.com and classified surgeons into three categories—five-star surgeons, non-five-star surgeons, and those with no ratings at all.
They looked at 799 quarterly opportunities for ratings over a 9-year period and found that only 21% of surgeons had any online ratings, averaging 1.9 ratings each. The 79% of surgeons without an online rating performed 79% of the total surgeries in 2012, the year the authors analyzed for patient outcomes.
The five-star surgeons had a mean of 1.8 reviews each, and only 10% had more than 2 reviews.
The average mortality rate for coronary artery bypass grafting (CABG) among the Florida cardiac surgeons was 1.8% in 2012. The five-star surgeons with multiple reviews had the highest mortality rates at 3.3%.
I could find no evidence that patient mortality rates were adjusted for risk. But a lot of statistical manipulation took place, all supposedly explained by a simple equation, one of many in the paper.
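Why does risk adjustment matter so much here? A common approach is indirect standardization: compare a surgeon's observed deaths to the deaths expected from that surgeon's case mix, then scale by the overall rate. The sketch below uses entirely made-up numbers (the paper provides no patient-level data) to show how a surgeon with a higher raw mortality rate can look better once the sickness of his or her patients is accounted for.

```python
# Hypothetical illustration of observed/expected (O/E) risk adjustment.
# All patient data below is invented for illustration only.

def risk_adjusted_rate(outcomes, expected_risks, overall_rate):
    """Indirectly standardized mortality: (observed / expected) * overall rate."""
    observed = sum(outcomes)        # deaths that actually occurred
    expected = sum(expected_risks)  # deaths predicted from the case mix
    return (observed / expected) * overall_rate

# Surgeon A: 100 low-risk patients, 2 deaths (raw rate 2.0%)
a_outcomes = [1] * 2 + [0] * 98
a_risks = [0.01] * 100              # each patient ~1% predicted risk

# Surgeon B: 100 high-risk patients, 3 deaths (raw rate 3.0%)
b_outcomes = [1] * 3 + [0] * 97
b_risks = [0.05] * 100              # each patient ~5% predicted risk

overall = 0.018                     # e.g., the 1.8% statewide CABG mortality

print(risk_adjusted_rate(a_outcomes, a_risks, overall))  # ~0.036, i.e., 3.6%
print(risk_adjusted_rate(b_outcomes, b_risks, overall))  # ~0.011, i.e., 1.1%
```

Surgeon B's raw rate is worse, but after adjustment B looks better than A, because B operated on far sicker patients. Without some version of this step, the 3.3% figure for five-star surgeons tells us nothing about skill.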
And this, "Patients with private insurance are less likely to select the surgeons without ratings than patients with Medicare. We suspect that patients with private insurance have to use search engines to figure out whether a surgeon is within the network that an insurance plan covers, while government patients enjoy a large physician network." I question that assumption. My experience is that patients with Medicare sometimes have problems finding anyone to care for them, let alone the best surgeons.
It turns out that half of the five-star surgeons had only one review. In one iteration of the study model, five-star surgeons with multiple reviews had higher mortality rates than those with only one review. Yet the authors also write, "One surprising finding is that five-star surgeons with a single review show no statistical difference in performance from those with multiple reviews."
Are you as confused as I?
The paper makes no mention of the possibility that some of the online ratings could be fake. Recent articles [here and here] suggest that one-fifth to one-third of such reviews are phony.
You can manipulate the statistics all you want, but you won't convince me that one or two or even 20 online ratings are valid or useful in choosing a surgeon.