According to the site, "Any hospital or consultant [attending surgeon in the UK] identified as an outlier will be investigated and action taken to improve data quality and/or patient care."
After cardiac surgery outcomes data were made public in New York, some interesting and unexpected consequences were noted.
Surgeons and hospitals resorted to "gaming the system" by declining to operate on high-risk patients and by tinkering with patient charts to make those they did operate on seem sicker. The latter can be done by scouring the charts for every co-morbidity and making sure none is overlooked when the case is coded. An article from New York Magazine explains it in more detail.
Interpreting outcomes data can be tricky.
In a post three years ago about a report that nine Maryland hospitals had higher-than-average complication rates, I pointed out that whenever you have averages, some hospitals are going to be worse than average unless all hospitals perform exactly the same way or, like medical students, are all above average.
A much more sophisticated way of looking at this subject appeared in a fascinating 2010 BBC News piece by Michael Blastland, who is the Nate Silver of England [or maybe Nate Silver is the Michael Blastland of the US], called "Can chance make you a killer?"
Blastland set up a statistical chance calculator for a hypothetical set of 100 hospitals or 100 surgeons performing 100 operations each. The model assumes that every patient has the same chance of dying and that every surgeon is equally competent. The standard is that a mortality rate more than 60% worse than the government-set norm is unacceptable for any hospital or surgeon.
You are assigned one hospital. Using a slider, you may choose an operative mortality rate anywhere from 1% to 15%. If you do this several times, recalculating at each mortality rate, you will notice that the number of unacceptably performing hospitals or surgeons changes randomly, and your hospital may land in the underperforming group by chance alone.
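Blastland's calculator is interactive, but the same point can be shown with a few lines of simulation. Here is a minimal sketch (Python, my own illustration, not the BBC's code; the function name and defaults are assumptions): every hospital gets the same true mortality rate, yet some are flagged as more than 60% worse than the norm purely by chance.

```python
import random

def simulate_flagged(true_rate: float, n_hospitals: int = 100,
                     n_operations: int = 100, threshold: float = 1.6) -> int:
    """Count how many equally competent hospitals are flagged as outliers."""
    flagged = 0
    for _ in range(n_hospitals):
        # Each of the 100 operations kills the patient with the same probability.
        deaths = sum(random.random() < true_rate for _ in range(n_operations))
        observed_rate = deaths / n_operations
        if observed_rate > threshold * true_rate:  # more than 60% worse than the norm
            flagged += 1
    return flagged

if __name__ == "__main__":
    random.seed(0)
    for rate_percent in (1, 5, 10, 15):
        flagged = simulate_flagged(rate_percent / 100)
        print(f"True mortality {rate_percent:2d}%: "
              f"{flagged} of 100 hospitals flagged by chance alone")
```

Run it a few times with different seeds and the number of "unacceptable" hospitals jumps around, even though every hospital in the model is identical.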
The whole concept is explained in more detail on the site. I encourage you to try it for yourself. The link is here.
So it may be difficult for the NHS to separate the true outliers from the unlucky surgeons who happened to fall outside the established norms.
What do you think about this?
2 comments:
Hmm, I agree entirely that quality is nigh-impossible to measure. But I recall that the Michigan Bariatric Surgery Video Review study showed clearly that there are differences between surgeons in surgical quality and probably some difference in measures of morbidity/mortality.
So can these measures give us any information, at all? In other words, will the signal (the true measure of surgical skill and quality) rise above the noise of random variation?
Regardless of the answer to that question, however, I think solid evidence exists that the best way to improve would be to keep this data private and let hospitals and doctors drive themselves to improve, as opposed to high-pressure incentives that have backfired so many times and resulted in more harm than good.
Respectfully,
Vamsi Aribindi
I agree that it's difficult to measure quality, and the Michigan bariatric study was a good step toward doing that, although it is very time-consuming and somewhat subjective.
Back in November, I discussed some of the difficulties associated with video review for all surgeons. Here's the link: http://skepticalscalpel.blogspot.com/2013/11/should-all-surgeons-have-video.html