
Wednesday, August 12, 2015

Why in-hospital deaths are not a good quality measure

You may be tired of hearing about the Surgeon Scorecard—the surgeon rating system that was recently released by an organization called ProPublica. Like many others, I have pointed out some flaws in it. You can read my previous posts here and here.

I had decided to stop commenting about it because enough is enough, but a recent paper in the BMJ raises a question about one of the criteria ProPublica used to formulate its ratings.

ProPublica defined complications as 1) any patient readmission within 30 days and 2) "any patient deaths during the initial surgical stay."

The authors of the BMJ paper randomly selected 100 records of patients who died at each of 34 hospitals in the United Kingdom. The 3400 records were reviewed by experts to determine whether a death could have been avoided if the quality of care had been better.

Deaths judged at least 50% likely to have been avoidable were identified in 123 records, or 3.6% of the 3400 reviewed.

There was a very weak association between the number of preventable deaths and the overall number of deaths occurring at each hospital. By two measures of overall hospital deaths, the hospital standardized mortality ratio and the summary hospital-level mortality indicator, the correlation coefficient between avoidable deaths and all deaths was 0.3, which was not statistically significant.
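
To see why a correlation of 0.3 across only 34 hospitals fails to reach significance, here is a minimal sketch (my own illustration, not the authors' analysis) applying the standard t-test for a Pearson correlation coefficient:

```python
# Illustration only: the paper reports r ~= 0.3 across n = 34 trusts.
# This is not the authors' code; it simply applies the usual t-test
# for a Pearson correlation coefficient.
from math import sqrt
from scipy import stats

n = 34   # number of trusts (hospitals) reviewed
r = 0.3  # reported correlation between avoidable deaths and all deaths

t = r * sqrt(n - 2) / sqrt(1 - r ** 2)   # t-statistic with n-2 degrees of freedom
p = 2 * stats.t.sf(abs(t), df=n - 2)     # two-sided p-value

print(f"t = {t:.2f}, p = {p:.3f}")       # roughly t = 1.78, p = 0.08
```

With a two-sided p-value of roughly 0.08, the association falls short of the conventional 0.05 threshold, consistent with the paper's conclusion.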

From the paper: "The absence of even a moderately strong association is a reflection of the small proportion of deaths (3.6%) judged likely to be avoidable and of the relatively small variation in avoidable death proportions between trusts [hospitals]. This confirms what others have demonstrated theoretically—that is, no matter how large the study the signal (avoidable deaths) to noise (all deaths) ratio means that detection of significant differences between trusts is unlikely."

The Surgeon Scorecard was derived from administrative data. No individual analysis of patient deaths was undertaken. According to a ProPublica article discussing some key questions about their methodology, "As for deaths, we took a conservative approach and only included those that occurred in the hospital within the initial stay."

Maybe that wasn't such a conservative approach after all.

And maybe we need to rethink that 2013 paper claiming that medical error caused up to 440,000 deaths per year.

Wednesday, August 5, 2015

Some venous thromboembolic events can’t be prevented even with optimal care

I have written several posts about how I get things right before others see the light, but none better than one from three years ago. In it, I pointed out that some of the Centers for Medicare and Medicaid Services (CMS) "never events" cannot be completely prevented and therefore should not be considered "never events."

One specific "never event" I questioned was hospital-acquired venous thromboembolic (VTE) disease, which encompasses deep venous thrombosis (DVT) and/or pulmonary embolism (PE). I wrote, "I am unaware of any DVT study in which no patients in the experimental arm developed DVTs or PEs. Patients will develop DVT or PE even with the best evidence-based care."

Along comes a brief research letter published last month in JAMA Surgery by a group from Johns Hopkins led by surgeon Elliott R. Haut.

Of 92 patients in their institution who had VTEs in a single year, 43 (47%) had received defect-free care. That is, each of those patients received all doses of risk-appropriate pharmacological prophylaxis ordered for the entire hospitalization.

To put it another way, VTEs for those 43 patients were not preventable. There would be no way to do a quality improvement project for a group of patients who received the right prophylaxis throughout their hospital stays and still got VTEs.

The Joint Commission/CMS criterion states that a hospital is in compliance with VTE prophylaxis if a patient receives one dose of an appropriate drug within 24 hours of admission. The Hopkins study showed that of the 49 patients (53%) whose care was suboptimal, 36 (73%) missed at least one dose of prophylaxis that was correctly ordered. Other studies have shown that missing even one dose of prophylaxis at any time during a hospitalization increases the risk of VTE.

So about half of VTEs are not preventable even with perfect adherence to the prophylaxis protocol, and the standard for compliance established by the JC/CMS is inadequate to judge the quality of an institution's performance for VTE prevention.
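
For anyone who wants to check the arithmetic behind those proportions, here is a minimal sketch using the figures quoted above (the calculation is mine, not part of the published letter):

```python
# Figures from the Hopkins letter as quoted above; the check is mine.
total_vte = 92                         # patients with VTE in one year
defect_free = 43                       # received every ordered dose of prophylaxis
suboptimal = total_vte - defect_free   # 49 patients whose care was suboptimal
missed_dose = 36                       # of those, missed at least one ordered dose

print(f"Defect-free care: {defect_free / total_vte:.0%}")                  # ~47%
print(f"Suboptimal care: {suboptimal / total_vte:.0%}")                    # ~53%
print(f"Missed a dose among suboptimal: {missed_dose / suboptimal:.0%}")   # ~73%
```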

The study shows that 1) a lot of good information can be delivered in a two-page paper, 2) JC/CMS criteria for compliance with VTE prophylaxis need to be revisited, and 3) VTE should be removed from the list of "never events."

Monday, August 3, 2015

A high school student has questions about a medical career and pathology vs. surgery

A female high school student asks about pathology, surgery, and medicine in general. [Email edited for length.] See if you agree with my answers.

The field I am most interested in is pathology. I have a very logical mind and would enjoy being able to solve the complex puzzle of disease. I would also like the somewhat flexible hours compared to other more intensive specialties. However, I do have some qualms.

I'm also interested in general surgery. I would love to learn how to perform all the different types of surgeries that surgeons perform. If I were to be a pathologist, would it be "knife-free"? Pathology really intrigues me, but participating in the occasional surgery sounds like it would be extremely interesting and full of learning opportunities.


There is some knife wielding in pathology. Specimens must be properly cut, and there is the occasional autopsy. However, it's definitely not surgery.

What does a pathologist really do? I've looked at various descriptions online, and none of them seem to be very specific. What would a typical day look like for a pathology resident? I was also wondering what types of skills pathologists are taught?

Friday, July 31, 2015

So you got into medical school… Now what?

"So you got into medical school… Now what?" is a book written by Dr. Daniel R. Paull, a recent med school graduate. His aim was to inform newly matriculating medical students about what to expect and how to survive. For the most part, he succeeds.

The first four chapters are a bit on the dry side because Dr. Paull tries to simplify such complex things as how to live with anxiety in the first two years of medical school. He also spends a bit too much time on how to study. I agree with him that studying in medical school differs from studying in college, and that sticking to a schedule is a sensible way to organize time. However, I think that most people will figure out what works best for them on their own.

The book picks up steam starting with Chapter 5 on how to prepare for USMLE Step 1. I get a lot of questions about the USMLE, and having no recent experience with it myself, I sometimes find them difficult to answer. Dr. Paull takes care of that quite nicely.

The remaining chapters offer plenty of practical advice on transitioning to the clinical years, clerkships and how to arrange them, studying for the two parts of USMLE Step 2, the fourth year of medical school, and finally how to arrange and succeed in the all-important residency interview process.

Regarding clerkships, Dr. Paull wisely recommends that students ask their residents and attendings for feedback during the rotation instead of waiting until the end to find out that their performance was not up to par. He gives some specifics, like asking for feedback about H&Ps and presentations and how to improve on them.

The pros and cons of away rotations are discussed in some detail and should help any student who is conflicted about whether to do one or not.

He explains how the National Resident Matching Program works and offers some hints about ranking programs which echo similar comments I have made on this blog.

The book is in trade paperback format and inexpensive at a list price of $19.95. It's also available in a Kindle edition.

My only other criticism of the book is that Dr. Paull relies a little too heavily on the device of an alarm clock going off, or about to go off, to introduce each challenge he is trying to help students handle.

Why should we believe anything Dr. Paull says? Well, he has a bachelor of science degree in physics from New York University, graduated from the University of Miami School of Medicine, and is currently an orthopedic resident at the University of Toledo in Ohio. In case you hadn't heard, orthopedic residencies are highly competitive.

Also, I have read the book myself and think most med students will find value in it.

Disclosure: I received a complimentary copy of the book from the author.

Tuesday, July 28, 2015

Is do-it-yourself surgery the future of medicine?


Once in a while, I read something on the Internet that is so silly, so outrageous that I can't help myself. I must speak up.

Such a situation occurred a few days ago when I came across an article called "DIY [do it yourself] Surgery: The Future of Medicine?" on a website called FastCompany.

An "interaction designer" named Frank Kolkman has created a robotic Open Surgery Machine which he proposes could fill in need when "middle-class" US citizens who have no access to healthcare require surgery.

My favorite line from the article is an explanation of what Mr. Kolkman's robot can do. "It's designed to perform simple surgeries like laparoscopic surgery in which three or more small keyhole incisions are made to allow a surgeon to operate inside a part of the patient's body after inflating it with CO2."

He proposes that "appendectomies, prostate operations, hysterectomies, and also colon and general inspections" could be done.

Friday, July 24, 2015

The Surgeon Scorecard: My analysis

I've got nothing against ProPublica. If a valid way to rate surgeons is ever discovered, I would support it completely. However, ProPublica's Surgeon Scorecard is not the answer.

I keep hearing its defenders say, "Some data is better than no data at all." I disagree strongly with that. To me, bad data is worse than no data at all. People with much more statistical sophistication than mine have pointed out the flaws in the scorecard.

Digression: Having written many posts about statistics, I can tell you that the mere mention of the word drives readers away about as fast as if you were to yell "Fire" in a crowded theater.

I want to focus on a different area. The scorecard has created a lot of chatter on Twitter, and just about everyone I know has blogged about it.

This reminds me of a couple of posts I wrote back in 2011. [Links here and here.] I pointed out that Twitter might not be as important as those of us who use it think it is.

While we were busy arguing about the merits of the scorecard on Twitter, I'm not so sure the general public was paying much attention.

For example, ProPublica says the Surgeon Scorecard has had over 1 million visitors since its launch. That sounds like a lot until you consider that the current population of the United States is estimated at 321 million, so 1 million visitors amounts to about 0.3% of the population. We also do not know how many of those 1 million were unique visitors. It could be that many of them were doctors looking for their own statistics and bloggers looking for ideas.

That the public may not care was reinforced by a rather tepid response to the ProPublica AMA (Ask Me Anything) on Reddit today.

By 1:00 PM EDT, which was two hours into the AMA, there were 80 comments, 31 of which were by ProPublica staff or the spine surgeon who had consulted on the scorecard's methods.

Just to give you some perspective, an AMA last year by a guy with two penises drew 17,134 comments.

Because its demographic is skewed toward younger people, Reddit may not have been the right venue. Although Reddit boasts 169 million unique visitors per month, the most recent figures show that 33% of its users are men between 18 and 49 years old. Those under 18 are not counted but represent "a substantial percentage of Reddit users."

My two favorite questions asked of ProPublica were "How can I tell if my doctor is capable of making an error?" and "Do you fix the leg which is broken completely?" [Did the question refer to a leg that was completely broken, or did it mean should the leg be completely fixed?]

What have we learned here? It's hard to say.

If you want to read a measured critique of the scorecard, go to Dr. John Mandrola's piece on Medscape.

Thursday, July 23, 2015

Take lecture notes on a laptop computer or use old-fashioned longhand?

A paper published last year in Psychological Science suggests that taking notes in longhand is the better choice.

The authors, from Princeton University and UCLA, performed three studies on college students. They found that even if multitasking and distractions were eliminated, “students who took notes on laptops performed worse on conceptual questions than students who took notes longhand.”

Previous research has shown that note taking enhances learning by both providing external storage of information for later review and “encoding,” that is, processing information and reframing it in one’s own words.

Although laptop users took significantly more notes, they tended to behave like transcriptionists instead of thinking about and summarizing what they heard.