Pages

Thursday, October 30, 2014

How to rank surgical residency programs

In September, Doximity, a closed online community of over 300,000 physicians, released its ratings of residency programs in nearly every specialty. Many, including me, took issue with the methodology. Emergency medicine societies met with Doximity's co-founder over the issue and echoed some of the comments I had made about the lack of objectivity and emphasis on reputation.

I wonder if it is even possible to develop a set of valid criteria to rate residency programs. Every one I can think of is open to question. Let's take a look at some of them.

Reputation is an unavoidable component in any rating system. Unfortunately, it is rarely based on personal knowledge of any program because there is no way for anyone not directly involved with a program to assess its quality. Reputation is built on history, but all programs have turnover of chairs and faculty. Just as in sports, maintaining a dynasty over many years is difficult. Deciding how much weight should be given to reputation is also problematic.

The schools that residents come from might be indicative of a program's quality, but university-based residencies tend to attract applicants from better medical schools. And who is to say which schools are the best anyway?

Faculty and resident research is easy to measure but may be irrelevant when trying to answer the question of which programs produce the best clinical surgeons. Since professors tend to move from place to place, the current faculty may not be around for the entire 5 years of a surgery resident's training.

The number of residents who obtain subspecialty fellowships, and where those fellowships are, might be a worthwhile measure, but it would penalize programs that attract candidates who may be exceptional but are happy to become mere general surgeons.

Resident case loads, including volume and breadth of experience, would be very useful. However, these numbers have to be self-reported by programs, and self-reported data are often unreliable. Here are some examples of why.

For several years, M.D. Anderson has been number one on the list of cancer hospitals compiled by US News. It turns out that for 7 of those years, the hospital was counting all patients admitted through its emergency department as transfers, thereby excluding them from its mortality figures. This excluded 40% of M.D. Anderson's admissions, many of whom were likely the sickest patients.

The number and types of cases done by residents in a program have always been self-reported. The Residency Review Committee for Surgery and The American Board of Surgery have no way of independently verifying the number of cases done by residents, the level of resident participation in any specific case, or whether the minimum numbers for certain complex cases have truly been met.

So where does that leave us?

I'm not sure. I am interested in hearing what you have to say about how residency programs can be ranked.

Friday, October 24, 2014

Please stop this: "There are more ___ than Ebola victims in the US"

I get it. Can we please stop comparing the number of Ebola victims in the United States to all sorts of irrelevant things? PS: It's not that funny either.

The following are directly copied from recent tweets. Links have been removed for your protection.


There are more Saudi Princes than Ebola victims

Kim Kardashian has had more husbands than Ebola victims in the US

More Americans have been dumped by Taylor Swift than have died from Ebola

Fun Fact: More #kids die annually due to #faith healing than #Ebola.

FACT: Katie Price has claimed more victims than Ebola.

NYC traffic. another thing that's much more dangerous than #Ebola, courtesy of @bobkolker via @intelligencer

There are more people in this tram than ebola victims in America.

I've lost more followers than US Ebola victims [I didn't tweet this or any of these other tweets.]

@lbftaylor fewer #ebola victims in US than drunk Palins in a #PalinBrawl.

@pbolt @robertjbennett Also, there are more ex-wives of Larry King than there are ebola victims int he US.

Rush Limbaugh has more ex-wives than USA has Ebola victims!

@xeni Menudo has had more members than 3x the number of American Ebola victims...

Put #ebola in the context of vaccination preventable dz: 118,000 children < 5 yrs old die from measles per year

@Tiffuhkneexoxo @LeeTRBL more dc team quarterbacks have played this year than there are US ebola victims

Rest assured, there will always be more American guns in Africa than Ebola victims. Everything is fine. Relax

As #Enterovirus spreads faster x country & kills more than #Ebola, sure victims' parents must b sad congress isn't demanding an ED68 czar.

We are all far more likely 2 be victims of identity theft than #Ebola. Obama has a plan to fix that

Americans spend more money on Halloween costumes for their pets than the UN spends on helping Ebola victims and fighting ISIS combined.

@mikebarnicle 9900 gunshot victims since Newtown, much scarier than Ebola.

So FYI... More people die from the #flu than #ebola .

Fear hospital infections not Ebola. 1 in 25 patients are infected. 75,000 die yearly.

Every day in America around 100 people lose their lives to mostly preventable car crashes. #Ebola

There are more experts on CNN right now talking about Ebola in America than people with ebola in America.

Wednesday, October 22, 2014

1 in 20 Americans are misdiagnosed every year

Really?

A paper published in April found that about 12 million Americans, or 5% of adults in this country, are misdiagnosed every year. The news exploded all over Twitter. Anxious reports from NBC News, CBS News, the Boston Globe, and other media outlets fanned the flames.

The paper involves a fair amount of extrapolation and estimation reminiscent of the "440,000 deaths per year caused by medical error" study from last year.

Data from the authors' prior published works involving 81,000 patients and 212,000 doctor visits yielded about 1600 records for analysis.

A misdiagnosis was determined by either an unplanned hospitalization (trigger 1) or a primary care physician revisit within 14 days of an index visit (trigger 2).

A quote from the paper [emphasis added]: For trigger 1, 141 errors were found in 674 visits reviewed, yielding an error rate of 20.9%. Extrapolating to all 1086 trigger 1 visits yielded an estimate of 227.2 errors. For trigger 2, 36 errors were found in 669 visits reviewed, yielding an error rate of 5.4%. Extrapolating to all 14,777 trigger 2 visits yielded an estimate of 795.2 errors. Finally, for the control visits, 13 errors were found in 614 visits reviewed, yielding an error rate of 2.1%. Extrapolating to all 193,810 control visits yielded an estimate of 4,103.5 errors. Thus, we estimated that 5126 errors would have occurred across the three groups. We then divided this figure by the number of unique primary care patients in the initial cohort (81,483) and arrived at an estimated error rate of 6.29%. Because approximately 80.5% of US adults seek outpatient care annually, the same rate when applied to all US adults gives an estimate of 5.06%.
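To appreciate how much of the headline number is extrapolation, here is the paper's arithmetic re-run as a quick Python sketch. The figures come straight from the passage quoted above; the code itself is mine, not the authors':

```python
# Re-running the paper's extrapolation from the figures quoted above.
# For each group: (errors found, charts reviewed, total visits in group)
groups = {
    "trigger 1": (141, 674, 1086),
    "trigger 2": (36, 669, 14777),
    "control": (13, 614, 193810),
}

total_errors = 0.0
for name, (errors, reviewed, total_visits) in groups.items():
    rate = errors / reviewed              # error rate in the reviewed sample
    extrapolated = rate * total_visits    # applied to the whole group
    total_errors += extrapolated
    print(f"{name}: rate {rate:.1%}, extrapolated errors {extrapolated:.1f}")

unique_patients = 81483
error_rate = total_errors / unique_patients
print(f"Estimated error rate: {error_rate:.2%}")              # ~6.29%

# About 80.5% of US adults seek outpatient care annually
print(f"Applied to all US adults: {error_rate * 0.805:.2%}")  # ~5.06%
```

Only a few hundred charts per group were actually reviewed; the leap from there to 12 million Americans is multiplication.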

Thursday, October 16, 2014

Lactated Ringer's and hyperkalemia: A blog post meriting academic credit

In a recent post, I suggested that physicians should receive academic recognition for certain social media activities. "Myth-busting: Lactated Ringers is safe in hyperkalemia, and is superior to NS," written by Dr. Josh Farkas (@PulmCrit), is a great example of why that is true.

Using only about 1250 words and 6 references, he explains that infusing lactated Ringer's not only causes no harm but is actually superior to normal saline in patients with hyperkalemia, metabolic acidosis, and renal failure.

I highly recommend reading the post, which should take you only a few minutes. If you're too lazy to do that, here's a summary.

Dr. Farkas found no evidence that lactated Ringer's causes or worsens hyperkalemia. In fact, he presents some solid evidence to the contrary.

If the serum potassium is 6 mEq/L, a liter of lactated Ringer's, which contains only 4 mEq/L of potassium, will actually lower the potassium level.
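The dilution arithmetic behind that claim is simple. Here's a toy calculation of my own (not Dr. Farkas's), assuming a hypothetical 14 L extracellular fluid volume and ignoring renal excretion and the intracellular shift discussed next:

```python
ecf_volume = 14.0   # assumed extracellular fluid volume in liters (illustrative)
serum_k = 6.0       # starting serum potassium, mEq/L
lr_volume = 1.0     # one liter of lactated Ringer's infused
lr_k = 4.0          # potassium content of lactated Ringer's, mEq/L

# Simple mixing: total potassium divided by total volume
new_k = (serum_k * ecf_volume + lr_k * lr_volume) / (ecf_volume + lr_volume)
print(f"{new_k:.2f} mEq/L")  # ~5.87 -- slightly lower than before the infusion
```

A fluid whose potassium concentration is below the serum level can only dilute the serum potassium; it cannot raise it.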

Because almost all of the body's potassium (~98%) is intracellular, infusing any fluid with a normal potassium content results in prompt redistribution of that potassium into the cells, negating its already negligible effect on the serum level.

A normal saline infusion is acidic and shifts potassium out of cells, raising the serum potassium level. Lactated Ringer's, which contains the equivalent of 28 mEq/L of bicarbonate, does not cause acidosis.

There's a lot more in the post. Read it.

This issue is arguably the most misunderstood fluid and electrolyte concept in all of medicine.

In my opinion, the post should be displayed on the bulletin boards of intensive care units, emergency departments, and inpatient floors of every hospital in the world and should be read by every resident or attending physician who writes orders for IV fluids.

Disclosure: I've never been a fan of normal saline. Two years ago I wrote a post discussing two papers showing that, because of its negative effects on renal function, normal saline was inferior to lactated Ringer's in critically ill patients.

Wednesday, October 15, 2014

Readmissions: Sometimes it's the patients

My Twitter friend Dan Diamond (@ddiamond) posted a picture of a slide that said a hospitalized patient was taught to inject insulin using an orange to practice on. When he was readmitted to the hospital with a very high blood sugar, it turned out that instead of injecting himself at home, the patient was injecting his insulin dose into an orange, and then eating it.

We've all heard stories about patients who took suppositories by mouth instead of the way they were intended.

Since doctors get blamed for just about everything, some would say that patients who take suppositories by mouth or eat an orange filled with insulin do so because they were not properly taught by their doctors (or nurses).

I have blogged before about the problem of who is at fault if patients do not follow up. Although I feel that much of the time it's the patient who decides not to return for follow-up, the prevailing sentiment, and possibly even the courts, seems to hold the physician responsible.

But how do you explain this? A study in Heart, a BMJ journal, found that of 208 hypertensive patients referred to a clinic for suboptimal blood pressure control, 52 (25%) were either completely or partially non-adherent [aka non-compliant] with their antihypertensive medications as determined by urine mass spectrometry.

The authors concluded that urine testing for medications or their metabolites would help doctors avoid ordering unnecessary investigations for patients whose blood pressures were not well-controlled.

The reasons for patient non-adherence were not mentioned. Could all 52 patients not have been told about the importance of taking their medications? I doubt it.

You might think the 15% who were partially non-adherent had occasionally forgotten to take the drugs, but it turns out that most of those in this group took adequate doses of most of their other prescribed medications. This suggests that they selectively omitted some doses of one or more drugs.

The only explanation I can fathom for the 10% who had no traces of any BP meds in their urine is that they just said "to hell with it" and didn't take their meds at all.

I know someone with type 2 diabetes who doesn't watch her weight or what she eats and doesn't check her blood sugars. She says, "You've got to die of something. I'd rather live my life the way I want to."

Is it that doctors and nurses aren't educating the patients, or are the patients at fault?

The answer to this question has important implications because of the newly established financial penalties for hospitals with high readmission rates.

Older methods that may improve adherence include tracking prescription refills and assigning pharmacists or nurses to explain medications to patients in detail.

Here's something that might help.

A recent meta-analysis showed that adherence to HIV/AIDS antiretroviral therapy was modestly improved when patients were sent reminders to take their medications by text message. Those who were more adherent had lower viral loads and better CD4 counts.

Of course, such an intervention assumes that patients have mobile phones or pagers capable of receiving texts, will check for messages, and will act upon the advice. Compared to patients with HIV/AIDS, those with hypertension might tend to be much older and possibly not as technologically savvy.

So what is the solution? I don't know, but sometimes the problem is the patients.

Saturday, October 11, 2014

Is student test performance impaired by distracting electronic devices?

After listening to a lecture, third-year students at the Harvard School of Dental Medicine were surveyed about distractions by electronic devices and given a 12-question quiz. Although 65% of the students admitted to having been distracted by emails, Facebook, and/or texting during the lecture, distracted students had an average score of 9.85 correct answers, compared to 10.444 for students who said they weren't distracted. The difference was not significant (p = 0.652).

In their conclusion, the authors said, "Those who were distracted during the lecture performed similarly in the post-lecture test to the non-distracted group."

The full text of the paper is available online. As an exercise, you may want to take a look at the paper and critique it yourself before reading my review. It will only take you a few minutes.

As you consider any research paper, you should ask yourself a number of questions: Are the journal and authors credible? Were the methods appropriate? Were there enough subjects? Were the conclusions supported by the data? And do I believe the study?

Tuesday, October 7, 2014

Reaction to post on academia and social media

"Should social media accomplishments be recognized by academia?" a post of mine from October 4th, generated some lively discussion on Twitter.

Here are a few of the more interesting responses:

@ashishkjha Important question from @Skepticscalpel Should academia value impact on social media? Yes. And it's coming. Slowly.

@MartinSGaynor Science comes 1st, 2nd, 3rd.. MT @ashishkjha Important Q: @Skepticscalpel Shld academia value impact on social media?

@ashishkjha agree how to measure impact a key question. Eye balls can't be enough. But too important a question to ignore.

@DoctorTennyson Yes-I think social media has a role for #publichealth, #education, and fosters collaboration. More than obscure journals

@NirajGusani still you add value to your dept -how do/should they measure it?

@gorskon Heck, at @ScienceBasedMed, we get 1M page views a month, but I get no credit.

@gorskon I agree though. For the most part, social media harms, not helps, academic career.

Saturday, October 4, 2014

Should social media accomplishments be recognized by academia?

In August, I posted this: "A paper of mine was published. Did anyone read it?"

A recent comment on it raised an interesting point. Dr. Christian Sinclair [@ctsinclair] said that a website he is helping to run called "Pallimed" has received almost 2 million views since 2005.

He then made the following calculation:

Two million views with an average of 1:30 minutes on a page = 3 million minutes = 50,000 hours = 2,083 days = 5.7 years of 24/7/365 informal learning on hospice and palliative care topics.
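His arithmetic holds up. Here it is spelled out as a trivial check, using his stated figures:

```python
views = 2_000_000
minutes_per_view = 1.5                    # average of 1:30 on a page

total_minutes = views * minutes_per_view  # 3,000,000 minutes
total_hours = total_minutes / 60          # 50,000 hours
total_days = total_hours / 24             # ~2,083 days
total_years = total_days / 365            # ~5.7 years of round-the-clock reading

print(f"{total_minutes:,.0f} min = {total_hours:,.0f} h = "
      f"{total_days:,.0f} d = {total_years:.1f} yr")
```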

He said that this type of communication counts for nothing toward academic advancement, while writing yet another paper for a journal no one reads, or a chapter in an expensive book no one will buy, is considered worthwhile.

This reminded me of something I have talked about in recent presentations. The first laparoscopic cholecystectomy done in the United States took place in 1988. The procedure rapidly became popular due to its obvious benefits over traditional open surgery—smaller scars, shorter hospitalizations, quicker returns to normal activity.