Monday, July 30, 2012

Distractions in the OR

More research that we didn’t need (with needlessly inflammatory headlines)

Last week was not a good one for the advancement of surgical research. I’ve already blogged about two studies that attempted to mystify the teaching of laparoscopic surgery.

On Thursday, MedPage Today devoted over 700 words to reporting on a study from Archives of Surgery called “Realistic distractions and interruptions that impair simulated surgical performance by novice surgeons.” It was an interesting experiment that looked at the effects of common operating room distractions, such as a ringing cell phone, questions about floor patients, and a dropped metal tray, on the performance of simulated laparoscopic surgery. The subjects were all second- and third-year residents.

The performance of the 18 residents was tested with and without the interruptions. When distractions were present, 8 major errors occurred in the simulated surgery vs. only 1 major error in the non-distracted series (p = 0.02).
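As a back-of-the-envelope check (my own sketch, not the authors' actual analysis), if we assume each of the 18 residents performed one distracted and one undistracted simulated procedure, a two-sided Fisher's exact test on the 8-vs.-1 error counts lands right at the reported p-value:

```python
from math import comb

# Hypothetical reconstruction: 36 total sessions (18 distracted, 18 not),
# 9 total major errors, 8 of which occurred in the distracted sessions.
N, K, n = 36, 9, 18   # total sessions, total errors, distracted sessions

def hypergeom_pmf(k):
    """Probability that exactly k of the errors fall in the distracted sessions."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

p_obs = hypergeom_pmf(8)
# Two-sided Fisher's exact test: sum over all tables no more likely than observed.
p_two_sided = sum(hypergeom_pmf(k) for k in range(0, min(K, n) + 1)
                  if hypergeom_pmf(k) <= p_obs)
print(round(p_two_sided, 3))   # ≈ 0.018, consistent with the reported p = 0.02
```

Under those assumptions the arithmetic checks out, which says nothing about whether the finding matters.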

An unexpected finding was that all of the major errors occurred in the afternoon, which was also statistically significant. Fatigue, a very subjective characteristic, did not seem to play a role, although one could speculate that residents might be more tired in the afternoon.

The authors mentioned several limitations of the study including that it wasn’t carried out in a real operating room and only novice surgeons were tested.

The research was cleverly done and obviously could not have taken place in a real OR with a patient. It accomplished the number one goal of any study, which is to get published.

However, I take issue with the way it was reported. Here’s the MedPage headline: “Distractions in OR Make Errors More Likely.” That is grossly misleading since it fails to mention that the errors were made by junior residents, who were the only subjects of the experiment. This is important because even in my day, JUNIOR RESIDENTS NEVER OPERATED WITHOUT SUPERVISION!

Not until the sixth paragraph does the MedPage report point out the important disclaimers that “the authors said the results should not be interpreted as representative of operating-room (OR) experience in general” and that “these results should not be used to infer that almost half of all surgical procedures with distractions and interruptions are expected to have major surgical errors.”

The paper was also covered by Medwire News with the headline “Surgical distractions could have disastrous consequences.” The story did not explain that second- and third-year residents do not operate by themselves.

Personally, I do not play music in the OR. During a difficult case, I am so focused on what I am doing that I don’t know who has come in or gone out of the room, nor do I hear extraneous conversations.

Distractions can be a problem. But this paper and the way it was reported by medical news agencies do nothing but heighten the anxiety of an already skittish public when it comes to fear of medical errors. I don’t see how proving that novice surgeons can be easily distracted adds anything toward the enlightenment of mankind.

Friday, July 27, 2012

Why do malpractice claims take so long to be resolved?

Two recent papers have prompted me to ask myself the question, “Why do malpractice claims take so long to be resolved?”

One was by a group from Harvard, USC and the RAND Corporation. They looked at more than 10,000 closed malpractice cases across all specialties and found that the average time to close a case was 19 months, with litigated claims taking a little over twice as long as non-litigated claims (25.1 vs. 11.6 months, respectively). Claims that were resolved at trial took much longer, averaging 39.0 months for defendant verdicts and 43.5 months for plaintiff verdicts.

The second study was by a group headed by a surgeon based at a Johns Hopkins-affiliated hospital. They reviewed 187 closed surgery claims at four university hospitals in New York. Using a different method of calculating the length of time, they noted a mean time to resolution of 4.5 years for all claims. Cases closed with no payment took 3.9 years, while those won or lost at trial took about 5 years.

I’ll give you a minute to see if you can guess why it takes so long to resolve these claims.

Here’s my theory. I think it might have something to do with the involvement of lawyers. We all know that plaintiffs’ attorneys get a large percentage of the take from any case settled with payment or plaintiff’s verdict with damages.

But defending cases can be expensive too. Since defense lawyers are paid according to their billable hours, it seems to me that it is in their best interest to make a claim last as long as possible. Have you ever been deposed or read a deposition? Lawyers object to every other question, which accomplishes nothing. The question still has to be answered. They also ask the same questions over and over leading to the old “Objection, asked and answered” response. Often the questions seem to have nothing to do with the alleged negligence.

Let’s do some math.

A paper that appeared in the journal Health Affairs in 2005 addressed the legal costs associated with malpractice claims. The study found that “legal costs average $27,000 per claim in the United States, which adds approximately $1.4 billion in costs to the $4.4 billion paid in settlements and judgments. The costs of underwriting insurance against malpractice claims are estimated at an additional 12 percent, or $700 million. The cost of defending U.S. malpractice claims, including awards, legal costs, and underwriting costs, was an estimated $6.5 billion in 2001—0.46 percent of total health spending.”
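Those quoted figures hang together. Here is a quick sanity check of the arithmetic (my own reconstruction from the numbers in the quote, not from the paper itself):

```python
# Figures as quoted from the 2005 Health Affairs paper (2001 data).
legal_cost_per_claim = 27_000      # average legal cost per claim
legal_costs_total = 1.4e9          # total legal costs
settlements = 4.4e9                # settlements and judgments
underwriting = 0.7e9               # underwriting costs (~12% extra)

# Implied number of claims behind the $1.4 billion in legal costs.
implied_claims = legal_costs_total / legal_cost_per_claim

# The three components should sum to the quoted $6.5 billion.
total_cost = legal_costs_total + settlements + underwriting

# If $6.5 billion is 0.46% of total health spending, spending was about:
implied_health_spending = total_cost / 0.0046

print(f"{implied_claims:,.0f} claims")                       # ≈ 51,852 claims
print(f"${total_cost / 1e9:.1f} billion")                    # $6.5 billion, as quoted
print(f"${implied_health_spending / 1e12:.1f} trillion in total health spending")
```

So the quote implies roughly 52,000 claims a year against a health economy of about $1.4 trillion in 2001.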

Yes, it’s less than 1% of total healthcare spending. But as the old and possibly apocryphal saying attributed to the late Senator Everett Dirksen goes, “A billion here, a billion there, pretty soon, you're talking real money.” And don’t forget the emotional toll that being the target of a lawsuit takes on the defendant doctor while the case is meandering through the process.

A version of this post appeared on Sermo yesterday. All who commented thought the process took too long.

Wednesday, July 25, 2012

A simple mistake?

If you pay attention at all to news about medicine, you must have heard about the tragic death of a 12-year-old boy who was seen in and discharged from the emergency department of the prestigious New York University Medical Center. He had developed sepsis from what appears to have been a small cut, and the diagnosis was missed. When he eventually returned to the ED and was admitted, it was too late, and he died. Read more details here.

Many have commented on this sad story. The good people at ProPublica, some of whom I follow on Twitter, wrote about this and other types of medical errors in a piece entitled “Why Can’t Medicine Seem to Fix Simple Mistakes?” I will grant that some of the errors they mentioned, such as wrong-site surgery and reusing syringes, are indeed simple mistakes and should be 100% preventable.

But the death of young Rory Staunton, admittedly caused by medical errors, was not the result of “a simple mistake.” The ProPublica story says, “The hospital's emergency room sent Rory Staunton home in March and then failed to notify his doctor or family of lab results showing he was suffering from a raging infection.” Specifically, he had a number (said to have been five times the normal value) of immature white blood cells called “bands,” which is a sign of the body reacting to an infection.

While human errors were made, this event also fits James Reason’s “Swiss cheese model” of a complex series of occurrences that will take more than a new policy about notifying ED MDs about abnormal lab results to fix.

Monday, July 23, 2012

Over-thinking laparoscopic surgery training

Are we over-thinking the training of residents in minimally invasive surgery? Two recent papers in prominent surgical journals suggest to me the answer is “Yes.”

In the July 2012 issue of Surgery, a paper entitled “Cheating experience: Guiding novices to adopt the gaze strategies of experts expedites the learning of technical laparoscopic skills” investigated whether teaching novices to perform simulated tasks on a laparoscopic surgery training system by using the gaze strategies of experts would improve performance. The Methods section of the paper is well over 2 pages in length. The idea was to have some novices perform tasks by simply discovering how to do them by trial and error. A second cohort of novices was given a template reproducing the gaze-tracking used by expert laparoscopic surgeons.

Those using the gaze templates performed the tasks more quickly and with fewer errors. Here’s a section of text from the Results section. Note: punctuation as per the authors.

“Learning phase. Performance, completion time: Results revealed a significant main effect for block: F(9,225) = 13.97, P = < .01; but there was no significant main effect for group, F(1,25) = 0.89; P = .36; np2 = .04; and no interaction effect, F(9,225) = 0.86; P = .57; np2 = .03.”

I thought I knew something about statistics but I have no idea what the sentence [at least I think it’s a sentence] says. What I can see from the figures is that the difference in times for the tasks averaged less than 10 seconds per task and the number of errors was reduced in the gaze-trained group from 2 to 1.5 errors per task. Whether these differences are statistically significant is known only to the authors, but 10 seconds and a difference in errors of 0.5 don’t seem all that important to me.

The second paper, “Correlation of laparoscopic experience with differential functional brain activation” is from Archives of Surgery, July 2012. The link is provided so you can verify that I am not making this up. Investigators put novice and expert laparoscopists in a PET scanner and had them perform a simulated laparoscopic task [moving pegs from one place to another] while undergoing a total of 6 PET scans. The scans involved the injection of oxygen-15-labeled water. Since the full text is available online, I invite you to view the photos of the experimental setup and judge for yourself how comfortable the subjects must have been.

Rather than paraphrase the results of the study, I will quote the abstract directly.

“The novice group had a significantly (P = .001) higher activation (with deactivation in the expert group) in the left precentral gyrus and insula and the right precuneus and inferior occipital gyrus. The second analysis compared the 2 video scans and the rest scan. In contrast to the expert group, the novices had significantly (P = .001) higher activation in the right precuneus and cuneus but deactivation in the bilateral posterior cerebellum.”

It’s all very clear to me now.

Their Figure 2 shows that despite the differences noted in activation of brain areas, by the third peg transfer test, the novices equaled the scores of the experts on their first peg transfer. The novices also improved markedly compared to their first attempts.

The authors admit that one of the limitations of their study was that 4 of the 5 novices were women while all the experts were men, which may have confounded the results. [Note: the authors said this, not I.]

What does it all mean? They claim that understanding the neural pathways might help in developing better ways to train people. I think that remains to be seen.

In retrospect, I can’t imagine how I or anyone else ever learned to do laparoscopic surgery relying only on someone showing us how. I had no access to expert gaze-tracking templates or PET scans.

I guess a paper on the method “see one, do one, teach one” would not be accepted by a major journal these days.

Friday, July 20, 2012

News Item

Click here for a breaking news story on my other blog, Surgery Watch.

Why I wear a white coat


A recent article in a major newspaper asked why physicians still wear white coats. The theme echoed many recent stories of bacterial contamination of clothing and other inanimate objects. [For more on this subject, click on the “Infection” label on the right next to this post.]

It also brought to mind a controversial rule instituted by the UK’s National Health Service in 2008 forbidding medical and nursing staff from wearing ties or white coats and requiring arms to be “bare below the elbow.”

Despite published papers reporting the existence of bacteria on white coats and ties, the UK policy was not based on any evidence linking coats, ties or long sleeves to transmission of infection to patients.

The subject has been debated for years. Yes, the white coat may be contaminated with bacteria. But whatever one wears may also be contaminated. What is the difference between wearing a white coat for a few days and wearing a suit jacket or a pair of pants for a few days?

I wear a white coat for the following reasons:

  • It has a lot of pockets.
  • It protects my clothes from blood, vomit, pus and poop.
  • It is easy to clean.
  • It is laundered by my hospital.

I change it at regular intervals, usually after fewer than 5 days of wear. I doubt very much that doctors who don’t wear white coats have their suits, sport coats or pants dry cleaned that frequently.

Taking advantage of the adverse publicity about ties, I have stopped wearing them, though more for comfort than out of any unsupported notion that they pose an infection risk to patients.

More importantly, I wash my hands or use a gel quite often.

Do you wear a white coat? Why or why not?

A version of this was posted on Sermo yesterday. A majority of those few who commented say they do still wear white coats.

Wednesday, July 18, 2012

“Damned if you do…”


Here’s a little story from the early days of my first job as a chairman of surgery.

Shortly after I assumed the role of surgical chairman in a community teaching hospital at the ripe old age of 40 and having absolutely no administrative experience, I visited a mentor of mine whom I had known since I was a medical student. He had been serving in a similar role at a larger hospital than mine, and I thought he might be able to share some wisdom about how to be a good chairman.

He was dispensing sound advice for most of the hour or so I spent with him. Then he said something that struck me: Sometimes the unexpected happens and there’s no simple solution. He told me that among the challenges he was facing were two lawsuits.

One was from the family of a patient who had died after a carotid endarterectomy that had been performed by a surgeon in his department. The plaintiffs were suing the hospital and my mentor, the surgical chairman, for allowing what they alleged was an incompetent surgeon to do complex vascular surgery.

The other lawsuit was by a surgeon in his department who had requested privileges to perform carotid surgery, which had been denied by my mentor on the grounds that in his opinion, the surgeon was not adequately trained in carotid surgery.

I never heard the outcome of either case, but it certainly seemed like a no-win situation.

Although that encounter occurred some 25 years ago, the problem persists today. For example, patient advocates are concerned that pain is not being adequately addressed. Yet there is an epidemic of abuse of narcotic prescription drugs that is sweeping all parts of the country.

We also are being criticized for runaway healthcare spending and being encouraged to reduce things like unnecessary testing, while a recent jury verdict for $6.4 million in Philadelphia went against two physicians for failing to order certain tests on a man who had a fatal heart attack 3 months after an emergency department visit for pneumonia.

Some say too many CT scans are being ordered for the work-up of appendicitis with worry that radiation will cause future increased cancer rates. However, in my experience, patients prefer accuracy in diagnosis over a theoretical increased risk of cancer 30 years from now.

Not long ago I was called by an emergency physician who said he had a 17-year-old boy with a textbook case of acute appendicitis. He felt a CT scan was unnecessary. I examined that patient and agreed. I explained to the boy’s mother that I was convinced he had appendicitis and needed surgery. She said, “What about a CT scan?” After a lengthy discussion, I convinced her that the CT scan was not needed. As I made the incision, I said to the OR team, “I sure hope this kid has appendicitis.”

I can think of many more such situations. How should we resolve them?
 
It seems to be the mantra for modern medicine: “Damned if you do and damned if you don’t.”

Monday, July 16, 2012

Things that puzzle me about surgical education


When I was a surgical residency program director, I often wondered what the establishment (you know, those guys who ran surgical education) was thinking. Some may remember the rule that a resident had to see at least 50% of the patients he operated on in the clinic or the private surgeon’s office in order to claim credit for having done the case.

There was the emphasis, which still exists today, on making sure every resident did research. At last, some are questioning the value of this for the average clinical surgeon. Contrary to the prevailing wisdom, there is no evidence that a resident who is dragged kicking and screaming through a clinical research project or who spends a year in someone’s lab really learns anything about research or how to read and understand a research paper. Then there is the obsession with a transplant rotation, recently noted in a published paper to be a waste of time in the opinion of surgical residency program directors.

And what’s with all the emphasis on basic science? Shouldn’t the residents have learned all the basic science they need (and more) in medical school? With all that is new in clinical surgery, why are residents forced to relearn basic science that they will not ever use in practice? When you stand at the bedside of a sick patient, do you ask yourself, “How is lactic acid formed?” Or do you simply order a lactate level?

Why do we teach surgery the same way we did 40 years ago? Instead of teaching residents how to think, we still force them to memorize large volumes of information that they can carry in their smart phones.

Now I am wondering what is going on with clinical training. A recent paper found that residents are concerned that their operative skills are inadequate.

Last year in a blog reviewing that paper, I wrote, “A significant number of all residents surveyed worried that they would not feel confident to perform surgery by themselves when they finished training. A similar number were not satisfied with their operative experience.”

Many graduates of residency take fellowships to gain extra experience. Especially interesting is the proliferation of so-called “advanced” laparoscopic fellowships. There was a time when we taught residents all they needed to know in five years. Why can’t residents learn advanced laparoscopy during a five-year surgical residency? Are they too busy memorizing the Glasgow Coma Scale?

I recently heard of a new proposal. Get this: there may be a plan to offer “open surgery” fellowships. Details are sketchy, but the idea would be to train surgeons to do old-fashioned laparotomies. It’s not yet clear which of the surgical disciplines (such as vascular, colorectal, hepatobiliary) would be involved or which hospitals have enough volume of open surgery to support such a fellowship.

Maybe we should skip most of general surgery residency altogether and just let them go to their fellowships after a year or two of basic training.

Friday, July 13, 2012

The “Never Events” list should be reconsidered


You are probably familiar with the CMS “never events” initiative. CMS has decided it will not reimburse hospitals for treatment related to complications which it says should never occur. Here is the current list.
  • Foreign object retained after surgery
  • Air embolism
  • Blood incompatibility
  • Stage III and IV pressure ulcers
  • Falls and trauma
  • Manifestations of poor glycemic control
  • Catheter-associated urinary tract infection
  • Vascular catheter-associated infection
  • Surgical site infection after coronary artery bypass graft, bariatric surgery for obesity, and certain orthopedic procedures
  • Deep vein thrombosis/pulmonary embolism after certain orthopedic procedures
According to American Medical News, two more complications have been proposed as additions. One is acquired conditions stemming from cardiac implantable electronic device surgeries and the other is iatrogenic pneumothorax associated with venous catheterization.

I have no problem with some of the items on the list. Foreign bodies like sponges or instruments should not be left in patients after surgery. Air embolism and blood incompatibility should be 100% preventable.

But I do not see how catheter-associated UTIs or vascular catheter associated infections can be completely prevented. Some sick patients with depressed immune systems are going to get infections.

I believe it is impossible to completely prevent wound infections in all clean cases. As has been shown in studies of SCIP compliance, some patients get wound infections after colon surgery despite the timely use of the right antibiotic.

DVT/PE cannot be prevented in every orthopedic procedure. I am unaware of any DVT study in which no patients in the experimental arm developed DVTs or PEs. Patients will develop DVT or PE even with the best evidence-based care.

With very few exceptions, every large published paper on central line insertions, even those using ultrasound guidance, reports some instances of post-procedure pneumothorax. There is no way it can be completely avoided. For example, this study of 937 ultrasound-guided central line insertions reported 2 (0.2%) post-procedure pneumothoraces. That’s a published study by radiologists. The real world incidence of pneumothorax is much higher, often quoted at 2-5%.
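For perspective, here is a 95% Wilson score interval around that 2-in-937 published rate (my own stdlib-only calculation, not from the study). Even the upper bound sits well below the 2-5% often quoted for real-world practice, which says something about how much published series can diverge from everyday results:

```python
from math import sqrt

# Wilson score interval for the published pneumothorax rate: 2 events in 937 insertions.
events, n = 2, 937
z = 1.96                  # z-score for 95% confidence
p_hat = events / n        # observed proportion, ≈ 0.2%

denom = 1 + z**2 / n
center = (p_hat + z**2 / (2 * n)) / denom
half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
low, high = center - half, center + half

print(f"{low:.2%} to {high:.2%}")   # ≈ 0.06% to 0.77%
```

The Wilson interval is used here rather than the simple normal approximation because it behaves better when events are this rare.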

To me, these rulings are simply a way for CMS (and other payers who are sure to follow suit) to avoid paying. Where is the input from “organized medicine”? Was any evidence-based research looked at by those who decided all this?

Why are we standing around and allowing this to go unchallenged?

This post appeared on Sermo yesterday and most people who commented agreed with me.