Tuesday, June 27, 2017

How to fix the problem of general surgery resident attrition

Over the last 25 years, about 20% of general surgery residents have failed to complete their five years of training. This compares unfavorably to other specialties such as orthopedics, obstetrics-gynecology, and medicine, with attrition rates of < 1%, 4.5%, and 5%, respectively.

A paper presented at the American Surgical Association in April looked at the factors associated with attrition in one year’s resident class. In 2007, 1047 residents began their training and after 8 years of follow-up, 80% had become surgeons. How many non-finishers left programs by their own choice is not clear.

Some highlights of the research are as follows:

24% of women and 17% of men left general surgery training.

Program size mattered: 23% of men and 25% of women left large programs, compared with an attrition rate of just 11% for both sexes at smaller programs.

Tuesday, June 20, 2017

Some general surgery residency graduates may not be competent to operate

A new study says 84% of general surgery residents in their last six months of training were rated as competent to perform the five most common general surgery core procedures—appendectomy, cholecystectomy, ventral hernia repair, groin hernia repair, and partial colectomy. However, the percentage of those judged competent varied from a high of 96% for appendectomy to a low of 71% for partial colectomy.

When analyzing the other 127 core operations of general surgery, the investigators found that 26% of residents in their last six months of training were judged not competent to perform at least some of those procedures.

The study was presented at the annual meeting of the American Surgical Association in April 2017 and reported in ACS Surgery News.

Data were compiled from ratings of 522 residents by 437 faculty yielding 8526 different observations.

For all of the procedures rated, maximum resident autonomy was observed for 33% of cases, and the more complex the case, the less ready the residents were to perform it on their own.

The lead author of the study, Dr. Brian George of the University of Michigan, was asked whether the duration of surgery training should be increased. He answered, “The 20,000 hours of surgical residency should be enough to train a general surgeon to competence—it's up to us to figure out how.”

Thursday, June 15, 2017

Surgical residents have lots of problems, need more time off

A recent survey of surgical residents regarding their personal and professional well-being revealed that while most of them enjoyed going to work, they had many serious issues.

All 19 surgical residency programs in the New England region were invited to participate, and 10 did so. Of 363 trainees contacted, 166 (44.9%) responded to the survey with 54% of respondents saying they lacked time for basic health maintenance. For example, 56% did not have a primary care physician and were "not up to date with routine age-appropriate health maintenance such as a general physical examination, laboratory work, or a gynecologic examination."

I am not surprised that young men and women averaging 30 years of age or less have no primary care physician. I wonder what percentage of young people who are not surgical residents have one.

Should asymptomatic people in this age group or anyone in any age group have a general physical examination and lab work?

Thursday, June 8, 2017

More on artificial intelligence in medicine and surgery

Part 1

A survey published in the journal arXiv predicted with a 50% probability that high-level machine intelligence would equal human performance as a surgeon in approximately 35 years. See graph below. 
We have already seen a machine beat the world’s best Go player. But although Go is a complicated game, it lends itself to mathematical analysis in a way that a pancreatic resection does not.

A potential flaw in this study is that the surveyed individuals were all artificial intelligence researchers, who predicted that machines would not equal their own performance for more than 85 years, with a 75% likelihood of it taking more than 200 years.

I suspect if surgeons were asked the same questions, we would say it would take over 85 years for machines to be able to operate as well as we can and 35 years until artificial intelligence researchers would be replaced by their creations.

[Thanks to @EricTopol for tweeting a link to the arXiv paper.]

Part 2

Similar to the question “who is responsible if a driverless car causes an accident?” is “when artificial intelligence botches your medical diagnosis, who’s to blame?” An article on Quartz discussed the topic.

[Digression: The article matter-of-factly states “Medical error is currently the third leading cause of death in the US… ” This is untrue. See this post of mine and this one from the rapid response pages of the BMJ.]

If artificial intelligence were simply being used as a tool by a human physician, the doctor would be on the hook. However, indications are that artificial intelligence may be more accurate than humans in diagnosing diseases and soon may be able to function independently.

If a machine makes a diagnostic error, are the designers of the software responsible? Is it the company that made the device? What about the entity that owns the system? No one knows.

The Quartz piece also did not address a related question: Who is responsible if a nonhuman surgeon makes a mistake during an operation?

I’m sorry I won’t be around in 35 years to hear how this is settled.

Tuesday, June 6, 2017

Radiologists have an identity crisis

Here's a question that has been debated for several years: Should radiologists talk to patients about their imaging results? Citing several issues, I came down solidly on the "No" side in a 2014 blog post which you can read here.

Two major radiology organizations have committees looking into the concept, and a New York Times article said, "they hope to make their case [for it] by demonstrating how some radiologists have successfully managed to communicate with patients and by letting radiologists know this is something patients want."

However, a recent paper presented at the annual meeting of the American College of Radiology raised a new issue.

Apparently patients need more basic information before talking to radiologists—namely what exactly is a radiologist and what does a radiologist do?

A group from the University of Virginia surveyed patients waiting to have radiologic studies performed and came up with some remarkable results. Of 477 patients surveyed, only 175 (36.7%) knew that a radiologist is a doctor, and 248 (52%) knew that radiologists interpret images.

Based on those findings, the investigators developed an educational program of PowerPoint slides which was shown to a new series of 333 patients in the waiting room. When surveyed after viewing it, 156 patients (47.7%) said they were aware that a radiologist is a doctor, and 206 (62.2%) knew that radiologists interpret images.

Both responses were significantly better after the educational presentation, but still, less than 50% of patients identified radiologists as doctors. Maybe the problem was the PowerPoint. Maybe radiologists need to wear scrubs or drape stethoscopes around their necks.

This is only a small study from one institution. Nevertheless, it suggests that before taking the big step of talking with patients, radiologists need to do a better job of explaining who they are and what they do.

We surgeons think we have an image problem when people say to us, "Oh, are you just a general surgeon?" They don’t know what we do, but at least they know we are physicians.

Thursday, June 1, 2017

The opioid epidemic: What was the Joint Commission's role?

Last year the Joint Commission issued a statement written by its Executive VP for Healthcare Quality Evaluation, Dr. David W. Baker, explaining why it was not to blame for the opioid epidemic. If you haven’t already read it, you should. Here is the first paragraph of that document:

“In the environment of today’s prescription opioid epidemic, everyone is looking for someone to blame. Often, The Joint Commission’s pain standards take that blame. We are encouraging our critics to look at our exact standards, along with the historical context of our standards, to fully understand what our accredited organizations are required to do with regard to pain.”

With the help of an anonymous colleague, I looked at some of the historical context.

In December 2001, the Joint Commission and the National Pharmaceutical Council (founded in 1953 and supported by the nation’s major research-based biopharmaceutical companies) combined to issue a 101-page monograph entitled “Pain: Current understanding of assessment, management, and treatments.”

Here in italics are some excerpts from it. My emphasis is added in bold.