
Monday, February 29, 2016

The ultimate resident evaluation

It comes as no shock to me, and probably to many other current and former program directors, that a recent study found that faculty overall performance evaluations of residents do not correlate with their scores on the yearly American Board of Surgery In-Training Examination.

According to the JAMA Surgery paper, faculty evaluations encompassed technical skill and the six core competencies: medical knowledge, patient care, interpersonal and communication skills, professionalism, practice-based learning and improvement, and systems-based practice.

The paper analyzed data for 150 residents at different levels of training over 4 years and found that even faculty ratings in the medical knowledge category couldn’t predict who would score well or poorly on the test.

It’s great to know that at the authors’ institution, the average annual evaluation scores ranged from just over 75 to 100, with means and medians both slightly above 92, reminiscent of Garrison Keillor’s mythical Lake Wobegon, “where all the women are strong, all the men are good-looking, and all the children are above average.”

Medical knowledge can be measured, but the other parameters are so subjective that they border on meaningless. They remind me of the infamous “smiley face” numerical pain scale that means different things to different patients.

Some examples: Earlier this year, I wrote about the difficulty of defining professionalism. Using a numerical scale, how can you rate one resident as more professional than another?

And I always had trouble ranking one resident over another in systems-based practice. It might be better to rate systems-based practice on a binary scale; that is, can a resident define the term or not?

Big business is having trouble evaluating employees too. Quartz examined the evaluation process at General Electric and reported that the annual review there is not effective for managing people or improving performance. It “leads to a tendency…to focus excessively on process over outcomes” and is “an exercise in paperwork and bureaucracy instead of an agent of change.”

Note that the JAMA Surgery study accumulated 1131 evaluations. Even if that was only virtual paperwork, it’s a lot of work for little value, but at least there was plenty of data to show a site visitor from the Residency Review Committee.

A New Yorker article noted that consulting firm Deloitte’s evaluation process involves consensus meetings ending with managers marking on a 5-point scale how strongly they agree with two statements: “Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus”; and “Given what I know of this person’s performance, I would always want him or her on my team.” They must also answer yes or no to two more: “This person is at risk for low performance” and “This person is ready for promotion today.”

Maybe we should adopt a modification of Deloitte’s system for our resident evaluations. Faculty must respond yes or no to this statement: “I would let this resident operate on me.” If the answer is “no,” why should we let that resident operate on anyone?

This post originally appeared on Physician's Weekly.

4 comments:

Michel Accad said...

It is true that these rating scales border on the meaningless and provide a convenient veneer of scientific objectivity. To what extent are residency programs hostage to the ACGME accreditation process?

I also wonder how things would be if residency programs (or their affiliated faculty medical groups) actually paid resident salaries themselves. Do you think there would be more vigorous push-back against ACGME rules?

Skeptical Scalpel said...

I don't think it would make a difference. Everyone is so afraid of the ACGME and its RRCs that push-back is highly unlikely to occur.

Barry Landry said...

I agree with you. I would modify your statement to "Would you allow this resident to evaluate and treat you?" As you know, sometimes the correct decision is not to operate. Barry Landry, private practice general surgeon for 30 years

Skeptical Scalpel said...

Barry, I agree. I was just trying to keep this post simple. Five years ago I wrote that even if you could teach a monkey to operate, you couldn't teach him how to decide who needs an operation and when.

See http://skepticalscalpel.blogspot.com/2011/11/i-could-teach-monkey-how-to-operate.html
