Thursday, October 28, 2010

Medicolegal Musings: Physician Posting on Social Media & the Internet

You are in the middle of a deposition. Plaintiff’s lawyer asks, “Do you blog or tweet?” Before you answer, consider this: if you do and you say so, I believe anything you have ever posted would be subject to discovery by the plaintiff. Oh, you post anonymously? Would you then lie under oath and say you do not blog or tweet? For many physicians, admitting to blogging or tweeting might not be a problem. But in my short career as a blogger/tweeter, I have read some things that frankly would not enhance a malpractice defense if projected on a large screen in front of a jury.

I will allow that I am skeptical and sarcastic, but I do not think I have posted anything derogatory to a patient, either generally or specifically. There are some very popular anonymous doctor-tweeters who post scathingly negative comments about patients. Even if a patient could not be identified, the tone of some of these posts implies a deep-seated resentment of patients and their problems, not to mention that many are vulgar, sophomoric or both. OK, some of them are funny as well, but the humor would be lost in a courtroom. Some of these tweeters disseminate prodigious numbers of posts per day, perhaps suggesting that they are not always focused on their work.

I have followed several medical bloggers who post clinical anecdotes, which are essentially case reports. Despite disclaimers stating they are not about real patients, it seems obvious that they are. If the subject of one of these case report blogs decides to sue, it might be difficult to convince a jury that the blog was about a fictitious case. And this type of publication might be considered a HIPAA violation, especially because it is unlikely that a blogger would have obtained institutional review board permission to publish the case report.

By the way, if you blog or tweet anonymously and answer falsely that you don’t, you had better never have told anyone that you do. A lie under oath that is discovered tends to undermine your credibility quite a bit. [Defense lawyer: “Your honor, may we have a short recess while I talk to my client?”]

As far as I can tell from a search for medicolegal references to Twitter and blogging, this perspective has not been raised before. What do you think?

Tuesday, October 26, 2010

“Body Size Misperception” May Be a Factor Contributing to the Obesity Epidemic

Did you ever wonder, as I often have, what obese people are thinking as they keep putting on weight? Why doesn’t it occur to them as they pass, say, 250 lbs. that maybe they should stop eating so much? As published two weeks ago in Archives of Internal Medicine*, researchers in Dallas suggest that a substantial number of obese people have what they term “body size misperception.” More than 2,000 obese adults were shown drawings of human figures on a 9-point scale ranging from very thin to very obese. They were then asked to pick both the figure they felt would be ideal and the figure that represented how they thought they appeared. Body size misperception existed if the subject chose an ideal body size that was the same as or larger than his or her actual size.

Some 8% of the group exhibited body size misperception. In other words, these people did not recognize that they were obese. Further examples of denial were that the body size misperception cohort felt they had a low lifetime risk of heart attack, high blood pressure and diabetes. The most amazing revelation is that a full two-thirds of these already obese individuals considered themselves at low risk for developing obesity. The authors of the paper think this issue is under-publicized and generally not dealt with well by physicians.

Maybe the concept of body size misperception, an entity that I certainly was not aware of before, can explain the apparent lack of self-recognition that one might be obese. And an inability to see the problem would explain not only why some people become morbidly obese but also why they don’t seem inclined to correct the situation.

*Powell TM, et al. Body size misperception: a novel determinant in the obesity epidemic. Arch Intern Med. 2010 Oct 11;170:1695-7. [No abstract available]

Friday, October 22, 2010

Hospital Ratings Revisited

A recent press release from HealthGrades claims that some 232,442 Medicare patients’ lives could have been saved over a three-year period if all hospitals performed at the level of a HealthGrades five-star hospital. While this is a laudable premise, can it be true? Let’s see.

First you need to know something about HealthGrades and its rating system. Using a large Medicare administrative database (that is, data submitted by hospitals for billing purposes), HealthGrades compares hospitals on an observed vs. expected outcomes basis. For some reason, hospitals are rated as five-star (best), three-star (as expected, or average) or one-star (poor). There is no mention of four- or two-star ratings. And according to their methodology, “…70% to 80% of hospitals in each procedure/diagnosis were classified as three stars, with actual results not significantly different from predicted results. Approximately 10% to 15% were 1-star hospitals and 10% to 15% were 5-star hospitals.” For non-statisticians, that would be classified as a normal distribution.

Now what would happen if every hospital in the U.S. performed at the level of a five-star hospital? Well, the observed rates of complications and deaths would go down, but as long as one compares observed vs. expected outcomes, the distribution of hospital ratings would still be normal, with 10%-15% above average, 70%-80% average and 10%-15% below average.
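
To see why, here is a toy simulation of my own, not HealthGrades’ actual model. Every hospital’s true complication rate is cut in half, yet because “expected” is recalibrated from the pooled data, hospitals still land in the 1-star and 5-star tails. (The exact tail percentages depend on the spread I assume; the point is that the tails never empty out.)

```python
import numpy as np

rng = np.random.default_rng(42)

def star_ratings(true_rates, n_cases=2000):
    """Rate hospitals by observed vs. expected mortality, relative-grading style.

    "Expected" is recalibrated from the pooled data, as risk-adjustment models
    are. Hospitals more than ~2 standard errors better than expected get
    5 stars, worse than expected get 1 star, and the rest get 3 stars.
    """
    observed = rng.binomial(n_cases, true_rates) / n_cases
    expected = observed.mean()  # the baseline comes from the cohort itself
    se = np.sqrt(expected * (1 - expected) / n_cases)
    z = (observed - expected) / se
    return np.select([z < -2, z > 2], ["5-star", "1-star"], default="3-star")

true_rates = rng.uniform(0.02, 0.08, size=1000)  # hypothetical hospital death rates

for label, rates in [("Before", true_rates),
                     ("After every hospital improves 50%", true_rates / 2)]:
    stars, counts = np.unique(star_ratings(rates), return_counts=True)
    print(label, dict(zip(stars, counts)))
```

The absolute rates improve across the board, but relative grading guarantees a bottom tier.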

Therefore, with the possible exception of hospitals in Lake Wobegon (“Welcome to Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average.” [Garrison Keillor]), all hospitals cannot be above average.

Then there is the problem of using administrative databases to judge clinical outcomes. HealthGrades’ own description of its methodology lists the following disclaimers:

“Limitations of the Data Models
It must be understood that while these models may be valuable in identifying hospitals that perform better than others, one should not use this information alone to determine the quality of care provided at each hospital. The models are limited by the following factors:

- Cases may have been coded incorrectly or incompletely by the hospital.
- The models can only account for risk factors that are coded into the billing data; if a particular risk factor was not coded into the billing data, such as a patient’s socioeconomic status and health behavior, then it was not accounted for with these models.
- Although Health Grades, Inc. has taken steps to carefully compile these data using its methodology, no techniques are infallible, and therefore some information may be missing, outdated or incorrect.”

There are a number of peer-reviewed articles questioning the validity of using administrative databases in clinical outcomes research. A study of patients with cerebral aneurysms, from the Bloomberg School of Public Health at Johns Hopkins University, found many large discrepancies between the Maryland state administrative database and the clinical records of the patients at their institution. A paper from Harvard and Tufts concluded “Cardiac surgery report cards using administrative data are problematic compared with those derived from audited and validated clinical data, primarily because of case misclassification and non-standardized end points.” A systematic review of papers on infectious diseases found that administrative databases have “limited validity” for the evaluation of co-morbidities, a key factor in risk adjustment.

Try this for some hospitals you might be familiar with: compare HealthGrades’ ratings with “Medicare Hospital Compare,” which one must assume uses the same outcome data, since HealthGrades bases its ratings on Medicare’s data. Here are the results for heart attack outcomes at three hospitals in New York City. (See Table.) The rating scales are the same: three possible grades.


I don’t know which one to believe. Do you?

Note: A previous blog post of mine pointed out a few other issues with HealthGrades that everyone should be aware of.

Wednesday, October 20, 2010

Why Reporters (And Hospital Administrators) Should Learn Statistics

Interesting article on amednews.com about the pros and cons of publicly posting emergency department waiting times. The pros are that patients can self-triage to the least busy ED, and it might be good for a hospital’s business. The cons are that patients who are really sick might be discouraged from going to any ED if the waiting times are long, and ED doctors might cut corners to speed patient throughput.

One paragraph of the article caught my eye.

“Scottsdale Healthcare began posting wait times in April 2008 at its four EDs, all of which are within about 15 minutes' driving time of one another in the city (two -- a general ED and a pediatrics ED -- are housed at the same center). Its patient satisfaction scores have improved by 2 percentage points [emphasis added], said Nancy Hicks-Arsenault, RN, the organization's systems director of emergency services.”

I can’t be sure, but knowing what I do about patient satisfaction scores [a good subject for a future blog post], I would bet that a 2-percentage-point increase in patient satisfaction is not statistically significant. In my experience, fluctuations of 2 points are common and well within one standard deviation of the average for these rather crude measures. One of the most popular patient satisfaction survey companies uses a rating scale of 1 through 5 and then converts the responses into percentages. This means that if a patient rates an ED service as a “4” instead of a “3,” that is recorded as a 20% increase in satisfaction, when the patient may not really have been 20% happier with his experience. The response rate of most patient satisfaction surveys is usually below 10%, which further diminishes their validity.

I would have asked to see the raw numbers, performed a statistical test and determined whether the 2-point increase in patient satisfaction was real.
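
For illustration, here is the kind of back-of-the-envelope test I mean. The response counts and satisfaction percentages are invented, since the article gives no raw numbers; the point is that a 2-point change on a plausible survey sample is statistical noise.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, 2 * norm.sf(abs(z))  # z statistic and two-sided p-value

# Hypothetical: 200 survey responses before and after, satisfaction 84% -> 86%.
z, p = two_proportion_z(0.84, 200, 0.86, 200)
print(f"z = {z:.2f}, p = {p:.2f}")  # z = 0.56, p = 0.58: nowhere near significant
```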

Tuesday, October 19, 2010

Reporting Wrong-Site “Surgery”: Errors and Omissions

This morning, four health-reporting websites [New York Times, MedPage Today, CNN Health, Science Daily] reviewed a paper in the October issue of the journal Archives of Surgery entitled “Wrong-Site and Wrong-Patient Procedures in the Universal Protocol Era.” The paper documents a number of wrong-site and wrong-procedure incidents from a medical liability insurer’s database in Colorado. The incidents were self-reported by physicians without penalty. It is an interesting study that bears reading, but the full online version is available only by subscription. So for now, we have only the abstract of the paper and the reports from the four news organizations to go by.

What strikes me is the manner in which the story is reported. Although the study clearly states that these adverse events were caused by surgeons and non-surgical specialists in equal numbers, three of the four websites headlined the story as follows:

“Wrong Surgery on Wrong Patient Still Happening”
“Surgical Errors Continue Despite Protocols”
“Surgery Mix-Ups Surprisingly Common”

Only one site, Science Daily, used a headline consistent with the title and content of the paper: “Study Documents Wrong-Site, Wrong-Patient Procedure Errors.” That outlet also went into some detail about the percentage of errors reported by each specialty, mentioning that internists were responsible for 24% of the wrong-patient procedures.

A casual reader of one of these articles might assume that these incidents are happening every day. The paper recorded only the submitted events, not the denominator, which would be the number of opportunities to experience an adverse event. A self-reported database is not the same as an epidemiologic study, but only two of the four reports [MedPage Today and NY Times] took the trouble to point this out. The NY Times cited a previous estimate that adverse events such as those documented in the paper occur about once in every 110,000 procedures. This is a serious topic, and one that deserves the coverage it is receiving, but more accurate reporting and more thoughtful analysis would better inform the public.

There are some other questions about the paper, such as how many of these adverse events pre-dated the institution of the Universal Protocol, which calls for a “time out” and other measures to prevent such incidents. The paper covered the years 2002-2008, and the Universal Protocol was mandated by the Joint Commission in 2004.

I will review the paper in depth for you when I get the full version.

Friday, October 15, 2010

Proof That Our Country’s Education System Is in Serious Trouble


Here is an actual problem from a fourth grader’s math workbook. [See photo.] Since the photo is a little dark, I have transcribed it below.

“Reasoning: Hwong can fit 12 packets of coffee in a small box and 50 packets of coffee in a large box. Hwong has 10 small boxes and would like to reorganize them into large boxes. Which boxes should he use? Explain.”

Speculation has ranged from Stonehenge and the pyramids aligning with Orion to Fermat’s last theorem to just chalking it up as an inscrutable mystery of the Orient.

If you can deduce the answer, please explain it to me so I can explain it to a 10-year-old.
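
For the record, the arithmetic is the easy part; it is the “Which boxes should he use? Explain.” that defies explanation. A sketch of what I assume the workbook intends:

```python
import math

packets = 10 * 12                       # 10 small boxes x 12 packets = 120 packets
large_needed = math.ceil(packets / 50)  # 3 large boxes, the third 60% empty
leftover = packets - 2 * 50             # or 2 large boxes plus 20 leftover packets

print(packets, large_needed, leftover)  # 120 3 20
# So: 3 large boxes with room to spare, or exactly 2 large + 2 small.
# Which of these Hwong "should" use is anyone's guess.
```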

Brain Trauma Blood Test Shows Promise But Report of Findings Is Flawed

USA Today reports that US Army doctors have discovered a blood test that can reveal whether a trauma victim has had a concussion. The test measures the level of proteins released when brain cells are damaged. If these findings are confirmed in a larger study, it would be a major advance in the treatment of traumatic brain injury [TBI]. However, the article is mostly an uncritical look at the subject.

A major question not answered is how the blood test was validated. The quote from the report, “Doctors can miss these injuries because the damage does not show up on imaging scans…,” is correct, but how then did the researchers verify that a patient with a positive blood test had indeed had a concussion? In medicine, before a new diagnostic test can be accepted for general use, it must be compared with a so-called “gold standard.” If the new blood test was not measured against the results of head CT scanning, then what was the gold standard?
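
To make the “gold standard” point concrete: validating a diagnostic test means cross-tabulating its results against an accepted reference diagnosis. A generic sketch, with every count invented (the article supplies no such data):

```python
def test_performance(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 table of
    new-test results vs. gold-standard diagnoses."""
    sensitivity = tp / (tp + fn)  # fraction of true concussions the test catches
    specificity = tn / (tn + fp)  # fraction of non-concussions correctly cleared
    ppv = tp / (tp + fp)          # chance a positive result is a real concussion
    return sensitivity, specificity, ppv

# Invented 2x2 counts for a hypothetical 34-subject pilot:
print(test_performance(tp=14, fp=3, fn=2, tn=15))
```

Without a reference standard to fill in that table, the protein levels are just numbers.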

Only 34 subjects were included in this apparent pilot study, which has not been subjected to the peer review process. I would like to call the new blood test by the name of the protein or proteins being investigated, but the article did not provide that information.

The article referred (without a link) to a RAND Corporation study that, according to USA Today, stated, “About 300,000 troops in Iraq and Afghanistan have suffered concussions…” I accessed that study and found that what it actually said was

“A telephone study of 1,965 previously deployed individuals sampled from 24 geographic areas [found that] 19 percent reported a probable [emphasis added] TBI during deployment...”

The author of the USA Today piece apparently assumed that 19% of the 1.64 million deployed troops, about 300,000, had in fact experienced concussions, a rather large leap of faith on three levels. The following assumptions are invalid: one, that a “probable” TBI is the same as an actual concussion; two, that a telephone interview is an accurate way to acquire clinical information; and three, that the results of a telephone sample of 1,965 people, about 0.1% of those deployed, can be extrapolated to represent the experience of the entire population of troops.
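
For what it’s worth, here is my own reconstruction of the arithmetic behind the headline number. Even granting the survey face value, the pure sampling error is the smallest of the three problems:

```python
from math import sqrt

deployed = 1_640_000  # troops deployed to Iraq and Afghanistan
sample_n = 1_965      # telephone survey respondents
p_hat = 0.19          # reporting a "probable" TBI

estimate = p_hat * deployed  # 311,600: the source of "about 300,000"
moe = 1.96 * sqrt(p_hat * (1 - p_hat) / sample_n)  # 95% CI half-width, ~1.7 points
print(f"{estimate:,.0f} +/- {moe * deployed:,.0f}")  # 311,600 +/- ~28,500
# The sampling error is modest; the "probable TBI by phone" measurement
# problem never shows up in a confidence interval at all.
```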

A larger study of the unnamed protein is planned. Let’s hope it does prove to be an effective test. As the article points out, a TBI blood test would be useful in many areas such as sports, child abuse and others.

Wednesday, October 13, 2010

Resident Work Hours: The Solution

I don’t know why I didn’t think of it sooner. Or, like many great ideas, why didn’t someone else come up with it? This morning at 4:30, as I lay awake having just received a consult from infernal medicine for an elderly lady being admitted with gallstones, atrial fibrillation and acute dehydration, a consult that could have waited until 7:00 a.m. today or even tomorrow, it hit me. I have the solution to the resident work hours controversy.

A few years ago, I was in the Navy and served on a ship. Crew members “stood watch” in a rotation of four hours on duty and eight hours off. Thus, each crew member worked eight hours per day, but the work time was divided into two four-hour shifts. To me, this would be the perfect solution to the resident work hours dilemma.
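
A toy illustration of the arithmetic (not a real scheduling system): three residents on a 4-on/8-off watch bill cover 24 hours, each working eight hours, at the cost of a hand-off at every watch change.

```python
# Three residents (A, B, C) standing 4-hour watches in rotation:
watches = ["A", "B", "C"] * 2                     # six 4-hour watches cover 24 hours
hours = {r: 4 * watches.count(r) for r in "ABC"}  # hours worked per resident
handoffs = len(watches)                           # one hand-off per watch change

print(watches)   # ['A', 'B', 'C', 'A', 'B', 'C']
print(hours)     # {'A': 8, 'B': 8, 'C': 8}
print(handoffs)  # 6 hand-offs per day
```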

I know, you are saying, “But Skeptical Scalpel, wouldn’t that mean six patient hand-offs per day?” Yes, of course it would. But according to the proponents of reduced work hours for residents, hand-offs are not a problem for continuity of care or patient safety. So if two or three hand-offs per day are OK, why not six?

There are a few issues that need to be worked out. For example, surgical residency training would have to be increased to 8 or 9 years in duration. Operations would have to be scheduled carefully to enable a resident to participate from start to finish. All operations would have to last fewer than four hours. Each residency position currently filled by a single individual would require three people. Who is going to pay for that? Well, no one is concerned about who is going to pay for the newly adopted regulations limiting first-year trainees to 16-hour days. Then there are weekends, vacations and holidays, which would mean that extra residents would be needed to cover.

Since I wrote this rather hurriedly, I may have overlooked something. I will give you 45 days to comment and then I will implement these new and improved work hours as stated.

Tuesday, October 12, 2010

Medical Student Whining and Resident Work Hours

For those of you who may not have heard, the Accreditation Council for Graduate Medical Education [ACGME] recently approved further restrictions on the number of hours that residents can work. The rules take effect in July 2011. Many appreciate that the ACGME was forced to do something to at least appear to rein in what has been portrayed as draconian working conditions for trainees, lest Congress or OSHA or the ACLU enact even more onerous rules; even so, the changes were met with mixed responses. Directors of residency training programs were most upset about the rule that restricts first-year residents to a maximum of 16 consecutive hours worked followed by a minimum of 10 hours off.

Even the mathematically challenged can see that 16 + 10 = 26, which will make scheduling interesting since, last time I checked [I love that cliché], a day consists of 24 hours. The new trainees also are mandated to receive more supervision. What is not spelled out is how these new doctors are to learn to work independently the following year, when they will be less supervised and will have to stay awake for 24 hours never having done it before. As a practicing surgeon, I am here to testify that after working a full day, I am often called to see patients in the middle of the night. So far, we don’t have a mandatory 10 hours off, although it wouldn’t shock me if that is on someone’s agenda. Also, someone will have to take care of the patients when the first-year residents go home after 16 hours. Who that will be and how they will be funded is not clear.
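
To spell out why a 26-hour cycle makes scheduling interesting, a quick sketch: an intern who always works 16 hours and then takes exactly 10 off starts two hours later each day, drifting all the way around the clock in under two weeks.

```python
# 16 hours on + 10 hours off = a 26-hour "day" on a 24-hour clock.
start = 6 * 60  # day 1 shift starts at 6:00 a.m. (minutes after midnight)
for day in range(1, 8):
    print(f"day {day}: starts {start // 60:02d}:{start % 60:02d}")
    start = (start + 26 * 60) % (24 * 60)  # each start slips 2 hours later
# day 1: 06:00, day 2: 08:00, day 3: 10:00 ... back to 06:00 on day 13.
```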

The American Medical Student Association [AMSA] thinks the restrictions did not go far enough. "We're going to keep pushing" for stronger limits "because it involves both patient safety and our safety and well-being," Sonia Lazreg, the group's health justice fellow [Wow!*], told The Associated Press. "The fight for safer work hours is not over."

Never mind that the jury is still out on the effect of the current work hours restrictions on patient safety, on whether more frequent “hand-offs” of patients lead to more errors in patient care than tired doctors do, on what the long-term impact of these restrictions will be, and on many other aspects of the issue.

To the AMSA I say, stop whining about work hours. Why did you apply to medical school if you didn’t want to work hard? No one said it was going to be easy. Don’t tell me you didn’t know that doctors work long hours. This reminds me of the type of complaining that people do when they buy a house near an airport and then bitch about the noise. So AMSA members, get over yourselves. If you don’t like it, go to law school.

*(Comment by Skeptical Scalpel, who has applied for a health justice fellowship)

Thursday, October 7, 2010

The “Straw Man” Is Back

A rather breathless posting on Science Daily today extols the virtues of the “scarless,” or single-incision, laparoscopic cholecystectomy compared with the standard four-small-incision technique. Single-incision surgery, also called laparoendoscopic single-site surgery [abbreviated LESS (a catchy acronym is mandatory)], uses one incision in the navel through which the entire surgical dissection and removal of the gallbladder are done. LESS usually cannot be done when the surgery is for an acute gallbladder attack or if the patient has had previous upper abdominal surgery. The study was done at Mt. Sinai Hospital in New York.

According to the article, “The Mt. Sinai group did find two advantages to the LESS procedure: these patients required less pain medicine after the operation than their counterparts who had the traditional minimally invasive operation; and LESS patients typically reported higher satisfaction scores: 4.7 on a scale of 1 to 5 (5 equals highest score) versus 3.6 for the conventional laparoscopic surgery group.”

Available in the abstract of the paper but not reported by Science Daily were the following: the study was retrospective and involved only 26 LESS patients and 50 conventional laparoscopic cholecystectomy patients; 31% of the LESS patients required additional incisions; the LESS patients were significantly younger than the conventional group [average age 37 vs. 49 years]; and follow-up data were unavailable for over half of the conventional group.

The Science Daily piece quotes one of the authors. "What's really exciting is how these patients would recommend the procedure to a friend or family member," Dr. Chin said. "Seventy-four percent of the patients who had the single-incision operation would strongly recommend the procedure to someone else versus 36 percent of those who had laparoscopic surgery."

Here is where the “straw man” is introduced. A “straw man” [see The Skeptic’s Dictionary] is a weak or distorted version of a position, set up so that it can easily be knocked down by one’s own argument. If you believe this article, only 36% of those who had standard four-incision laparoscopic surgery would recommend it to someone else. However, in the early days of laparoscopic cholecystectomy, papers reported patient satisfaction rates of 94-95% after the conventional procedure.

Patients in both groups had obviously undergone only one of the two procedures, making the recommendation data rather difficult to interpret. If 64% of patients who had undergone conventional laparoscopic cholecystectomy would not recommend it to someone else, what then would they recommend? Keep your gallbladder despite the pain? Old-fashioned large-incision open surgery? Suicide?

The straw man is an old friend. It’s good to see that he is still around.

Wednesday, October 6, 2010

Stretching Before Exercise: The Facts

Despite evidence dating back more than a decade that pre-exercise stretching has no value, I continue to observe joggers in my neighborhood and people in the gym going through elaborate stretching routines.

Recent systematic reviews show that stretching before exercise prevents neither soreness nor injury. Regarding soreness, a Cochrane Review looked at 10 studies in young, healthy adults and found no significant difference in muscle soreness up to three days post-exercise between those who stretched before working out and those who did not. Similarly, another Cochrane group reviewed strategies for hamstring injury prevention and noted no difference in injury rates between those who did specific hamstring strengthening exercises or stretching and those who did neither. There is also some evidence that pre-exercise stretching may decrease muscle strength and power.

It appears that a few minutes of warm-up focusing on the same movements that will occur during the period of exercise is sufficient. So please stop with the ritualistic stretching and get on with the exercising.

Skeptical Scalpel’s Guaranteed Weight Loss Program

Every day you must burn more calories than you eat.

Monday, October 4, 2010

Suboptimal Outcomes for Medical School Matriculants

In the annual JAMA education issue of September 15, 2010, Drs. Andriole and Jeffe address the topic “Prematriculation variables associated with suboptimal outcomes for the 1994-1999 cohort of US medical school matriculants.” The paper is a comprehensive and scientifically sound look at which factors existing before medical school enrollment were associated with less than optimal outcomes. Suboptimal outcomes were defined as failure to pass the United States Medical Licensing Examination (USMLE) Step 1 or 2 on the first attempt and withdrawal or dismissal from medical school for academic or non-academic reasons. The study involved over 84,000 matriculants from 1994-1999, with just over 11% falling into the suboptimal outcome category.

Major variables associated with first-time failure of the USMLE or academic withdrawal/dismissal were low Medical College Admission Test scores, Asian or Pacific Islander race, under-represented minority status, and debt of more than $50,000 before entering medical school.

But the most interesting part of this paper is that 178 matriculants who started medical school between 1994 and 1999 had to be excluded from the study because they were still in medical school. In case you don’t get it, this means they had been in medical school for at least 10 years. [Medical school usually takes four years to complete.] I was apparently prescient in my blog post [rant] on medical education of August 10, 2010, in which I marveled that I had once received an application for a residency training position from a student who had been in medical school for 10 years, and I speculated that it must be very difficult to flunk out of medical school. The prematriculation variables study confirms this, reporting that only 1,049 students (1.2%) withdrew or were dismissed from medical school for academic reasons.

To be fair, it is possible that some of the 178 long-term medical students could be taking 10 or more years to finish for reasons other than failure to advance because of academic difficulties. I asked Dr. Dorothy Andriole, the lead author of the study, if she knew why these individuals were in school for so long. She did not have specifics but speculated that “…some students enrolled in dual advanced-degree programs (such as MD/PhD, MD/JD, etc.) may be engaged in research-related or other degree-related activities that can substantially lengthen the time from medical school matriculation to medical school graduation [and] some students, unfortunately, experience very serious, life-threatening medical illnesses personally or within their families and must take a prolonged leave of absence from medical school.”

I hope to see a follow-up article on the fate of those 178 medical students. Maybe it could focus on such issues as how the 10 or more years of tuition were funded, how these people performed on the USMLE, what specialties they eventually wound up in, and how competent they were.

Question: What do they call the person who finishes last in his/her class in medical school?
Answer: “Doctor”