Some members of the medical press may be unaware of how posters are chosen for presentation. In many organizations it works like this: abstracts are submitted in hopes of an oral presentation, which is much more prestigious than simply presenting a poster. An oral presentation requires that the completed paper be submitted to one or more discussants for rigorous peer review before the meeting. Papers rejected for oral presentation are often accepted as posters without any critical review at all.
For example, the Society of Critical Care Medicine (SCCM) has accepted 1,025 posters for its upcoming meeting in January of 2011. The quality of some of the research is quite spotty. One abstract [title available on request] states, “While comparing pre and post [intervention] patients, survival to discharge showed a non-statistical but clinically significant improvement from 29% to 42%. (OR 1.76, 95% CI 0.5-5.9)” This, of course, is a scientifically inaccurate statement: a 95% confidence interval that spans 1.0 means the data cannot rule out no effect, or even harm, so no improvement of any kind has been demonstrated.
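To make the problem concrete, the implied p-value can be recovered from the reported confidence interval alone. The short Python sketch below does this; it assumes only that the interval was computed in the usual way, symmetric on the log-odds scale (the abstract does not say how it was calculated).

```python
import math
from scipy.stats import norm

# Figures quoted in the abstract: OR 1.76, 95% CI 0.5 to 5.9.
odds_ratio = 1.76
ci_low, ci_high = 0.5, 5.9

# A 95% confidence interval for an odds ratio is symmetric on the log scale,
# so its width recovers the standard error of log(OR): width = 2 * 1.96 * SE.
se_log_or = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# Wald z statistic and two-sided p-value for the null hypothesis OR = 1.
z = math.log(odds_ratio) / se_log_or
p = 2 * norm.sf(abs(z))

print(f"SE(log OR) ≈ {se_log_or:.2f}, z ≈ {z:.2f}, p ≈ {p:.2f}")
# The interval straddles 1.0, so the implied p-value is far above 0.05.
```

Under that assumption, the implied p-value comes out around 0.37, nowhere near conventional statistical significance.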
Why do organizations accept all submitted abstracts as posters? I believe it is because doing so significantly increases meeting attendance. At least one author of each of the 1,025 accepted posters will probably attend the SCCM meeting to be present when the poster is briefly discussed at sessions known as “Professor’s Walk Rounds” or similar names.
There is a reward for the authors as well, who can pad their CVs with references to their research as having been “accepted as a poster presentation at SCCM.”
Bottom line: Exercise extreme caution when reporting the results of research presented in a poster.
1 comment:
As a medical journalist, I'm a big fan of poster presentations. I've blogged about this here: http://medmeeting.blogspot.com/2006/12/why-i-love-poster-sessions.html
The Skeptical Scalpel and I have been going back and forth on this on private email for a while. Here's part of what I wrote in one email on Nov. 11, 2010:
I agree with you that in some meetings the best studies are reserved for platform sessions. But in quite a few meetings the platform sessions tend to be CME, and all the new-study action is in the posters. I've probably covered 250 medical meetings in virtually every subspecialty of medicine during my career. While I haven't conducted a systematic study on the proportion of good vs lousy poster sessions, I'm certain that being selected for a platform session is no guarantee of a study's excellence, and conversely, that being selected for a poster session is no guarantee of a study's second-rate status.
Here are a few meetings that I've attended in the last few years in which the poster sessions were particularly fine:
Pediatric Academic Society
NIMH New Clinical Drug Evaluation Unit
International Gynecologic Cancer Society
Society of Critical Care Medicine
Of course the Skeptical Scalpel found a particularly egregious example of an awful poster at the Society of Critical Care Medicine, and wasted no time in pointing this out. In response, on Dec 7, 2010, I wrote:
No doubt many poster presentations are awful. So are many platform presentations, and so, for that matter, are many published, peer reviewed articles. It's the job of the experienced science journalist (or scientist) to look at those 1,025 posters and separate the wheat from the chaff. That includes separating the potentially newsworthy stories from those of little general interest as well as separating out the lousy science from the good.
I come back from a large meeting featuring thousands of presentations with about a dozen or two dozen stories that I judge are both newsworthy and scientifically worthy. That ratio allows me to be highly selective. I'm guessing that the average physician at that same meeting will return home retaining a similar proportion of studies in his/her memory.
Your howler (clinically significant but not statistically significant? It's usually the other way around!) reminds me of an incident I witnessed at one large medical meeting, which shall remain nameless. The society chose to feature one of the platform presentations at a news conference, which I attended. The investigator based her recommendation for a major change in public health policy on a result that was not quite statistically significant, but which showed "a trend to significance" (p=0.08, if I recall correctly). I asked her (and the moderator) why they chose to feature this presentation, given the lack of statistical significance. They both argued that this near-significant result was important, since given just a few more subjects in the study, they would have reached statistical significance. But of course there was an equal chance that additional subjects would have pushed the result further from statistical significance. Come to think of it, given the principle of "regression to the mean," there's probably a greater chance that more subjects would have rendered the results less statistically significant!
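The investigator's reasoning can be tested directly. The Python sketch below uses hypothetical counts (chosen only so the starting two-sided p-value lands near 0.08) and simulates adding subjects under two assumptions: that the observed difference is the true effect, and that there is no true effect at all. Which way the p-value is likely to move depends entirely on the unknown truth, which is precisely the point.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def two_sided_p(a_events, a_n, b_events, b_n):
    """Two-sided p-value for a difference in proportions (chi-square test, no continuity correction)."""
    table = [[a_events, a_n - a_events], [b_events, b_n - b_events]]
    return chi2_contingency(table, correction=False)[1]  # second element is the p-value

# Hypothetical starting counts, chosen only so the initial p-value lands near 0.08.
n = 100                      # subjects per arm in the original (hypothetical) study
a_events, b_events = 30, 42  # e.g. events in the control vs. intervention arm
p_initial = two_sided_p(a_events, n, b_events, n)
print(f"initial p-value: {p_initial:.3f}")  # prints a value near 0.08 for these counts

def extend_study(true_rate_a, true_rate_b, extra=30, trials=5_000):
    """Add `extra` subjects per arm under assumed true event rates; count how often p drops below 0.05."""
    crossed = 0
    for _ in range(trials):
        new_a = a_events + rng.binomial(extra, true_rate_a)
        new_b = b_events + rng.binomial(extra, true_rate_b)
        if two_sided_p(new_a, n + extra, new_b, n + extra) < 0.05:
            crossed += 1
    return crossed / trials

# Scenario 1: the observed difference is real (true rates equal the observed rates).
print("reaches p < 0.05 if the observed effect is real:",
      f"{extend_study(a_events / n, b_events / n):.0%} of simulated extensions")

# Scenario 2: there is no true difference (new subjects in both arms share the pooled rate).
pooled = (a_events + b_events) / (2 * n)
print("reaches p < 0.05 if there is no true effect:",
      f"{extend_study(pooled, pooled):.0%} of simulated extensions")
```

The extension tends to reach significance only when the new subjects are drawn from a world in which the observed effect is real; under no true effect, the p-value typically drifts upward, so "just a few more subjects" guarantees nothing.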
Bottom line: Exercise extreme caution when reporting the results of any research.