So far, no one has addressed the “science” of the paper, the full text of which is available free online, so I’ll have a go at it. Dr. Nutt and colleagues devised their own rating system for harm, which has not been validated by others. It rates 16 “harm criteria” on a scale from 1 to 100, but the methods section does not explain how those criteria were chosen or how the ratings were assigned. It seems arbitrary, to say the least. The third paragraph of the description of how the scores were weighted is incomprehensible to me. Here it is in its entirety:
“During the decision conference participants assessed weights within each cluster of criteria. The criterion within a cluster judged to be associated with the largest swing weight was assigned an arbitrary score of 100. Then, each swing on the remaining criteria in the cluster was judged by the group compared with the 100 score, in terms of a ratio. For example, in the cluster of four criteria under the category physical harm to users, the swing weight for drug-related mortality was judged to be the largest difference of the four, so it was given a weight of 100. The group judged the next largest swing in harm to be in drug-specific mortality, which was 80% as great as for drug-related mortality, so it was given a weight of 80. Thus, the computer multiplied the scores for all the drugs on the drug-related mortality scale by 0.8, with the result that the weighted harm of heroin on this scale became 80 as compared with heroin's score of 100 on drug-specific mortality. Next, the 100-weighted swings in each cluster were compared with each other, with the most harmful drug on the most harmful criterion to users compared with the most harmful drug on the most harmful criterion to others. The result of assessing these weights was that the units of harm on all scales were equated. A final normalisation [sic] preserved the ratios of all weights, but ensured that the weights on the criteria summed to 1.0. The weighting process enabled harm scores to be combined within any grouping simply by adding their weighted scores. Dodgson and colleagues provide further guidance on swing weighting. Scores and weights were input to the Hiview computer program, which calculated the weighted scores, provided displays of the results, and enabled sensitivity analyses to be done.”
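As best I can parse it, the quoted passage describes standard swing weighting from multi-criteria decision analysis. Here is a minimal sketch in Python of what I think they did, using the two criterion names from the quote but entirely invented scores and drugs (these are not the paper's data, and the quote's own worked example does not help, since it first assigns the 100 weight to drug-related mortality and then multiplies that same scale by 0.8):

```python
# Hypothetical illustration of the swing-weighting procedure described in the
# quoted methods text. All numbers below are invented for illustration only.

# Raw 0-100 scores for two criteria in the "physical harm to users" cluster.
raw_scores = {
    "heroin":  {"drug_related_mortality": 100, "drug_specific_mortality": 90},
    "alcohol": {"drug_related_mortality": 60,  "drug_specific_mortality": 40},
}

# Swing weights judged by the group: the criterion with the largest swing gets
# an arbitrary 100; the others are judged as a ratio of that swing.
swing_weights = {"drug_related_mortality": 100, "drug_specific_mortality": 80}

# "A final normalisation preserved the ratios of all weights, but ensured that
# the weights on the criteria summed to 1.0."
total = sum(swing_weights.values())
weights = {c: w / total for c, w in swing_weights.items()}

# "The weighting process enabled harm scores to be combined within any
# grouping simply by adding their weighted scores."
def overall_harm(drug):
    return sum(weights[c] * raw_scores[drug][c] for c in weights)

print(overall_harm("heroin"))
print(overall_harm("alcohol"))
```

Mechanically this is just a weighted average, which makes the key question exactly the one the methods section never answers: where the judged ratios (the 80% figure and its kin) come from.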
The figures are equally bizarre, especially figure 3. The paper also does not allow for variations in the amount of drug used. For example, is one alcoholic drink per day a problem?
Alcohol abuse can certainly be quite harmful, but what about the recent research showing that 1 to 2 drinks per day improves health? The way this story was reported might actually be detrimental: it could cause social drinkers to stop, thereby possibly harming themselves.
Honestly, I am surprised that The Lancet would publish a paper such as this. It’s not April 1, so I don’t think it’s a joke. Did they just want to be controversial or get publicity? If so, I guess it worked. And what about the media? Should they just report these press releases without question? Widespread uncritical dissemination of junk science like this gives it unwarranted credibility.