Rory Morrison · @rorymorr

28th Jun 2013 from TwitLonger

As I don't have a blog, a wee analysis on TwitLonger of @ProfGlantz's comments on the Polosa #ecig trial in #PLOS.

1) The lack of a zero-intervention control group, which he says would in general 'bias the study in favor of the treatment.'

There is some truth in this, but only in the sense that it would have been useful to actually have a group that received zero intervention. The absence of one would only bias the study in favour of e-cigs with nicotine if the no-nicotine e-cigs actually made quit outcomes *worse* than no intervention at all, i.e. if the people in the study receiving these devices (who didn't want to quit, remember) would have been more successful at quitting without them.

Given that some 4% of the 'no nicotine' group were abstinent at the end of the study, and that we'd expect quit rates in the whole 'untreated' population to be around 3-5%, this seems like a bit of a long shot. It would have been useful to know, yes, but hey, no single study can tell you everything; 'more research & comparative trials needed', etc.
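That back-of-the-envelope comparison can be sanity-checked with an exact binomial calculation. The arm size and background rate below are illustrative assumptions, not the trial's actual figures:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(k, n + 1))

# Hypothetical arm size (NOT the trial's actual n): 4 quitters out of 100,
# against an assumed 4% background quit rate in untreated smokers.
p = binom_sf(4, 100, 0.04)
# p is large (well over 0.5), i.e. 4/100 quitters is entirely
# unremarkable if the background rate really is around 4%.
```

In other words, the no-nicotine arm's quit rate sits comfortably inside what we'd expect with no intervention at all, which is why 'the control arm made things worse' is a long shot.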

2) Fiddly statistics stuff

The Yates correction Prof Glantz talks about is definitely not 'required' in this situation, and there is no consensus among statisticians that it is an appropriate thing to do in many/most cases, as it is known to return results that are too conservative (that is, p-values that are too high, risking a failure to detect an effect that truly exists).

This is really quite well-established, and has been for some time, so it is surprising that Prof Glantz is apparently bullish about mandating its use.
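To make the Yates point concrete, here is a minimal pure-Python sketch of the 2x2 chi-squared test with and without the continuity correction. The table counts are hypothetical, chosen only for illustration, not taken from the trial:

```python
from math import erfc, sqrt

def chi2_2x2(table, yates=False):
    """Pearson chi-squared test on a 2x2 table, optionally with the
    Yates continuity correction; returns (statistic, p-value).
    With 1 degree of freedom the chi-squared survival function
    reduces to erfc(sqrt(x / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    stat = 0.0
    for i, obs in enumerate([a, b, c, d]):
        exp = rows[i // 2] * cols[i % 2] / n
        diff = abs(obs - exp)
        if yates:
            diff = max(diff - 0.5, 0.0)  # the continuity correction
        stat += diff * diff / exp
    return stat, erfc(sqrt(stat / 2))

# Hypothetical counts (NOT the trial's data): [quit, didn't quit] per arm.
table = [[9, 91], [4, 96]]
stat_plain, p_plain = chi2_2x2(table)
stat_yates, p_yates = chi2_2x2(table, yates=True)
# The correction shrinks the statistic, so the Yates p-value is
# never smaller than the uncorrected one - that is the conservatism.
```

Running both versions on any 2x2 table shows the same pattern: the correction can only push the p-value upwards, which is exactly how an analysis gets nudged from one side of 0.05 to the other.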

Running a Fisher's exact test on the same 2x2 table of data (an alternative approach which gives an exact probability, rather than the approximation offered by chi-squared with or without the Yates correction) gives a p-value of 0.04971 - hurrah, still 'significant'!
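For the curious, a two-sided Fisher's exact test can be sketched in a few lines of pure Python: fix the table's margins, then sum the hypergeometric probabilities of every table no more likely than the one observed. Again, the counts below are illustrative placeholders, not the study's data:

```python
from math import comb

def fisher_exact_2x2(table):
    """Two-sided Fisher's exact test on a 2x2 table of counts."""
    (a, b), (c, d) = table
    r1, c1, n = a + b, a + c, a + b + c + d

    def hyper(x):
        # P(top-left cell = x) under fixed margins (hypergeometric)
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)

    p_obs = hyper(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    # Sum every table at least as 'extreme' (no more probable) as observed;
    # the tiny tolerance guards against floating-point ties.
    return sum(hyper(x) for x in range(lo, hi + 1)
               if hyper(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_2x2([[9, 91], [4, 96]])  # hypothetical counts
```

Because it enumerates the exact sampling distribution rather than approximating it, there is no continuity correction to argue about in the first place.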

But in any case this is all a little silly, as p<0.05 isn't a magic threshold above which we stop caring about the results (it's just an arbitrary convention), and making slight analytical adjustments to nudge the value just above 0.05 into 'non-significance', or just below it into 'significance', doesn't really alter how we should view the study's main conclusions. Smaller p-values provide stronger evidence against the null hypothesis of 'no difference between groups'; larger p-values simply indicate a lack of evidence of a difference, given the data observed.

A fair reading would be that this study provides some evidence of an effect, and shouldn't be discounted just because we can adopt different analytical perspectives (some of which are inappropriate) and shift the resultant p-values in relatively small ways. The study has lots of limitations, but that doesn't mean it provides no useful information and should be dismissed (which I think is the danger in following Prof Glantz's line of argument).

(PS. there are other issues such as the use of multiple testing... which I'll ignore just now for brevity and under the excuse the study explicitly says it is an exploratory proof-of-concept, so a lot of these approaches can be argued to be permissible in this context.)
