A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against one of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic and those – including policy-makers – who must interpret their work.
The furore has erupted over a paper published in The Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, along with a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is in fact named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to evaluate the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: quite simply, to discover whether use of e-cigs is correlated with success in quitting, which might well imply that vaping can help you stop smoking. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t conduct any new research on actual smokers or vapers, but instead attempted to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted approach to extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which suggests that vaping is not only ineffective in smoking cessation, but actually counterproductive.
The result has, predictably, been uproar from the supporters of e-cigarettes in the scientific and public health community, particularly in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, who called the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the United States, who wrote that “it is apparent that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system in this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and often incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies that I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer at least part of that question, it’s necessary to go beneath the sensational 28% and examine what was studied, and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, whose results should be much less susceptible to any distortions that may have crept into an individual investigation?
(This could happen, for example, through inadvertently selecting participants with a greater or lesser propensity to quit smoking due to some factor not considered by the researchers – an instance of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than simply averaging the totals, but that’s the general concept. And even from that simplistic outline, it’s immediately apparent where problems can arise.
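In practice the combination is usually done by weighting each study by its precision rather than by simple averaging. The sketch below is a minimal illustration of that idea – a fixed-effect pooling of odds ratios, weighting each study by the inverse of its variance. The numbers are invented for illustration; they are not figures from the Kalkhoran/Glantz paper or the 20 studies it drew on.

```python
import math

# Hypothetical per-study results: odds ratio of quitting (vapers vs
# non-vapers) with a 95% confidence interval. Invented numbers.
studies = [
    {"or": 0.61, "ci": (0.40, 0.93)},
    {"or": 0.85, "ci": (0.55, 1.31)},
    {"or": 0.70, "ci": (0.45, 1.09)},
]

def pooled_odds_ratio(studies):
    """Fixed-effect (inverse-variance) pooling on the log-odds scale."""
    num = den = 0.0
    for s in studies:
        log_or = math.log(s["or"])
        lo, hi = s["ci"]
        # Recover the standard error from the width of the 95% CI.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # more precise studies get more weight
        num += w * log_or
        den += w
    return math.exp(num / den)

print(round(pooled_odds_ratio(studies), 2))  # prints 0.71
```

A real meta-analysis would go further – random-effects models, confidence intervals on the pooled estimate – but the weighting principle is the same.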
If its results are to be meaningful, the meta-analysis has to somehow take account of variations in the design of the individual studies (they may define “smoking cessation” differently, for instance). If it ignores those variations, and attempts to shoehorn all the results into a model that many of them don’t fit, it introduces its own distortions.
Moreover, if the studies it’s based on are inherently flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
That is a charge made by the Truth Initiative, a US anti-smoking nonprofit which generally takes an unsympathetic view of e-cigarettes, about a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the US Food and Drug Administration (FDA), responding to that federal agency’s request for comments on its proposed e-cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded that they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of these have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking than those who do not. This meta-analysis simply lumps together the errors of inference from the correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly: don’t mix apples with oranges and expect to get an apple pie.
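That heterogeneity objection can itself be measured. Statisticians conventionally use Cochran’s Q and the derived I² statistic to gauge how much the studies in a pool disagree beyond what chance would explain. The sketch below, again with invented numbers for three deliberately inconsistent hypothetical studies, shows the calculation:

```python
# Invented log-odds-ratio estimates and standard errors for three
# hypothetical studies -- deliberately inconsistent with one another.
log_ors = [-0.7, 0.3, -0.1]
ses = [0.15, 0.20, 0.18]

weights = [1 / se ** 2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

# Cochran's Q: weighted squared deviation of each study from the pooled
# estimate. Under homogeneity it follows a chi-square with k-1 df.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))

# I^2: the share of total variation due to genuine between-study
# differences rather than sampling error (0% = fully homogeneous).
k = len(log_ors)
i_squared = max(0.0, (q - (k - 1)) / q) * 100

print(f"Q = {q:.1f}, I^2 = {i_squared:.0f}%")  # prints Q = 17.2, I^2 = 88%
```

An I² this high is a warning sign that the studies are not estimating the same underlying effect – which is essentially the Truth Initiative’s apples-and-oranges point in numerical form.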
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in The Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often with no control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies just do not exist yet”.
So a meta-analysis can only be as good as the research it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at the very least, if any differences are carefully compensated for. Of course, such drawbacks also apply to meta-analyses that are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions asked by the San Francisco researchers and the ways they attempted to answer them.
One frequently expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts started. Thus, the research by its nature excluded those who had taken up vaping and quickly given up smoking; if such people exist in large numbers, counting them would have made e-cigarettes seem a far more successful route to quitting smoking.
A different question was raised by Yale’s Bernstein, who observed that not all vapers who smoke want to quit combustibles. Naturally, those who aren’t trying to quit won’t quit, and Bernstein noted that when these people were excluded from the data, it suggested “no effect of e-cigarettes, not that electronic cigarette users were less likely to quit”.
Excluding some people who did manage to quit – while including people who have no intention of quitting anyway – would certainly seem likely to affect the outcome of a study purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether the analysis population consisted only of smokers interested in quitting smoking, or all smokers”.
But there is also a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these particular researchers’ work – and which, importantly, is often overlooked in media reporting, as well as by institutions’ PR departments.