A new study of e-cigarettes’ efficacy in smoking cessation has not only pitted some of vaping’s most outspoken scientific supporters against some of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic and those – including policy-makers – who must interpret their work.
The furore has erupted over a paper published in The Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, along with a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: put simply, to determine whether use of e-cigs is correlated with success in quitting, which might well imply that vaping helps smokers give up. To do this they performed a meta-analysis of 20 previously published papers. That is, they didn’t conduct any new research directly on actual smokers or vapers, but rather attempted to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted way of extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as by the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not just ineffective in smoking cessation, but actively counterproductive.
The result has, predictably, been uproar from the supporters of e-cigarettes within the scientific and public health community, especially in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, calling the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the U.S., who wrote “it is obvious that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system in this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and probably incorrect”. Ann McNeill, professor of tobacco addiction in the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies I co-authored is either inaccurate or misleading”.
But what, exactly, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer some of that question, it’s necessary to go beneath the headline-grabbing 28%, and look at what was studied and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, the results of which should be much less susceptible to any distortions that might have crept into an individual investigation?
(This might happen, for example, by inadvertently selecting participants with a greater or lesser propensity to quit smoking, thanks to some factor not considered by the researchers – a case of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than simply averaging the totals, but that’s the general idea. And even from that simplistic outline, it’s immediately apparent where problems can arise.
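To make that “more sophisticated than averaging” concrete, here is a minimal sketch of the kind of pooling a meta-analysis typically performs: a fixed-effect, inverse-variance combination of odds ratios. The study figures below are entirely made up for illustration – they are not the numbers from the Kalkhoran/Glantz paper or any real study.

```python
import math

# Hypothetical inputs: three small studies, each reporting an odds ratio (OR)
# for quitting among vapers vs non-vapers, plus the standard error (SE) of
# the log odds ratio. Illustrative values only, not from any real paper.
studies = [
    (0.8, 0.25),  # (OR, SE of log OR)
    (0.6, 0.30),
    (0.9, 0.20),
]

# Fixed-effect inverse-variance pooling: each study's log OR is weighted by
# 1/SE^2, so more precise studies count for more than imprecise ones.
weights = [1 / se ** 2 for _, se in studies]
log_ors = [math.log(or_) for or_, _ in studies]
pooled_log_or = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
pooled_or = math.exp(pooled_log_or)

print(f"Pooled odds ratio: {pooled_or:.2f}")
```

A pooled odds ratio of 0.72 is what gets reported as vapers being “28% less likely” to quit. Note that this simple fixed-effect model assumes all studies estimate the same underlying effect – exactly the assumption that critics of mixing heterogeneous studies object to.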
If its results are to be meaningful, the meta-analysis has to somehow take account of variations in the design of the individual studies (they might define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that some of them don’t fit, it’s introducing its own distortions.
Moreover, if the studies it’s based on are inherently flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those same flaws.
That is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which generally takes an unfriendly view of e-cigarettes, about a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s request for comments on its proposed e-cigarette regulation, the Truth Initiative noted that it had reviewed many studies of e-cigs’ role in cessation and concluded that they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of these have been included in a meta-analysis [Glantz’s] that claims to show that smokers who use e-cigarettes are less likely to quit smoking compared to those who do not. This meta-analysis simply lumps together the errors of inference from these correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly, don’t mix apples with oranges and expect to get an apple pie.
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in The Lancet Respiratory Medicine – the same journal that published this year’s Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often with no control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies simply do not exist yet”.
So a meta-analysis can only be as good as the studies it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at the very least, if any differences are carefully compensated for. Of course, such caveats also apply to meta-analyses which are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions posed by the San Francisco researchers and the ways they tried to answer them.
One frequently-expressed concern is that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts started. Thus, the analysis by its nature excluded those who had taken up vaping and rapidly quit smoking; if such people exist in large numbers, counting them would have made e-cigarettes look a much more successful route to quitting smoking.
Another question was raised by Yale’s Bernstein, who noted that not all vapers who smoke want to give up combustibles. Obviously, people who aren’t trying to quit won’t quit, and Bernstein observed that if these people had been excluded from the data, it might have suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some who did manage to quit – and including those who have no intention of quitting anyway – would certainly seem to affect the results of a study purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether the study population consisted only of smokers interested in smoking cessation, or all smokers”.
But there is also a further, slightly murky area which affects much science – not just meta-analyses, and not just these particular researchers’ work – and which, importantly, is often overlooked in media reporting, as well as by institutions’ public relations departments.