Much of medical science exists to tell us if a treatment (e.g. a drug or dietary approach) has benefits, and what the risks of this treatment might be. Often in studies, a drug being tested is pitted against a placebo (inactive) treatment. Results of these experiments help discern if the drug in question has any genuine benefit, and as long as the drug is deemed ‘safe’, the study will generally be seen as evidence that the drug has merit.
However, one swallow does not make a summer. Because the results of studies can vary according to a variety of factors, including the characteristics of the study population, dosage of treatment and length of treatment, it is generally a good idea for the overall effectiveness of a drug to be assessed in the form of ‘meta-analyses’. What this means, in essence, is lumping together several similar studies. Generally speaking, meta-analyses are believed to be a very good overall judge of the effects of a drug. Meta-analyses and what are known as systematic reviews (thorough discussions of the available evidence) are often used as the basis for health policy.
This all looks fine, except that any systematic review or meta-analysis can only be as good as the evidence assessed by it. In particular, it depends on that evidence being complete. You see, historically, the results of studies have not always found their way into the scientific literature.
Imagine you’re a drug company and you conduct a study and the results appear to make your drug look good. The chances are, you’re going to want to have the study published in a high-profile medical journal and announce it with all the PR muscle you can muster. What do you do if the results are not so good, though? In the past, an approach might be to simply ‘bury’ the study. This practice of publishing selectively is known as ‘publication bias’.
This week, an article in the British Medical Journal reminds us of the issue of publication bias, and the fact that “when important evidence is unavailable the conclusions reached by these research summaries may be wrong.” The article cites a couple of pieces of research which appear to show that failure to publish data or selective reporting has led to there being a skewed view of the value of anti-depressant medication. One of these pieces of research I reported on myself back in 2008.
In this study, researchers assessed a total of 74 studies on antidepressant therapy that had been registered with the FDA (Food and Drug Administration) in the USA. Some of these studies had been published, but many had not. The researchers obtained the unpublished studies via various means, including invoking the Freedom of Information Act.
Analysing the 74 studies, the researchers found that:
38 had positive results, and all but one of these had been published.
36 had negative results, and 22 of these had not been published at all.
Of the 36 negative studies, 11 had been published, but in a way that conveyed a positive outcome (this is not ‘publication bias’ by the way, just plain ‘bias’).
This meant that of all the published studies, 94 per cent appeared to have positive findings.
However, FDA analysis revealed that when all trials were taken into consideration, in actuality only about half were positive.
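Purely as a sanity check, the arithmetic behind those two percentages can be sketched as follows. The subgroup counts here are my reading of the figures above (e.g. that the 36 negative trials split into 22 unpublished, 11 ‘spun’ as positive, and 3 published as negative), not numbers taken directly from the paper:

```python
# Back-of-the-envelope check of the trial counts quoted above.
# Assumed breakdown, inferred from the text rather than the paper itself.
total = 74
positive = 38                  # trials the FDA judged positive
negative = total - positive    # 36 trials judged negative

pub_positive = positive - 1    # all but one positive trial was published
neg_unpublished = 22           # negative trials never published
neg_spun = 11                  # negative trials published as if positive
neg_pub_as_negative = negative - neg_unpublished - neg_spun  # the remaining 3

published = pub_positive + neg_spun + neg_pub_as_negative    # 51 published trials
apparent_positive = pub_positive + neg_spun                  # 48 look positive in print

print(f"Apparent success rate in the literature: {apparent_positive / published:.0%}")
print(f"Actual success rate across all trials: {positive / total:.0%}")
# → roughly 94% apparent versus about 51% actual
```

So a reader of the journals alone would see a drug class that ‘works’ 94 per cent of the time, when the full FDA record puts it at about half.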
Relatively new publishing requirements in the US and Europe (which essentially mean researchers need to register research prior to completion for it to be considered for publication in major medical journals) are likely doing much to reduce publication bias. However, as the BMJ piece points out, these new rules can do nothing to ensure the existing evidence base is trustworthy.
So, the BMJ has come up with a plan: Later this year, it’s planning a theme publication that will be devoted to this issue. It’s calling on researchers to submit, among other things, reviews that include known and previously unknown data. Of course, as the piece points out, the addition of previously unseen or unused data may change nothing.
However, there is the possibility that, as we’ve seen with antidepressants, seeing the totality of the evidence reveals a quite different picture from the ‘sanitised’ version vested interests would have us see. The BMJ theme issue is due in early December, and it should make for interesting reading. I think.
1. Loder E. A theme issue in 2011 on unpublished evidence. BMJ 2011; 342:d262
2. Turner EH, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358(3):252-60