Evidence-Based Medicine: Three Flaws and Three Solutions
More and more, decisions about which medications are recommended to people with bipolar disorder come from ranked lists produced by what is called “evidence-based medicine”. Evidence-based medicine, or EBM, relies on aggregating data in what are called “meta-analyses” in order to determine the best treatment for a given condition.
In principle, this looks like a good idea. After all, proponents argue, if we aren’t basing medical decisions on evidence, then we’re basing them on “hunches”. Some have objected that this is a false dilemma resting on a simplistic epistemology. I don’t disagree.
However, that is not the approach I intend to take in this article. Instead, I will argue that EBM and the meta-analyses on which it depends are bad science. That is, meta-analyses violate standard scientific methodology in substantial ways, each of which calls into question the usefulness of the method itself.

The problem for people with bipolar disorder (and for any patient, really) is that these flaws make EBM remarkably easy to manipulate so as to promote one course of treatment over another, provided one has the money to do so. This is, frankly, dangerous. Without a proper understanding of the problems with these methods, it is impossible to see just how easily they can be, and have been, manipulated to produce malleable results.
I don’t want to be merely critical, though. In each case, I will make some recommendations as to how the problem can be corrected, or at least minimized.
Problem #1: Unpublished Studies
Last week, the Washington Post covered a story about four studies in which the drugs Abilify and Geodon failed to achieve better results than either placebo or other, less expensive medications. The problem, however, was that these studies were never published. The FDA requires that all data, whether published or not, be submitted, so the data are available on its website. There is, however, no requirement to publish. If someone does a study and doesn’t like the result, they may simply choose not to publish the paper.
Meta-analyses, however, draw only on published studies when comparing data. Unlike the FDA’s database, the EBM canon never sees data that goes unpublished.
So, what does this mean for EBM? It means that companies can simply cherry-pick the studies they like, publish only those, and the meta-analyses will aggregate that cherry-picked data into a conclusion about overall effectiveness.
This, however, is terrible science. No individual study is allowed to cherry-pick the data it likes; in fact, that would be considered falsification of data. When combining studies, however, people don’t seem to recognize this as a problem. It is. Combining all the studies, or even a random sample of studies, would be appropriate, but allowing anyone at all to decide which studies will be included undermines the entire methodology. It allows anyone who has run enough studies to manipulate the results.
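To make the mechanism concrete, here is a minimal sketch in Python using entirely invented trial numbers. Nothing here is real data; it only illustrates the arithmetic of selective publication.

```python
import statistics

# Hypothetical effect sizes (drug minus placebo) from eight imagined trials.
# Positive numbers favour the drug; all values are illustrative only.
all_trials = [0.40, 0.35, 0.05, -0.10, 0.30, -0.05, 0.00, 0.45]

# Suppose the sponsor publishes only the four trials with the strongest results.
published_only = sorted(all_trials, reverse=True)[:4]

print("Pooled effect, all trials:      ", round(statistics.mean(all_trials), 3))
print("Pooled effect, published trials:", round(statistics.mean(published_only), 3))
# The 'published only' pool looks far more impressive, even though the drug
# and the underlying data have not changed at all.
```

The drug hasn’t changed; only the subset of studies allowed into the pool has.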
Proposed Solution: There is no good reason why meta-analyses should not include all the data provided to the FDA, rather than only the studies their sponsors have chosen to publish. The FDA insists on receiving this data for good reason. Peer review can be an important quality control, but it does more harm than good when researchers are allowed to withhold data from peer review precisely so that it will be excluded from meta-analyses. The FDA, for its part, should make this data more easily accessible so as to facilitate its inclusion.
Problem #2: Lack of Proper Controls
Imagine you are putting together an experiment to test the relative effectiveness of two medications against each other. For one of the medications, you run a detailed experiment, noting all the effects and side effects. For the other, you run a careless experiment, recording conclusions poorly and lacking rigor. Now, what can we conclude from this study about the relative effectiveness of these two medications?

If you answered, “Absolutely nothing,” you are correct. To properly study the relative effectiveness of two arms of a trial, both arms must be subjected to the same amount and type of testing. Any individual trial that didn’t do this would never see the light of day.
However, this is exactly how meta-analyses work. Their goal is to compare the relative effectiveness of certain treatments, and yet they don’t ensure that the same degree of testing has actually been done on each arm of what they purport to be comparing.
Rather, they create a situation that is extremely manipulable. If good evidence exists on one arm and little on the other, they will recommend the arm with the better studies. This means that anyone willing to invest money in increasing the amount of evidence for one arm of a meta-analysis can effectively alter its conclusion. It creates the ability to buy evidence. In fact, it is this hole in the method that drug manufacturers are driving their proverbial truck through.
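Here is a small illustration of how this plays out, again with invented numbers: assume two drugs with the same modest true benefit, one backed by ten sponsored trials and the other by only two small ones. Under standard inverse-variance pooling, only the heavily studied drug clears the significance bar.

```python
import math

def pooled(effects, standard_errors):
    """Inverse-variance (fixed-effect) pooling of trial results."""
    weights = [1 / s**2 for s in standard_errors]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return est, pooled_se

# Hypothetical: both drugs have roughly the same true benefit (~0.2),
# but Drug A's sponsor has funded five times as many trials.
drug_a = pooled([0.22, 0.18, 0.21, 0.19, 0.20, 0.23, 0.17, 0.20, 0.21, 0.19],
                [0.10] * 10)
drug_b = pooled([0.20, 0.21], [0.15] * 2)  # only two small, noisier trials

for name, (est, se) in [("Drug A", drug_a), ("Drug B", drug_b)]:
    lo, hi = est - 1.96 * se, est + 1.96 * se
    verdict = "significant" if lo > 0 else "inconclusive"
    print(f"{name}: effect {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}] -> {verdict}")

# Drug A 'wins' the comparison simply because more evidence was bought for it,
# not because it works any better.
```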
Proposed Solution: There aren’t many solutions here that aren’t expensive, but some can at least offset the worst of the problem. A successful medication can earn billions of dollars. Proponents of EBM should insist that if a company wants its medication compared against other treatments, it actually test it against those treatments. Too many meta-analyses end with “more research needs to be done on x.” And I’d like a pony. Insist that those designing studies for comparison actually do the comparison themselves.
Problem #3: You Can’t Get Practical Conclusions From Scientific Data, Anyway
The idea that one can come up with ranked recommendations from scientific data involves a serious misunderstanding of the limits of the scientific method. Yes, if a medication has exactly one positive effect and at most one negative side effect, then you can rank the options. In psychiatric medication, however, this almost never happens.

Instead, you have drugs like Seroquel that can cause weight gain and sleepiness, drugs like lithium that can potentially cause kidney damage, drugs like valproic acid that can cause cognitive side effects, and SSRIs that can potentially trigger mania or sexual side effects. There are also different levels of efficacy for depression, hypomania and psychosis.
There is simply no scale to weigh side effects against each other, except against the kind of life that people want to live. No scale gleaned from values, however, will ever be a scientific scale. Moreover, these scales will vary from person to person. Some people might worry more about weight gain, others might worry more about cognition, while others may worry about sexual performance. A celibate university professor will have quite different goals from a shift worker trying to conceive a child.
Worse, any attempt to rank medications is going to smuggle in a system of values. And whose do you think it will be? The author’s, of course. A study in the British Medical Journal looked at 53 meta-analyses from a single issue of the Cochrane Review, the leading meta-analysis journal. They concluded that “The evidence did not fully support the conclusion” in 17% of the reviews.
This is an important result, but it actually misses the point. In 17% of the studies, the authors of the BMJ study had a different set of values in mind than the authors of the meta-analyses. This is unsurprising. I disagree with most people at least 17% of the time. We shouldn’t expect that there will be a single scale for ranking side effects, and any assumption that there could be is an assumption that people share the same goals.
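To see how much a “ranking” depends on the chosen weights, consider this toy sketch with invented severity scores for three hypothetical drugs. The only thing that changes between the two runs is whose priorities set the weights, yet the top-ranked medication flips.

```python
# Hypothetical scores (0 = none, 10 = severe/strong) for three made-up drugs.
# All numbers are illustrative, not clinical data.
meds = {
    "Drug X": {"efficacy": 7, "weight_gain": 8, "cognitive": 2, "sexual": 1},
    "Drug Y": {"efficacy": 6, "weight_gain": 1, "cognitive": 7, "sexual": 2},
    "Drug Z": {"efficacy": 5, "weight_gain": 2, "cognitive": 1, "sexual": 8},
}

def rank(meds, weights):
    """Score = efficacy minus a weighted sum of side-effect burdens."""
    def score(profile):
        return profile["efficacy"] - sum(
            weights[k] * profile[k] for k in weights)
    return sorted(meds, key=lambda name: score(meds[name]), reverse=True)

# Patient A cares most about weight gain; Patient B cares most about cognition.
print(rank(meds, {"weight_gain": 1.0, "cognitive": 0.2, "sexual": 0.2}))
print(rank(meds, {"weight_gain": 0.2, "cognitive": 1.0, "sexual": 0.2}))
# The 'best' medication flips depending on whose values set the weights.
```

There is no scientific fact of the matter about which weight vector is correct; that choice belongs to the patient.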
Proposed Solution: While ranking medications relative to a single outcome is legitimate, no one should then try to weigh positive outcomes against side effects to generate a single ranking. Such weighing smuggles in a value system. Rather, medications and their side effects should simply be listed. This allows patients to weigh the outcomes themselves against their own goals.
Conclusion
Evidence-based medicine is based on flawed methodology. Scientific studies themselves can provide important results, but any attempt to aggregate them opens the conclusions up to publication bias, sloppy controls and smuggled-in values. This is not to say that evidence-based medicine is hopeless, and I have presented some suggestions as to how it can improve. However, so long as it has the kind of methodological flaws that it currently has, it will serve as a malleable tool by which medical practice can be manipulated.