
Our future, our universe, and other weighty topics


Friday, November 25, 2016

How Not To Do a Meta-Analysis

I have no idea whether the esoteric practice known as homeopathy has any medical effectiveness, and this will certainly not be a post intended to persuade you to use such a practice. I will be examining instead the unfairness, methodological blunders, and sanctimonious hypocrisy of an official committee convened to convince you that homeopathy is unworthy of any attention. Committees such as this are part of the reality filter of materialists, the various things they use to try to cancel out, disparage, or divert attention from the large number of human observations inconsistent with their worldview.
 
[Image: the materialist "reality filter"]
 
The committee in question called itself the Homeopathy Working Committee, and it issued its meta-analysis report on homeopathy in 2015. It was sponsored by or part of the National Health and Medical Research Council (NHMRC), an Australian body, and it consisted of six professors and one person who was identified merely as a consumer.

On page 14 of its report, the committee makes this confession: “NHMRC did not consider observational studies, individual experiences and testimonials, case series and reports, or research that was not done using standard methods.” How unfair is that? Under such a rule, if a committee investigating a medical technique received a million letters saying the technique produced instantaneous and permanent cures of dire maladies, the committee would simply discard all such letters and not let them influence its conclusion.

The committee limited itself to scientific studies, but it did not simply consider all of the scientific studies on homeopathy. Instead, the committee chose to disregard the vast majority of them, and to consider only a small subset. This is made clear by a Smithsonian article on the committee's report, which says, “After assessing more than 1,800 studies on homeopathy, Australia’s National Health and Medical Research Council was only able to find 225 that were rigorous enough to analyze.” But what was actually going on was this: the committee cherry-picked 225 studies out of more than 1,800, claiming that only these should be allowed to influence its conclusions. So it based its findings on only about 12 percent of the scientific studies on the topic it was examining, excluding the other 88 percent. I have never heard of any meta-analysis that excluded anything close to such a high percentage of the studies it was supposed to be analyzing.

The committee claimed to have used quality standards, standards that relatively few of the studies met. What were these standards? Below is a quote from the committee's report.

The overview considered only studies with these features: the health outcomes to be measured were defined in advance; the way to measure the effects of treatment on these outcomes was planned in advance; and the results were then measured at specified times (prospectively designed studies); and the study compared a group of people who were given homeopathic treatment with a similar group of people who were not given homeopathic treatment (controlled studies).

It is not at all true that medicine or science treats these criteria as standards that all or even most studies follow. A control group is a set of participants who are not subjected to the treatment being tested, so that the treated group can be compared against a baseline. A large fraction of all scientific and medical studies do not use control groups, for various reasons: controls are often impractical to implement, too expensive, or unnecessary because it is clear what the result would be in the case of zero influence. This scientific paper says the following about control groups:

The proportion of studies that have control groups in the ten research domains considered range from 3.3% to 42.8% ... Across domains, a mere 78 out of the 710 studies (11%) had control groups in pre-test post-test designs.
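To make the distinction concrete, here is a minimal sketch, with numbers invented purely for illustration, of why a control group matters in a pre-test/post-test design: without one, any background change (placebo response, natural recovery) gets credited to the treatment.

```python
# Hypothetical pre-test/post-test scores, invented for illustration.
treated_pre, treated_post = 50.0, 60.0   # group given the treatment
control_pre, control_post = 50.0, 57.0   # similar group, no treatment

# Uncontrolled design: the whole change gets credited to the treatment.
uncontrolled_effect = treated_post - treated_pre   # 10.0

# Controlled design: subtract the change the untreated group showed
# anyway (placebo response, natural recovery, regression to the mean).
controlled_effect = (treated_post - treated_pre) - (control_post - control_pre)   # 3.0

print(uncontrolled_effect, controlled_effect)
```

None of this means uncontrolled studies are worthless; it means, as the paper quoted above shows, that demanding controls as an entry requirement would disqualify most of the scientific literature.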

It also is extremely common for medical and scientific research to report findings that a study was not designed to look for. Saying that a study may only report what it was designed to measure is a piece of sanctimonious rubbish, rather like claiming that good students must only get ancient history answers by reading the original ancient texts, rather than by looking up the answers on the Internet. Under such a rule, we would, for example, ignore very clear findings that homeopathy was effective in reducing arthritis pain if the study had been designed to test whether homeopathy was effective in reducing headaches. I have never heard of any meta-analysis excluding studies for reporting unexpected findings they were not designed to look for. This seems to be an aberrant, non-standard selection rule.

So what we have here is a committee using a double standard. It has declared that scientific studies will not be considered unless some particularly fussy standard is met, a standard that a large fraction of highly-regarded scientific studies do not meet. It's like the door guard of the country club saying “Ivy league graduates only” to dark-skinned people trying to get in, even though he knows he just admitted some white people who don't even have college degrees.

The statement below from the committee's report also is a sign of double standards and cherry-picking.

For 14 health conditions (Table 1), some studies reported that homeopathy was more effective than placebo, but these studies were not reliable. They were not good quality (well designed and well done), or they had too few participants, or both. To be confident that the reported health benefits were not just due to chance or the placebo effect, they would need to be confirmed by other well-designed studies with adequate numbers of participants.

On page 35 we learn that the actual participant size requirement used by the committee was a minimum of 150 participants (studies with fewer participants were ignored). So if there had been 500 studies each showing that between 110 and 149 patients were instantly cured of terminal cancer, such studies would all have been excluded and ignored. How silly is that? For comparison, a meta-analysis on stuttering treatments excluded only studies with fewer than 3 participants; a meta-analysis on diabetes excluded only studies with fewer than 25 participants; and a cardiology meta-analysis included studies with as few as 62 participants.
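To see how blunt an instrument such a cutoff is, here is a minimal sketch (with hypothetical studies invented for illustration) of what a fixed 150-participant minimum does: every result below the line vanishes before any analysis begins, no matter how strong or how numerous.

```python
# Hypothetical studies as (participants, reported_effect) pairs,
# invented purely to illustrate the mechanics of a fixed cutoff.
studies = [(40, 0.9), (110, 0.8), (149, 0.7), (150, 0.1), (300, 0.0)]

MIN_N = 150  # the committee's reported minimum participant count

kept = [s for s in studies if s[0] >= MIN_N]
dropped = [s for s in studies if s[0] < MIN_N]

# Every positive result below the line vanishes before any analysis
# happens, no matter how many such studies exist.
print("kept:", kept)       # kept: [(150, 0.1), (300, 0.0)]
print("dropped:", dropped) # dropped: [(40, 0.9), (110, 0.8), (149, 0.7)]
```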

I very frequently read about scientific studies that used only a small number of participants (30 or fewer), studies getting lots of coverage in the media after being published in scientific journals. So invoking “too few participants” as an exclusion criterion (based on a requirement of at least 150 participants) is another example of a double standard being used by the committee. And once a committee has declared the right to ignore any study that does not meet the vague, arbitrary, subjective requirement of being “good quality (well designed and well done),” it has printed itself a permission slip to ignore any evidence it doesn't want to accept.

Below is a page from a statistician's presentation on whether studies with small sample sizes should be excluded when doing a meta-analysis of medical studies. The recommendation is the opposite of what the homeopathy committee did.

[Image: slide from a statistician's presentation on including small studies in meta-analysis]

Similarly, the “Handbook of Biological Statistics” site says, “You shouldn't use sample size as a criterion for including or excluding studies,” when doing a meta-analysis.
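Standard meta-analytic practice is to keep the small studies and let the arithmetic down-weight them: in fixed-effect (inverse-variance) pooling, a small, noisy study counts for little without being thrown away. Below is a minimal sketch, with effect sizes and standard errors invented for illustration.

```python
# Fixed-effect (inverse-variance) pooling: the standard way a
# meta-analysis weights studies of different sizes. Effect sizes and
# standard errors are invented for illustration.
studies = [
    (0.80, 0.40),  # small, noisy study: large standard error
    (0.50, 0.25),
    (0.10, 0.08),  # large, precise study: small standard error
]

weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)

# The small study contributes in proportion to its precision;
# nothing has to be thrown away.
print(round(pooled, 3))  # about 0.16
```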

In the case of homeopathy, it is particularly dubious to exclude small studies with fewer than 150 participants. Only a small fraction of the population believes in the effectiveness of homeopathy. It is entirely possible that because of some “mind over body” effect or placebo effect, homeopathy is actually effective for those who believe in it, but ineffective for those who don't. So we are very interested in whether it is effective for small groups, such as a small group that believes in homeopathy. But we cannot learn that if a committee is arbitrarily excluding all studies with fewer than 150 participants.
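A quick simulation shows why this matters. Assume, purely for illustration, that a benefit exists only among the minority who believe in the treatment: a large mixed-population trial sees the effect diluted toward invisibility, while a small trial recruited from believers sees it plainly.

```python
import random

random.seed(1)

BELIEVER_RATE = 0.15  # assumed share of believers in the population
EFFECT = 1.0          # assumed benefit, present only in believers
NOISE = 1.0           # standard deviation of ordinary outcome variation

def trial(n, believers_only=False):
    """Mean outcome difference (treatment minus control) in one trial."""
    def outcome(treated):
        believer = believers_only or random.random() < BELIEVER_RATE
        benefit = EFFECT if (treated and believer) else 0.0
        return benefit + random.gauss(0, NOISE)
    treatment = [outcome(True) for _ in range(n // 2)]
    control = [outcome(False) for _ in range(n // 2)]
    return sum(treatment) / len(treatment) - sum(control) / len(control)

# A big mixed-population trial sees the effect diluted toward
# BELIEVER_RATE * EFFECT; a small believers-only trial sees it whole.
print(trial(2000))                      # roughly 0.15
print(trial(100, believers_only=True))  # roughly 1.0
```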

No doubt if we were to examine the scientific papers of the professors on the committee, we would find many with these same features: small participant counts, no control groups, or reported effects that the study was not designed to show (or we would find that these professors had authored meta-analyses that included studies failing one or more of these criteria). So it is hypocrisy for such a committee to use such things as exclusion criteria.

Apparently the committee used some type of scoring system to rate studies on homeopathy. One of the subjective criteria was “risk of bias.” We can guess how that probably worked: the work of any researcher judged to be supportive of homeopathy would be assigned a crippling “risk of bias” score, making it unlikely his study would be considered by the committee. But what were the scores of the excluded studies, and what were the scores of the studies judged worthy of consideration? The committee did not tell us. It kept everything secret. The report does not give us the names of any of the excluded studies, does not give us URLs for any of them, and does not give us the scores of any of the excluded studies (nor does it give the names, URLs, or scores of any of the studies that met the committee's criteria). So we have no way to check the committee's judgments. The committee has worked in secret, so that we cannot track down specific examples of how arbitrary and subjective it has been.

There is a set of guidelines for conducting a medical meta-analysis called PRISMA, which has been endorsed by 174 medical journals. Item 19 of the PRISMA checklist says: “Present data on risk of bias of each study and, if available, any outcome level assessment.” This standard dictates that any subjective “risk of bias” scores used to exclude studies must be made public, not kept secret. The NHMRC committee has flouted that guideline. The committee has also ignored item 12 of the PRISMA checklist, which states, “Describe methods used for assessing risk of bias of individual studies.” The NHMRC committee has done nothing to describe how it assessed risk of bias. Nowhere do the PRISMA guidelines recommend excluding studies from a meta-analysis because of small sample size or because the reported effects fail to match the effects the study was designed to look for, the two aberrant criteria used by the NHMRC committee.

One professional recommendation is that whenever a meta-analysis uses a scoring system to exclude scientific studies on the topic being considered, it should report two different results: one in which the scoring system is used, and another in which all of the studies are included. That way readers could do a sensitivity analysis, seeing how much the conclusion of the meta-analysis depends on the exclusion criteria. But no such thing was done by the committee. They secretively kept their readers in the dark, revealing only the results obtained after all of their dubious exclusions.
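Concretely, the recommendation amounts to running the same pooling twice, once on all the studies and once on the survivors of the exclusion rule, and reporting both numbers. A minimal sketch, again with invented figures and the inverse-variance pooling shown earlier:

```python
# Sensitivity analysis: pool once with all studies, once with only the
# studies that survive the exclusion rule, and report both. All
# numbers are invented for illustration.
studies = [
    # (participants, effect_size, standard_error)
    (60,  0.70, 0.30),
    (120, 0.55, 0.22),
    (200, 0.15, 0.12),
    (400, 0.05, 0.08),
]

def pooled(subset):
    ws = [1.0 / se ** 2 for _, _, se in subset]
    return sum(w * eff for (_, eff, _), w in zip(subset, ws)) / sum(ws)

with_all = pooled(studies)
big_only = pooled([s for s in studies if s[0] >= 150])

# If the two numbers diverge sharply, the conclusion is being driven
# by the exclusion rule rather than by the evidence.
print(round(with_all, 3), round(big_only, 3))  # e.g. 0.145 vs. 0.081
```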

After doing all of this cherry-picking based on double standards and subjective judgments, the committee reaches the conclusion that homeopathy is not more effective than a placebo. But even if such a thing were true, would that make homeopathy worthless for everybody? Not necessarily.

Here's the story on placebos. Placebos have repeatedly been shown to be surprisingly effective for certain conditions. A hundred years ago, your doctor might have given you a placebo by just handing you a bottle of sugar pills. But nowadays you get your medicine in labeled plastic containers at the pharmacy, and people can look up on the Internet anything that is on the label. So a doctor can't just write a prescription for sugar pills without the patient being able to find out it's a placebo. But if a patient thinks some particular thing will work – homeopathy, acupuncture, holding a rabbit's foot, praying, or meditation – that might act as a placebo with powerful beneficial effects. 

We therefore cannot dismiss something as being medically ineffective by merely saying it's no better than a placebo. Imagine there's a patient who doesn't trust pills, but who tends to believe in things like homeopathy. Under some conditions and for certain types of patients, homeopathy might help, even if what's going on is purely a “mind over body” type of placebo effect, rather than anything having to do with what is inside some homeopathic treatment.

If there are “mind over body” effects by which health can be affected by whether someone believes in a treatment, such effects are extremely important both from a medical and a philosophical standpoint, since they might be an indicator that orthodox materialist assumptions about the mind are fundamentally wrong. Anyone trying to suppress evidence of such effects through slanted analysis shenanigans has committed a grave error.

Based on all the defects and problems in this committee's report, we should have no confidence in its conclusion that homeopathy is no more effective than placebos; and even if such a conclusion were true, it would not show that homeopathy is medically ineffective (since placebos can have powerful medical effects). The fact that 1800 studies have been done on homeopathy should raise our suspicions that at least some small subgroup is benefiting from the technique. It doesn't take 1800 studies to show that something is worthless – one or two will suffice.

Whether homeopathy has any medical effectiveness is an unresolved question, but about one thing I am certain. The committee's report is an egregious example of secretiveness, double standards, overzealous exclusions, guidelines violations, and sanctimonious hypocrisy. Using the same type of methodological mistakes, you could probably create a meta-analysis concluding that smoking doesn't cause lung cancer; but you would mislead people if you did that.

Postscript: Today's New York Times criticizes "the cult of randomized controlled trials" and points out the case of those who say the evidence for the effectiveness of flossing is weak, because there aren't enough randomized controlled trials showing it works. That, of course, makes no sense, as we have abundant anecdotal evidence that flossing is effective -- just as we have abundant evidence that parachutes work, despite zero randomized controlled trials showing their effectiveness. 

Postscript: A meta-analysis was recently published on the effectiveness of homeopathy in livestock. The meta-analysis avoided the outrageous exclusion problems discussed above; for example, it didn't exclude studies based on sample size. The meta-analysis concluded, "In a considerable number of studies, a significant higher efficacy was recorded for homeopathic remedies than for a control group." Specifically it concluded that "Twenty-eight trials were in favour of homeopathy, with 26 trials showing a significantly higher efficacy in comparison to a control group, whereas 22 showed no medicinal effect."  What is astonishing is that this result favoring homeopathy has been reported in The Scientist magazine with the headline, "Homeopathy does not help livestock."  That's the opposite of what the meta-analysis actually found.  
