Water on the brain?
Homeopaths don’t like the 10:23 campaign.
They dislike it so much that some homeopaths have been tweeting the “evidence” for homeopathy using the #ten23 hashtag.
Nearly every one of the specific documents or claims made by the homeopaths has been addressed elsewhere, but one particular theme among the homeopaths’ comments caught my interest.
The claim that homeopathy cannot be tested using conventional means and that some special sort of science is needed to “validate” homeopathy’s efficacy. From what I can tell from the “evidence” frequently provided by homeopaths, their special science amounts to sloppily designed randomised clinical trials (often only single-blind), presumably so they can say “proven by RCTs” (well, no, not proven actually, because the methodologies used are nearly always shoddy and the results are almost never replicated by independent labs), or anecdote. The well-worn phrase “the plural of anecdote is not data” applies here.
That homeopaths continually cite pseudoscience and anecdote to support their claim that ultradilute substances actually do something other than nothing leads me towards one conclusion: nobody has ever sat them down and explained the logic behind evidence-based medicine.
Anecdotes are unreliable because people are unreliable. There are all sorts of reasons why people are unreliable judges of their own health: they don’t want to appear a burden, or don’t want to admit to themselves that there is a problem – they could be afraid, for instance, that a visit to the doctor will uncover something serious.
When it comes to testing drugs, people can be equally bad at assessing their effects. This is down to a myriad of psychological effects – the expectation effect, for one. Essentially, people expect an effect from some sort of remedy and then over-interpret any improvement they experience, attributing it to the treatment given.
Clearly there needs to be a way of controlling for the unreliable nature of human beings…
…and there is: the blind trial. To determine if a drug has a real effect, patients are randomly assigned to one of two groups: placebo (or control) and treatment (or experimental). The treatment group unknowingly receives the remedy being tested, while the placebo group is given a suitable alternative known to have no active ingredient or effect. If there is a greater effect in the treatment group than in the placebo group, you can be more confident that the remedy is effective…
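To make the logic concrete, here is a minimal sketch in Python of a randomised two-arm trial. All the numbers are invented for illustration: I’ve assumed everyone improves a little regardless (the placebo effect, regression to the mean), and that a genuinely active remedy adds something on top.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_patient(active):
    """Hypothetical improvement score for one patient."""
    # Everyone improves a bit whether treated or not (placebo effect,
    # regression to the mean, natural recovery)...
    baseline_improvement = random.gauss(2.0, 1.0)
    # ...but only a genuinely active remedy adds a real effect on top.
    drug_effect = random.gauss(1.5, 0.5) if active else 0.0
    return baseline_improvement + drug_effect

# Randomly assign 100 patients to treatment (True) or placebo (False)
assignments = [random.random() < 0.5 for _ in range(100)]
treatment = [simulate_patient(True) for a in assignments if a]
placebo = [simulate_patient(False) for a in assignments if not a]

print(f"treatment mean improvement: {statistics.mean(treatment):.2f}")
print(f"placebo mean improvement:   {statistics.mean(placebo):.2f}")
```

Note that the placebo group improves too, which is exactly why you cannot judge a remedy by asking treated patients whether they feel better: only the *difference* between the two randomised groups isolates the remedy’s own contribution.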
Except, well, there can be issues where, consciously or unconsciously, the experimenters bias the results of the study. To avoid this scenario, studies can be double-blinded.
This is where not only do the patients not know whether they are getting the remedy or a placebo, but those administering the treatment don’t know either. So there is no way they can bias the results!
Now, this tells you if a remedy is better than placebo – or, more technically, whether there is a significant difference between the two groups. To measure how much of a difference, one should really employ an effect size measure of some sort. There is no point lauding a remedy on statistical significance alone, as the effect it has over placebo could be tiny…
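The significance-versus-effect-size distinction can be sketched with made-up data. Below, Cohen’s d (one common effect size measure: the difference between group means in units of the pooled standard deviation) is computed for two hypothetical trials – one small trial of a remedy with a genuinely large effect, and one enormous trial of a remedy whose effect is trivial. With 20,000 patients per arm, the second trial’s tiny difference would easily reach statistical significance, yet its effect size shows it is next to nothing.

```python
import math
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

def cohens_d(treated, control):
    """Difference between group means in units of the pooled SD."""
    n1, n2 = len(treated), len(control)
    s1, s2 = statistics.stdev(treated), statistics.stdev(control)
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

# Trial A: modest sample, genuinely large effect (true d = 1.0)
trial_a_treat   = [random.gauss(1.0, 1.0) for _ in range(200)]
trial_a_placebo = [random.gauss(0.0, 1.0) for _ in range(200)]

# Trial B: huge sample, trivial effect (true d = 0.05). The difference
# would be statistically "significant" purely by weight of numbers.
trial_b_treat   = [random.gauss(0.05, 1.0) for _ in range(20000)]
trial_b_placebo = [random.gauss(0.00, 1.0) for _ in range(20000)]

print(f"Trial A Cohen's d: {cohens_d(trial_a_treat, trial_a_placebo):.2f}")  # large
print(f"Trial B Cohen's d: {cohens_d(trial_b_treat, trial_b_placebo):.2f}")  # negligible
```

This is exactly why a p-value alone tells you nothing about whether a remedy is worth taking: it only tells you a difference is unlikely to be chance, not that the difference is big enough to matter.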
That is a brief explanation of double-blind RCTs; hopefully folks can see why they are the best tool we have for assessing medical treatments.
Homeopaths who wish to produce evidence that supports the efficacy of homeopathy should demand the same standards of evidence that all medical treatments are required to meet.
If they showed an effect, then the skeptics would have to accept that homeopathy works. But if they didn’t, the homeopaths would have to accept that it doesn’t.