The CircFacts Blog Page
This blog page exists to exchange ideas and views. If you have any comments, ideas, etc., please post your message below. Replies will be posted on the site as soon as possible after submission. So go ahead and let us know what you think!
Not all studies are equal: Comments on the quality of scientific evidence.
By Stephen Moreton PhD. Posted January 2018
The peer-reviewed medical scientific literature, whilst far from perfect, is much better than the intactivists’ favorite sources of information (commonly social media, blogs, agenda-driven websites, and personal testimonies). That is not to say there are no issues with the literature. There are. Researchers are under pressure to produce publications to further their careers. And some journals are more interested in making profits than in academic standards, so their peer review may be lax or non-existent: they will accept any paper, no matter how sloppy, so long as the authors pay a fee.
A detailed discussion of these issues is beyond the scope of this website; those interested may find the article “Why most published research findings are false” by Ioannidis (2005) illuminating, if a little technical, although any reading of it should be accompanied by the caution of Ingraham (2010). The title is as provocative as it is misleading, lending itself to being seized upon by quacks and charlatans to undermine scientific opposition to their nonsense. As Ingraham explains:
“What Ioannidis really says is much less ominous: he argues that it should take rather a lot of good quality and convergent scientific evidence before we can be reasonably sure of a “scientific fact,” and he presents good (scientific!) evidence that a lot of so-called conclusions are premature, not as ready for prime time as we would hope. Any supposedly large treatment effect in medical research is probably exaggerated, and “when additional trials are performed, the effect sizes become typically much smaller,” so “well-validated large effects are uncommon.” Extremely uncommon, in fact.
It’s okay for science to work like this. Ioannidis did not mean that science is broken or deeply flawed.”
And Ingraham is right. This is how science advances. Ideas are tested, verified or falsified, and gradually a growing body of evidence begins to point in one direction. Preliminary studies lead to bigger, better ones. If the preliminary findings fail to be confirmed, they are abandoned. With replication the findings gain credibility and, eventually, a consensus emerges.
This is well illustrated by the discovery that circumcision protects against female-to-male HIV transmission. Initial speculative suggestions that circumcision may have a protective effect were followed by ecological studies indicating that this was so, but with anomalies and uncertainties persisting. By the early 2000s, however, the quantity of such studies had grown to the extent that the hypothesis could not be ignored, and calls were being made to consider implementation of circumcision as a preventative measure.
But still there were doubters so, to settle the matter, three randomized controlled trials were conducted. These proved the hypothesis correct to the satisfaction of all the professional bodies dealing with the epidemic. In short, after multiple studies, most but not all pointing in one direction, followed by high-quality evidence from three independent trials replicating each other and pointing in the same direction as the majority of previous studies, a consensus emerged: circumcision protects against female-to-male HIV transmission. This is how science proceeds. It does not jump to conclusions based on single preliminary studies, but on bodies of evidence, multiple studies building on, and replicating, what has gone before, until a consensus is reached, even if a few die-hard individuals remain stuck in denial. A point well made in a blog post here: https://forthesakeofscience.com/2017/03/10/science-moves-on-bodies-of-evidence/
As alluded to at the start of this post, not all medical journals are equal. There are good ones and bad ones. Again, detailed coverage of the issues is not the job of this website. Smith (2006) makes some pertinent points, and there has been much discussion of late about “predatory journals” that publish any nonsense, so long as they receive a fee. A rough-and-ready guide to a journal’s status is its impact factor (the higher the better), but it has its limitations, as many sound journals are simply not tracked, and even highly ranked journals can make mistakes.
Setting aside issues of a journal’s reliability, there is, thankfully, an easy-to-understand, if rather rough, “rule of thumb” guide to a paper’s quality: the hierarchy of evidence.
What the hierarchy conveniently summarizes is that not all study designs are equal. Some are inherently better quality than others, being less subject to biases and pitfalls that can affect any research endeavor. Generally speaking, studies high up the hierarchy are more reliable than ones lower down, being less subject to such things as selection bias and confounding that may skew results.
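The effect of confounding mentioned above can be made concrete with a toy simulation (my own illustration, not from any of the studies discussed here). Suppose a hidden trait raises both the chance of being “exposed” and the chance of a bad outcome. An observational comparison, like those in studies low on the hierarchy, then shows an association even though the exposure itself does nothing; randomizing exposure, as an RCT does, breaks the link to the confounder:

```python
import random

random.seed(0)

N = 100_000

def trial(randomized):
    """Simulate one study and return the risk ratio (exposed vs unexposed)."""
    exposed_bad = unexposed_bad = exposed_n = unexposed_n = 0
    for _ in range(N):
        risk_taker = random.random() < 0.5           # hidden confounder
        if randomized:
            exposed = random.random() < 0.5          # assigned by coin flip
        else:
            # Risk-takers are far more likely to end up "exposed".
            exposed = random.random() < (0.8 if risk_taker else 0.2)
        # The true effect of exposure is zero; only the confounder matters.
        bad = random.random() < (0.30 if risk_taker else 0.10)
        if exposed:
            exposed_n += 1
            exposed_bad += bad
        else:
            unexposed_n += 1
            unexposed_bad += bad
    return (exposed_bad / exposed_n) / (unexposed_bad / unexposed_n)

print("observational risk ratio:", round(trial(False), 2))  # inflated, ~1.9
print("randomized risk ratio:  ", round(trial(True), 2))    # ~1.0
```

The observational design reports a risk ratio well above 1 purely because of the confounder, while the randomized design correctly finds none. This is the sense in which studies higher up the hierarchy are more trustworthy.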
It is not infallible. One can get well-designed ecological studies (relatively low down the hierarchy) that do detect genuine effects. After all, it was ecological studies that indicated there was an association between foreskins and HIV, something later confirmed by randomized controlled trials (near the top of the hierarchy) and verified by follow-up systematic reviews and meta-analyses (right at the top). And one can get badly done meta-analyses, whether through incompetence, or by the author abusing their statistical expertise to subtly manipulate the data to support a desired conclusion. An example of a bad meta-analysis is intactivist pediatrician Robert S. Van Howe’s infamous one “proving” that circumcision does not protect against HIV. It later became, literally, a textbook example of how not to do a meta-analysis. The same author has gone on to carry out several more dubious “meta-analyses”, and “meta-regression” analyses, on circumcision-related topics, all of which have been shot down by experts: Moses et al. (1999); O’Farrell & Egger (2000); Castellsagué et al. (2007); Waskett et al. (2009); Morris et al. (2014); Morris et al. (2015); Morris et al. (2017). Beware of single-author meta-analyses by individuals with an agenda.
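For readers unfamiliar with what a meta-analysis actually computes, here is a minimal sketch of the core calculation behind many of them: fixed-effect inverse-variance pooling. The study numbers below are invented for illustration; they are not taken from any of the papers cited on this page. Each study contributes an effect estimate and a standard error, and bigger studies (smaller standard errors) get proportionally more weight:

```python
import math

def pool_fixed_effect(studies):
    """Fixed-effect inverse-variance pooling.

    studies: list of (effect_estimate, standard_error) pairs.
    Returns the pooled estimate and its 95% confidence interval.
    """
    weights = [1 / se ** 2 for _, se in studies]
    total_w = sum(weights)
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / total_w
    se_pooled = math.sqrt(1 / total_w)
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Hypothetical log risk ratios: two large studies finding no effect,
# one small study with a large apparent effect.
studies = [(-0.05, 0.10), (0.02, 0.08), (0.60, 0.40)]
pooled, (lo, hi) = pool_fixed_effect(studies)
print(f"pooled log RR = {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Note how the small outlier study barely moves the pooled estimate, which stays close to zero (no effect). This also hints at how a dishonest analyst can tilt the result: by excluding large studies, including unsound small ones, or tinkering with the weighting, which is exactly the kind of manipulation the critiques cited above describe.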
With these caveats in mind, should many studies on a topic exist, some pointing this way, some pointing that way, and others pointing somewhere in between, one has two choices. Firstly, one can simply look at what the majority of studies say and, secondly, one can look at what those in the top tiers of the hierarchy of quality say. Taking this approach with the effect of circumcision on sexual function and pleasure, for example, one finds that the great majority of studies find no overall effect, but with a few finding a negative effect, and a few finding a positive one (See: https://www.facebook.com/CircumcisionResource/photos/a.735986419837365.1073741827.712201812215826/913150462120959/?type=3 ). Narrowing one’s focus to only those studies in the top tiers (case-control and upwards) then every such study finds either no effect or a positive one. So, given that science moves on bodies of evidence, the conclusion is clear: circumcision has no negative effect on sexual function and pleasure.
Sometimes the only studies available are of low to middle rank. Sometimes there may only be one or two on a topic. In the absence of anything better, that is all one has to go on. But, where possible, throughout this website, preference is given to studies in the upper tiers. We pick studies on the basis of quality. Intactivists pick them on the basis of whether or not they fit their agenda.
Castellsagué, X., Albero, G., Clèries, R., Bosch, F.X. (2007) HPV and circumcision: A biased, inaccurate and misleading meta-analysis. J. Infection, 55(1), 91-3. [And author’s reply, p.93-6].
Ingraham, P. (2010) Ioannidis: Making medical science look bad since 2005. Blog post: https://www.painscience.com/articles/ioannidis.php
Ioannidis, J.P.A. (2005) Why most published research findings are false. PLoS Med., 2(8), e124. On-line: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/pdf/pmed.0020124.pdf
Morris, B.J., Hankins, C.A., Tobian, A.A.R., Krieger, J.N., Klausner, J.D. (2014) Does male circumcision protect against sexually transmitted infections? Arguments and meta-analyses to the contrary fail to withstand scrutiny. ISRN Urology, article 684706. On-line: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4040210/pdf/ISRN.UROLOGY2014-684706.pdf
Morris, B.J., Barboza, G., Wamai, R.G., Krieger, J.N. (2015) Circumcision is a primary preventive against HIV infection: Critique of a contrary meta-regression analysis by Van Howe. Global Public Health, ePub ahead of print. On-line abstract here: https://www.ncbi.nlm.nih.gov/pubmed/27043484
Morris, B.J., Barboza, G., Wamai, R.G., Krieger, J.N. (2017) Expertise and ideology in statistical evaluation of circumcision for protection against HIV infection. World J. AIDS, 7, 179-203. On-line: http://file.scirp.org/pdf/WJA_2017081111092660.pdf
Moses, S., Nagelkerke, N.J.D., Blanchard, J. (1999) Analysis of the scientific literature on male circumcision and risk for HIV infection. Int. J. STD & AIDS, 10(9), 626-8.
O’Farrell, N., Egger, M. (2000) Circumcision in men and the prevention of HIV infection: a “meta-analysis” revisited. Int. J. STD & AIDS, 11(3), 137-42. On-line abstract here: https://www.ncbi.nlm.nih.gov/pubmed/10726934
Smith, R. (2006) The trouble with medical journals. J. Roy. Soc. Med., 99, 115-9. On-line: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1383755/pdf/0115.pdf
Waskett, J.H., Morris, B.J., Weiss, H.A. (2009) Errors in meta-analysis by Van Howe. Int. J. STD & AIDS, 20(3), 216-8.