Summary and Synthesis

The examples above are by no means exhaustive, and anyone familiar with the modern medical literature will appreciate that such findings are not the exception but the norm. As I mentioned earlier, we have become so inured to the vicissitudes of clinical research, so accustomed to vastly discrepant results, that we hardly take note of them any more. Bizarre and peculiar outcomes have become the new normal. The authors of one of the fistula studies (Safar/Wexner), which found a 13.9% success rate for the fistula plug, comment that their success rate is much lower than previously reported in the literature and that “further analysis is needed to explain significant differences in outcomes”.

This will undoubtedly be the first instinct of most people encountering the findings I’ve discussed above. They will say “Yes,…but…” and then analyze the studies, identifying methodological and design differences that help to explain the discrepant results.

With regard to the fistula studies, for example, they might postulate that the studies with poorer results were dealing with more challenging fistulae, or that the surgical techniques differed, or that a bad batch of fistula plugs was used. With regard to the overall discrepancy of outcomes (12.5%-87%) across all the studies, they will note the tremendous variation among them: few were RCTs, few were prospective, and many were simply retrospective follow-ups. Or perhaps the patients in some studies were not compliant with post-operative care, or patients enrolled at the beginning of a study were not included in the end results (a failure of intention-to-treat analysis). Or perhaps other aspects of the studies reflect poor design: short duration of follow-up, lack of randomization, and other points where bias may have influenced the results. As I said before, there are plenty of possible explanations to help account for the dramatically divergent results.

But I would suggest that we step back and ask ourselves whether there is another, more general explanation for these sorts of bizarre, discrepant, disparate, inconsistent, anomalous results.

Consider for a moment the believers in geocentrism and the difficulty they had letting go of the notion that the earth was the centre of the universe. There were innumerable peculiar observations and findings, most of which were explained away by tweaking and stretching the geocentric framework, most famously by adding epicycle upon epicycle. At some point they simply came to accept anomalies in the perceived movements of the celestial bodies as “normal variations”. But when they suspended their belief in geocentrism and looked at the anomalies again, they arrived at a different understanding of what brought them about.

So, as I’ve said several times before, I am suggesting that we all consider for a moment the possibility that the problems I’ve illustrated here are not due to imperfect application of the EBM methodology, such as flaws in study design or execution. Is it possible, instead, that there is a fundamental assumption upon which EBM rests, a confounding factor, that leads to these problems? If so, what is it?

This is the topic of the next section of this website.