Gilomen SA, Lee CW. The Efficacy of Acupoint Stimulation in the Treatment of Psychological Distress: A Meta-Analysis. J Behav Ther Exp Psychiatry. 2015 Sep;48:140-8. doi: 10.1016/j.jbtep.2015.03.012. Epub 2015 Mar 31.
CLICK HERE to view the abstract and purchase the full article at ScienceDirect (Elsevier).
A special note of appreciation to John Freedom, ACEP research director, and to David Feinstein and Dawson Church for their thoughts and inspiration.
Craig’s Notes:
This newly released meta-analysis is an important piece of research. It reviews all of the Randomized Controlled Trials (RCTs) performed using EFT (Emotional Freedom Techniques) and TFT (Thought Field Therapy), aka tapping, up until 2013. The authors are from the School of Psychology and Exercise Science at Murdoch University in Murdoch, Western Australia. A “meta-analysis” statistically reviews existing research studies, offering insights into their study design and approach and into the overall degree of effectiveness of an intervention. So basically it is the “researching of the research already performed.”
This analysis has mixed news for EFT fans and practitioners. A bit of good news to start off: enough RCTs of EFT have been performed and published in peer-reviewed journals that a meta-analysis could be conducted. The analysis reviewed 18 trials involving 921 participants and examined the effectiveness of EFT for the treatment of psychological distress.
The unfortunate news is that this study, though published in 2015, only included RCTs up until 2013, so it is not current and misses several important and well-designed RCT studies.
The authors reported that they wished to examine what is referred to in the literature as the “large effect size” reported by Feinstein in his article: Acupoint stimulation in treating psychological disorders: evidence of efficacy, Review of General Psychology, 16 (4) (2012), pp. 364–380, http://dx.doi.org/10.1037/a0028602. An effect size essentially refers to the strength of the effect of one thing on another, e.g., the effect of EFT on a particular condition.
This difference, according to Feinstein [personal communication, 6/13/15], can be explained by the statistic used to calculate effect size. Several different ways of calculating effect size are in use. Some are better suited to one type of data than another, and some are more stringent than others (i.e., for some, the evidence must be stronger before a large effect size is indicated). Feinstein used a statistical method called “Cohen’s d,” the most frequently used method for calculating effect size. Gilomen and Lee, however, used “Hedges’ g,” which Feinstein says is more stringent than Cohen’s d but still, he acknowledges, a reasonable choice for the data in their meta-analysis. This difference in the choice of statistical method, Feinstein continues, “explains why my calculations showed strong effect sizes while theirs showed only moderate effect sizes.” Moderate effect sizes are still noteworthy, but it is important to understand the method chosen to analyze a piece of data; depending on the type of analysis used, different interpretations can be drawn.
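To make the distinction concrete, here is a minimal sketch of how the two statistics relate. The numbers are made up for illustration only (they are not data from the study): Hedges’ g applies a small-sample bias correction to Cohen’s d, so g is always slightly smaller in magnitude, which is one reason a “large” d can come out as a “moderate” g.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g: Cohen's d scaled by an approximate small-sample correction factor."""
    d = cohens_d(mean1, mean2, sd1, sd2, n1, n2)
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # always slightly less than 1
    return d * correction

# Hypothetical distress scores: treatment group vs. waitlist control, 30 per group
d = cohens_d(20.0, 28.0, 9.0, 10.0, 30, 30)
g = hedges_g(20.0, 28.0, 9.0, 10.0, 30, 30)
print(round(d, 3), round(g, 3))  # g is slightly smaller in magnitude than d
```

With larger samples the correction factor approaches 1 and the two statistics converge, which is why the choice matters most for the small trials common in this literature.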
As a result, while these authors did not find a “large” effect size after their analysis, they did report the good news that:
“The findings in this study appear consistent with Feinstein’s conclusions of a positive effect for EFT, i.e. the majority of studies (12) indicate EFT is better than no treatment or waitlist controls.”
Their critique, aka the bad news, was that “study design, treatment comparisons and multiple subgroup analyses (particularly the disorders subgroup analysis) raise major concerns for any conclusive support.” This is their way of saying that the studies performed to date are not sufficient to give tapping a “Big High Five” yet. At the same time, they are NOT saying tapping is hooey, just that we need more, better, and particular types of well-designed studies before that can happen.
What were the authors’ recommendations? They were primarily three-fold:
1. “If EFT is to claim positive clinical outcomes following the tapping of acupuncture points, it is recommended that studies compare tapping with no tapping and conduct the dismantling studies called for since 2005.” They are likely referring to the need to compare tapping on, e.g., EFT points vs. sham points, and to dismantle the tapping process, separating it into its parts to see which components are creating change, e.g., the tapping itself vs. the self-acceptance statement, etc.
2. An important arena for future research would be to “assess the efficacy of adding EFT to a treatment that is already evidence based and assess if this provides any benefit. For example does a tapping task add anything to prolonged exposure therapy when treating PTSD.” They are correct that there has been very little research to date comparing tapping to, e.g., acupuncture needling, acupressure, or other psychological therapies with a history of research demonstrating effectiveness (yes, they included the one EFT vs. EMDR study), and the analysis did not include newer emerging studies, e.g., EFT vs. CBT.
3. Because the tapping studies cover such a wide and disparate range of focuses (phobias, athletic performance, food cravings, anxiety, PTSD, etc.), the authors suggest choosing a single condition so that multiple studies could be replicated and assessed with greater confidence. The difficulty with this is that, given the scarcity of EP research funds, allocating them to a single condition creates a pigeon-holing effect for the profession (i.e., EFT becomes just an anxiety or PTSD treatment modality) and does not allow the expansive perspective that an innovative and relatively young therapy needs in order to explore the full range of what it can do. On a personal note, I have observed this in my chiropractic profession, where so much research funding had to conform to validating chiropractic’s effectiveness solely for back and neck pain, with significantly less funding allocated to exploring its contributions to health and well-being on a much broader scale.
So in conclusion, here are some perspectives:
- This research analysis brings increased awareness and credibility to EFT: enough RCTs (18) have been performed to support a meta-analysis, and EFT/TFT/acupoint stimulation is being taken seriously enough by the scientific community to invest in conducting one.
- The tone of the study itself appeared to be that of a non-biased investigation, without skeptical jargon or significant author bias, and, as John Freedom wisely submits, without the positive professional bias that is difficult for EP adherents and supporters to set aside.
- The great news is that a moderate effect size was found to exist for EFT’s effectiveness for reducing psychological distress in a very well respected peer-reviewed medical journal.
- The tapping community is being called to increase standardization: the EFT research community should collaborate to collectively improve study design and analysis and create a hierarchy of the types of studies needed to establish the necessary foundational evidence.
- A challenge is that many researchers have pet “conditions” on which they wish to observe the effect of acupoint stimulation, whereas the broader research establishment wants to see less exciting studies, like dismantling studies, which are less about what tapping can help and more about which aspects of the tapping are actively contributing to statistically significant changes.
I would like to end this article with an uplifting thought and “meta-perspective” beautifully written by David Feinstein. It is the larger view that speaks to the courage and leap of faith that those of us in the tapping profession are making, drawn by our intuition and by the results we see every day. While such things as the placebo effect and practitioner bias certainly must be taken into consideration, tapping is still, in the big picture, an “innovation” and relatively young as far as therapeutic interventions go.
According to Feinstein, “When an intervention is first introduced into practice, there is no scientific evidence of its efficacy. If it is successful and begins to be used by others, social forces (such as the desire for credibility or insurance reimbursement) require that it begin to be scrutinized. The early studies are not usually as sophisticated as the studies conducted once the clinical community is taking it seriously, and an escalating set of standards can actually be applied. By the time a method has met the most rigorous of these (if any really do), it is no longer an innovative practice. It has been around a long time, has secured the attention of conventional funding sources and research institutions, and using it or not using it is a matter of personal taste, selecting from among various establishment-approved practices, hardly an innovative leap. Until then, the clinician has to choose how much evidence is personally required, and what kinds, before taking the leap.”
I hope this was helpful, as I believe you will hear portions of this study quoted and bantered about for the next year or so…
By Craig Weiner, DC