Chapter 6 Application – Evidence-based practice
A major consideration when selecting interventions for CAM practice is the application of evidence-based practice (EBP). This chapter focuses on the EBP framework as a means of improving clinical practice; specific CAM interventions are explored in chapter 9. The shift towards EBP enables CAM professionals to move from a culture of delivering care based on tradition, intuition and authority to one in which decisions are guided and justified by the best available evidence. In spite of these advantages, many practitioners remain cautious about embracing the model. Part of this reluctance stems from a misunderstanding of EBP, which this chapter aims to address.
Introduction
Over the past few decades, terms such as ‘evidence-based practice’, ‘evidence-based medicine’, ‘evidence-based nursing’ and ‘evidence-based nutrition’ have become commonplace in the international literature. The more inclusive term, ‘evidence-based practice’ (EBP), has been defined as the selection of clinical interventions for specific client problems that have been ‘(a) evaluated by well designed clinical research studies, (b) published in peer review journals, and (c) consistently found to be effective or efficacious on consensus review’.1 As will be explained throughout this chapter, EBP is not just about locating interventions that are supported by findings from randomised controlled trials; EBP is a formal problem-solving framework2 that facilitates ‘the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients’.3 Apart from the implications for clinical practice, the capacity of EBP to link study findings to a profession’s body of knowledge also indicates that EBP is a useful theoretical framework for research and may therefore provide an effective solution to the research–practice divide4–6 and an important impetus to the professionalisation of CAM.
The concept of EBP is not new. In fact, its origins may be traced back to ancient Chinese medicine.7 That said, notions about quality of evidence and best practice are relatively recent. Archibald Cochrane, a Scottish medical epidemiologist, conceived the concept of best practice in the early 1970s,8,9 but it was not until after Cochrane’s death in 1988 that medicine began to demonstrate an interest in the EBP paradigm, marked by the establishment of the Cochrane Collaboration.8,10 Since then, professional interest in EBP has grown.2,11–13 This shift towards EBP has enabled health professionals to move from a culture of delivering care based on tradition, intuition, authority, clinical experience and pathophysiologic rationale to a situation in which decisions are guided and justified by the best available evidence.2,5,9,12,14 EBP also limits practitioner and consumer dependence on evidence provided by privileged people, authorities and industry by providing clinicians with a framework for critically evaluating claims. Even so, there remains some debate over the definition of evidence in the EBP model.
Defining evidence
Evidence is a fundamental concept of the EBP paradigm, yet there is little agreement among practitioners, academics and professional bodies as to its meaning. The lack of a sufficiently precise definition of evidence and the differing methodological positions of clinicians and academics both contribute to these discrepant viewpoints. In the broadest sense, evidence is defined as ‘any empirical observation about the apparent relation between events’.2 While this definition suggests that most forms of knowledge could be considered evidence,14 it is also asserted14 that the evidence used to guide practice should be ‘subjected to historic or scientific evaluation’. Given the long history of use of many CAM interventions, such as herbal medicine, acupuncture and yoga, this would suggest that traditional CAM evidence has a place in EBP. However, not all evidence is considered equal. These differences in the quality of information are ranked in what is known as the ‘hierarchy of evidence’.
As shown in Table 6.1, decisions based on findings from randomised controlled trials (RCTs) may be more sound than those guided by case series results. When findings from controlled trials are unavailable or insufficient, however, decisions should be guided by the next best available evidence; a simple illustration of this fallback is sketched after Table 6.1. This is particularly relevant to the field of CAM, as many of the interventions used in CAM practice are supported largely by lower levels of evidence, such as traditional evidence, rather than by evidence from RCTs and systematic reviews.
Table 6.1 Levels of evidence

Level | Evidence |
---|---|
Level I | Systematic reviews |
Level II | Well-designed randomised controlled trials |
Level III-1 | Pseudorandomised controlled trials |
Level III-2 | Comparative studies with concurrent controls, such as cohort studies, case-control studies or interrupted time series studies |
Level III-3 | Comparative studies without concurrent controls, such as a historical control study, two or more single-arm studies or interrupted time series without a parallel control group |
Level IV | Case series with post-test or pre-test/post-test outcomes; uncontrolled open label study |
Level V | Expert opinion or panel consensus |
Level VI | Traditional evidence |
Adapted from the National Health and Medical Research Council (NHMRC 1999)15 and the Centre for Evidence-Based Medicine (2001)16
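To make the idea of falling back to the next best available evidence concrete, the following sketch orders a simplified set of NHMRC-style levels and selects the highest level located for a clinical question. The enum values and the `best_available` helper are illustrative assumptions, not part of any published EBP tool.

```python
from enum import IntEnum

class EvidenceLevel(IntEnum):
    """Simplified NHMRC-style hierarchy: a lower value means higher-quality evidence."""
    SYSTEMATIC_REVIEW = 1         # Level I
    RCT = 2                       # Level II
    PSEUDO_RCT = 3                # Level III-1
    CONTROLLED_COMPARATIVE = 4    # Level III-2
    UNCONTROLLED_COMPARATIVE = 5  # Level III-3
    CASE_SERIES = 6               # Level IV
    EXPERT_OPINION = 7            # Level V
    TRADITIONAL = 8               # Level VI

def best_available(levels_found: list[EvidenceLevel]) -> EvidenceLevel | None:
    """Return the highest level of evidence located for a clinical question."""
    return min(levels_found) if levels_found else None

# Example: only traditional sources and a case series were located,
# so the case series guides the decision.
print(best_available([EvidenceLevel.TRADITIONAL, EvidenceLevel.CASE_SERIES]).name)
# -> CASE_SERIES
```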
The hierarchy of evidence can also be used to identify research findings that supersede and/or invalidate previously accepted treatments and replace them with interventions that are safer, more efficacious and more cost-effective.2,5 Basing clinical decisions on the level of evidence is only part of the equation, though, as these decisions also need to take into account the strength of the evidence (Table 6.2), specifically, the quality, quantity, consistency, clinical impact and generalisability of the research, as well as the applicability of the findings to the relevant healthcare setting (e.g. whether the frequency, intensity, technique, form or dose of the intervention, or the blend of interventions, as administered under experimental conditions, is applicable to CAM practice).17,18 Of course, determining the grade of evidence may not always be straightforward, as the defining criteria of each grade may not always align (e.g. the evidence may be generalisable, consistent and of high quality (grade A), but the techniques used are not applicable to CAM practice (grade D)). In this situation, the practitioner will need to make a decision about where the bulk of the criteria lies. For this example, the body of evidence could be ranked conservatively as grade B, or more favourably as grade A; a simple way of weighing these criteria is sketched after Table 6.2.
Table 6.2 Strength of evidence

Grade | Strength of evidence | Definition |
---|---|---|
A | Excellent | Evidence: multiple level I or II studies with low risk of bias. Consistency: all studies are consistent. Clinical impact: very large. Generalisability: the client matches the population studied. Applicability: findings are directly applicable to the CAM practice setting |
B | Good | Evidence: one or two level II studies with low risk of bias, or multiple level III studies with low risk of bias. Consistency: most studies are consistent. Clinical impact: considerable. Generalisability: the client is similar to the population studied. Applicability: findings are applicable to the CAM practice setting with few caveats |
C | Satisfactory | Evidence: level I or II studies with moderate risk of bias, or level III studies with low risk of bias. Consistency: there is some inconsistency. Clinical impact: modest. Generalisability: the client is different from the population studied, but the relationship between the two is clinically sensible. Applicability: findings are probably applicable to the CAM practice setting with several caveats |
D | Poor | Evidence: level IV studies, level V or VI evidence, or level I to III studies with high risk of bias. Consistency: evidence is inconsistent. Clinical impact: small. Generalisability: the client is different from the population studied, and the relationship between the two is not clinically sensible. Applicability: findings are not applicable to the CAM practice setting |
Adapted from NHMRC (2009)18
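To illustrate the kind of weighing described above, the sketch below assigns a provisional grade to each of the five criteria in Table 6.2 and then summarises them either conservatively or favourably. The downgrading rule and function names are hypothetical assumptions made for illustration; they are not an NHMRC grading algorithm.

```python
from collections import Counter

GRADES = "ABCD"  # A = excellent ... D = poor

def overall_grade(components: dict[str, str], conservative: bool = True) -> str:
    """Summarise per-criterion grades (quality, consistency, clinical impact,
    generalisability, applicability) into a single overall grade.

    Hypothetical rule: the overall grade is the grade met by the bulk of the
    criteria; a conservative reading drops it one grade if any criterion
    falls below that level.
    """
    grades = list(components.values())
    majority = Counter(grades).most_common(1)[0][0]
    if conservative and any(GRADES.index(g) > GRADES.index(majority) for g in grades):
        return GRADES[min(GRADES.index(majority) + 1, len(GRADES) - 1)]
    return majority

# Example from the text: generalisable, consistent, high-quality evidence
# (grade A on most criteria) whose techniques are not applicable to CAM practice (D).
body = {
    "quality": "A",
    "consistency": "A",
    "clinical_impact": "A",
    "generalisability": "A",
    "applicability": "D",
}
print(overall_grade(body, conservative=True))   # 'B' (conservative reading)
print(overall_grade(body, conservative=False))  # 'A' (favourable reading)
```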
Another important determinant of evidence-based decision making is the direction of the evidence. This construct establishes whether the body of evidence favours the intervention (positive evidence), the placebo and/or comparative agent (negative evidence) or neither treatment (neutral evidence) (see Table 6.3). When the direction, level (hierarchy) and strength of evidence are all taken into consideration, the use of evidence becomes more critical and judicious. So, rather than accepting level I or level II evidence at face value alone, these elements stress the need to also identify whether the strength of the evidence is adequate (i.e. grade A or B, or possibly C) and the direction of the evidence is positive (+) before integrating the evidence into CAM practice; a minimal decision check combining these three elements is sketched after Table 6.3.
Table 6.3 Direction of evidence

Symbol | Direction of evidence |
---|---|
+ | Positive evidence – intervention is more effective than the placebo or comparative agent |
o | Neutral evidence – intervention is as effective as, or no different from, the placebo or comparative agent |
– | Negative evidence – intervention is less effective than the placebo or comparative agent |
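Bringing the three constructs together, the following sketch shows one way the level, strength and direction of evidence might be checked before a finding is integrated into practice. The thresholds mirror those mentioned above (level I or II, grade A to C, positive direction), but the function itself is a hypothetical illustration rather than a formal EBP algorithm, and it does not capture the fallback to the next best available evidence when higher-level studies are lacking.

```python
def integrate_into_practice(level: str, grade: str, direction: str) -> bool:
    """Illustrative check combining the level, strength (grade) and direction
    of a body of evidence.

    level:     'I', 'II', 'III-1', 'III-2', 'III-3', 'IV', 'V' or 'VI'
    grade:     'A' (excellent) to 'D' (poor)
    direction: '+' positive, 'o' neutral, '-' negative
    """
    adequate_level = level in ("I", "II")        # systematic reviews or RCTs
    adequate_grade = grade in ("A", "B", "C")    # grade C accepted with caution
    positive_direction = direction == "+"
    return adequate_level and adequate_grade and positive_direction

# A well-conducted RCT (level II) of good strength (grade B) favouring the
# intervention (+) supports integrating the finding into CAM practice.
print(integrate_into_practice("II", "B", "+"))   # True
# Traditional evidence alone (level VI, grade D), even with a positive
# direction, does not meet these illustrative thresholds.
print(integrate_into_practice("VI", "D", "+"))   # False
```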