Health services research as a form of evidence and CAM

Published on 22/06/2015 by admin

Filed under Complementary Medicine

Last modified 22/04/2025


7. Health services research as a form of evidence and CAM
Ian D. Coulter and Raheleh Khorsan

Chapter contents

Introduction
What is health services research?
What kind of evidence does HSR collect and what kind of methods does it use?
The marriage of CAM and HSR
Whole-systems research
Is HSR whole-systems research?
Case studies and programme evaluation
Case studies
Programme evaluation
Structural evaluation
Process evaluation
Outcome evaluation
Conclusion

Introduction

Health services research (HSR), an Association for Health Services Research (AHSR) lobbyist once said, was as difficult to sell as a dead fish wrapped in newspaper (Gray et al. 2003, pp.W3–287).
HSR methods may be used to improve clinical, patient-centred, and economic outcomes across both allopathic and complementary and alternative medicine (CAM) systems of care…HSR has much to contribute to CAM, and conventional HSR has much to discover from addressing the broader range of issues required by CAM (Herman et al. 2006).
These two statements represent the range of opinions concerning the use of HSR and CAM. Perhaps the truth lies somewhere between the two. As stated by Coulter & Khorsan (2008), ‘So it would seem that HSR is neither a panacea nor the Holy Grail. It clearly has an important contribution to make, but as with all research paradigms, it addresses only one way of knowing. It may be a truth but not the only truth and certainly not the whole truth. To the CAM community “proceed with caution” might be the appropriate guideline.’
In this chapter we will explore the nature of HSR and outline what it could contribute to the researching of CAM.

What is health services research?

In a previous paper Coulter described HSR. He states:
Without an HSR component, the move towards evidence-based dentistry will remain more a promise than a reality. The major concerns of HSR – such as linking structure, process, and outcome; measuring quality of care; evaluating access, cost, services, and utilization of care; measuring health care need and health risks; assessing patient measures such as satisfaction and health-related quality of life; and appropriateness research – are all crucially important to evidence-based dentistry (Coulter 2001, pp. 720–721).
HSR was defined by the Institute of Medicine, in a major report in 1979, as the investigation of the relationship between social structure, process and outcomes for personal health services. The last involves a transaction between a client and a provider to promote health. Andersen et al. (1994) state that this definition requires that HSR include structure and/or process. The structural component includes personnel, facilities, services available, organizational features and financing. Process is the transaction that occurs between the provider and the patient. Under this definition of HSR the focus goes beyond the diseases and interventions of clinical studies to include the total organization of care delivery.
HSR involves four levels: (1) the clinical level; (2) the institutional level; (3) the systemic level; and (4) the contextual level (Andersen et al. 1994). The structure and process across organization types can affect effectiveness and clinical outcomes. At the systemic level the way in which health care is organized (e.g. a nationally funded and organized health care system) clearly has an impact on the patient–provider transaction. At the contextual level other policies (e.g. welfare policy) also have an impact.
A key structural component of HSR is finances. While HSR has made numerous contributions to understanding health care, its focus on outcomes, and on linking these to structure and process, makes it a core consideration in any discussion of health services. Health policy should involve adopting the most efficacious, effective therapies with the best outcomes within real practices and within the resources available. Cost-effectiveness and cost–benefit analyses are an essential part of that determination (Clancy & Kamerow 1996; Kay & Blinkhorn 1996). The ultimate objective of HSR is to improve the quality of care (Brown et al. 2000).
Steinwachs & Hughes (2008) note that it is largely HSR that decision-makers draw upon to inform policy decisions, and that HSR tends to be the primary source of information on how well health systems, at least in the USA, are functioning. This fact in itself should make it of prime interest to the CAM community.
The field was defined by the Academy of Health as follows:
HSR is the multidisciplinary field of scientific investigation that studies how social factors, financing systems, organizational structures and processes, health technologies, and personal behaviors affect access to health care, quality and cost of health care, and ultimately our health and well-being. Its research domains are individuals, families, organizations, institutions, communities, and populations (Lohr & Steinwachs 2002).
Clearly, then, health services research covers a huge swathe of health concerns. In fact the challenge might be to identify what is not included. Herman et al. (2006) summarize HSR as the ‘study of the effect of various components of healthcare system (e.g. social factors, financing systems, organizational structures and process, delivery of care, health technologies, personal behaviors) on healthcare outcomes (e.g. access, quality, cost, patient health and well-being)’ (p. 79).
They also note that only three areas are excluded: (1) demonstration projects; (2) studies of efficacy done in laboratories or on animals; and (3) randomized controlled trials (RCTs) using strict protocols and defined patient groups. ‘In short, HSR is based on the assumption that efficacious treatments exist. It then evaluates the various components of treatment delivery (e.g. policy, structures, processes) with respect to outcomes to make healthcare more efficient, effective, and cost-effective’ (Herman et al. 2006, p. 79).
It is clear therefore why the CAM community might be interested in HSR. Firstly, its major concerns dovetail very well with the types of concerns the CAM community is worried about, including access, utilization, funding, cost and outcomes. Secondly, because it is the field that is used by policy-makers, it has a significant role in shaping both health care debates and health care delivery. Few other fields have as much potential to influence the organization and delivery of the health system, as opposed to, say, influencing a specific therapy or treatment. Thirdly, but by no means least, it employs research methodologies that seem more appropriate to assessing CAM than the traditional RCTs so prevalent in biomedicine. There is therefore much for CAM to relate to in HSR.
Unfortunately, until quite recently, CAM has not often been a subject of HSR. One feature that HSR does share with CAM is that, relatively speaking, in the USA it receives one of the lowest amounts of funding from the National Institutes of Health (NIH) (in 2005 about 5% of the budget) (Herman et al. 2006). Of course that greatly exceeds the budget for the National Center for Complementary and Alternative Medicine (NCCAM), which was 0.42% of the total NIH budget in 2006; this might help explain why investigating CAM has not constituted a large part of HSR (Coulter 2007).

What kind of evidence does HSR collect and what kind of methods does it use?

The most dominant feature of HSR is that it is highly multidisciplinary. It is done by statisticians, epidemiologists, sociologists, psychologists, anthropologists, economists, behavioural scientists, researchers in management/organizational studies, physicians, nurses, dentists, chiropractors, acupuncturists and members of other health professions. Because of this it uses multiple methods. The hierarchy of evidence that characterizes systematic reviews and meta-analysis does not make sense in this area (Coulter 2006).
While health planners may give more weight to economic factors such as cost, these may not be the most significant variables determining the outcomes of the health care. The relevance of any form of evidence in HSR will be dictated by both the purpose for which the information is being used and the context in which it is gathered. In the house of evidence, as outlined by Jonas (2005), HSR falls into the category he labels ‘use testing’. This is in contrast to what he terms ‘effects testing’. As Jonas notes, HSR research can provide information about the relevance and utility of practices whether they are proven or unproven in terms of efficacy. HSR represents a pluralistic approach to evidence.
There is no single methodology or research design for conducting HSR. This is a point also made by Walach et al. with regard to CAM: ‘More specifically we will argue that there is no such thing as an inherently ideal methodology. There are different methods to answer different questions’ (Walach et al. 2006 p. 2).
They suggest that instead of a hierarchical model of evidence the more appropriate model is a circular one. Under this approach there are multiple optimal methods, and the most powerful method might be triangulation. This occurs when two methodologically distinct and independent approaches are used to investigate the same phenomenon. So, for example, RCTs may need to be supplemented by long-term observational studies to see if therapies have the same effect in clinical practice that they have in controlled trials.
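The quantitative side of such triangulation can be illustrated with a small sketch. The numbers below are entirely hypothetical (they are not drawn from any study cited in this chapter); the sketch simply checks whether an effect estimate from an RCT and one from an observational study are broadly consistent, using overlap of their 95% confidence intervals as a crude screen:

```python
def conf_interval(estimate, std_err, z=1.96):
    """Return an approximate 95% confidence interval (lower, upper)."""
    return (estimate - z * std_err, estimate + z * std_err)

def intervals_overlap(ci_a, ci_b):
    """Crude consistency check: do the two confidence intervals overlap?"""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Hypothetical effect of a therapy as measured in an RCT and in a
# long-term observational study of routine clinical practice.
rct_ci = conf_interval(estimate=0.40, std_err=0.10)            # (0.204, 0.596)
observational_ci = conf_interval(estimate=0.55, std_err=0.12)  # (0.3148, 0.7852)

print(intervals_overlap(rct_ci, observational_ci))  # True: the two findings cohere
```

Overlapping intervals are only a rough heuristic, not a formal test of equivalence; in practice triangulation would also weigh qualitative evidence, which no such calculation captures.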
Steinwachs & Hughes (2008) note that the report Crossing the Quality Chasm: A New Health System for the 21st Century (Committee on Quality of Health Care in America 2001) identified six critical elements: (1) patient safety; (2) effectiveness; (3) timeliness; (4) patient-centred care; (5) efficiency; and (6) equity. It is HSR that provides the measurement tools for evaluating these goals.

The marriage of CAM and HSR

In two recent articles (Herman et al. 2006; Coulter & Khorsan 2008) the merits of the match between CAM and HSR have been examined. In their paper Herman et al. (2006) identify 355 studies in the field of HSR and CAM therapies. But this represented only 2% of the studies identified in the search (up to 2005) as HSR. Of those with abstracts that clearly identified the nature of the study, the bulk were surveys of CAM users, which often included their reasons for using CAM. The next most frequent were descriptive surveys of providers ‘to obtain their characteristics, the characteristics of their patients, and the specific therapies they prescribe’ (p. 80). There was one study looking at the economic impacts of CAM, eight on research needs and five on research methods. Their paper therefore is more focused on what HSR can bring to CAM (and the reverse). They suggest that studies of integrative medicine (IM), health insurance coverage, effectiveness, cost-effectiveness, practice guidelines and whole-systems research are all areas of potential work for HSR in CAM. Because of the way the literature search was conducted (and because only those studies with abstracts were reviewed), the number of studies probably represents a very incomplete list. In two earlier papers focusing just on chiropractic, the numbers of studies listed in HSR and the social sciences were 105 (Mootz et al. 1997) and 81 (Mootz et al. 2006), respectively.
Coulter & Khorsan (2008) for their part make the case that one of the significant contributions of HSR to CAM is in the area of descriptive studies. Despite the fact that we now have numerous studies on the utilization of CAM, we still have very little good empirical data on what is done in CAM practices. From these studies we can tell the percentage of the population using various CAM professionals, but not what they are being treated for, what they are being treated with, what it costs and what the outcomes are. As they note, until we know more about the practice, the scope of practice, patient characteristics, utilization rates, patient numbers, patient health problems, therapies being used, cost and funding, it is difficult to design appropriate studies, including whole-systems research. ‘The studies on epidemiology, insurance, and cost effectiveness can all contribute to our understanding of CAM’ (Coulter & Khorsan 2008 p. 40). In the case of chiropractic there is now a large body of descriptive studies as well as other HSR studies. The latter have included studies on workers’ compensation, comparisons of chiropractic and medical care, evaluation by patients, the testing of various hypotheses about chiropractic utilization using empirical data, studies of the efficacy of chiropractic in clinical trials, meta-analyses of studies on manipulation, field studies on the appropriateness of chiropractic manipulation and studies of the economic cost of chiropractic. Therefore, there exists for chiropractic ‘an extensive body of data that describes the practice, the patients, and the providers of chiropractic’ (Coulter & Khorsan 2008 p. 42).
Because HSR focuses on existing practices and programmes in the real world as opposed to the artificial world created under RCTs, it speaks to a major concern of CAM – effectiveness. As Coulter & Khorsan note, ‘in this way HSR introduces a badly needed dose of realism into the evidence-based practice movement’ (Coulter & Khorsan 2008 p. 41). Steinwachs & Hughes (2008) observe that effectiveness research is undertaken in community settings and with patients who are not subjected to inclusion or exclusion criteria and who can be given multiple interventions.
Steinwachs & Hughes (2008) identify key areas where they feel HSR can make major contributions to CAM: studies evaluating the quality of health care; studies of the structure of health care; studies of the process of health care; studies of the outcomes of care; and public health studies focusing on preventive health services.
Coulter & Khorsan (2008) add two other major areas to this list: studies of health-related quality of life and studies of the appropriateness of care.
However, we wish to focus on its potential in another area: whole-systems research and programme evaluation.

Whole-systems research

Numerous commentators have noted that, in studying CAM, and more recently IM, we need to move away from the reductionist model used in RCTs and study the whole system. The favoured theoretical models for doing this have been systems theory (Beckman et al. 1996; Bell et al. 2002; Verhoef et al. 2005) and/or complexity theory (Kernick 2006).
Originally systems theory grew out of work in biology but was later applied in the fields of cybernetics, information theory and computing. Beckman et al. (1996) identify the following features of systems theory. First, it posits a multilevelled structure in which the whole cannot be reduced to its parts or the sum of its parts. A change in any subpart has an impact on all the other parts. Second, it posits an ecological view of systems in which a system interacts constantly with its environment and where the results are processes rather than final structures, so that health becomes a process, not an end-product. Third, it posits non-linear causality. Fourth, it sees systems as self-organizing and as having emergent properties which cannot be found in the constituent parts. Fifth, the systems are therefore self-transcendent, meaning they can transcend any one state and create new structures and processes. Sixth, the mind represents the dynamics of self-organization and is characteristic not only of individuals but also of social, cultural and ecological systems.
For Bell et al., systems theory provides a ‘rational conceptual framework within which to evaluate CAM systems, integrative medicine’ (Bell et al. 2002 p. 13). As they note, the classic view of health care looks at structure, process and outcome. But all of these present a challenge to CAM and IM. If we acknowledge that CAM and IM represent multiple systems we need scientific methods that can assess ‘multi-causal illnesses, multiple interventions, and multi-dimensional outcomes (bio-psycho-social)’ (Bell et al. 2002 p. 137). As they further note, there are new analytical and statistical methods for doing this kind of assessment.
Verhoef et al. (2005) make a similar case for whole-systems research and CAM. They further note that in whole-systems research both the patient–practitioner relationship and the therapeutic environment would not be ignored (what the present authors would term the health encounter). Furthermore, systems research would include what they term ‘model validity’; that is, it would research the unique healing theory and the therapeutic context (Jonas & Linde 2002). They note, as do Bell and her colleagues (2002), that this approach will require both quantitative and qualitative methods, especially the use of observational methods.
Kernick (2006) feels that one of the reasons HSR has not been more influential (and therefore its results less implemented) is that its research models have been too simplistic to reflect the real health care environment. The assumptions underlying the major research model (the RCT) have been linearity of causal effects, reductionism, determinism, impartiality of the observer, and that the natural state of systems is equilibrium. In contrast, he sees in complexity theory that: (1) complex systems consist of a large number of interacting elements; (2) there are reiterative feedback loops, and these non-linear instabilities lead to innovation and unpredictable behaviour; (3) small changes in one area can cause large changes across the whole system; (4) the system is different from the sum of its parts; (5) the behaviour of complex systems can result from emergent properties; (6) systems may operate away from equilibrium, there may be multiple equilibria and equilibrium states are invariably suboptimal; (7) complex systems do not have clear boundaries; and (8) history is important in complex systems and the past will influence the present. Given these features of systems, breaking them down into component parts may destroy the very thing you are trying to understand. Patterns of order evolve and self-organize: the important focus is on the interactions between the elements and not the elements themselves.

Is HSR whole-systems research?

The answer to this question is: yes and no. Coulter & Khorsan (2008) in their paper looked at the view one can get of chiropractic from HSR and from social sciences such as anthropology and sociology. Their conclusion is that HSR provides a distinctly different picture of chiropractic. In that picture chiropractic looks like a neuromusculoskeletal specialty whose focus is overwhelmingly on the spine and the neuromusculoskeletal system of the body. Chiropractic appears narrow in the scope both of its therapies (dominantly manipulation) and of the health problems it treats (back and back-related problems). However, in the case of chiropractic there is a fairly extensive body of ethnographic observation studies by social scientists (Coulter 2004). This then allows us to compare the two views.
In the social science literature chiropractic appears as a holistic practice with a broad focus on wellness. While manipulation may be the major therapy, it is given within the framework of a very broad philosophical paradigm characterized by vitalism. Although the patients may present with a very narrow range of initial health problems, the care is expanded to include posture, stress, exercise, weight, diet, nutrition and lifestyle counselling. As noted by Coulter (2004), if you only looked at the quantitative data from HSR on chiropractic (and, as noted earlier, there is an extensive body of that) you would miss all the information that you get from qualitative observation and, more importantly, miss understanding what the chiropractic health encounter is about. If we pose the question of what elements contribute to the effectiveness of chiropractic, we would not be able to answer it using the current HSR data alone.
But is this the result of abstinence (the absence of action) or impotence (the inability to perform the action)? While the outcome of these two may be the same the cause is quite different. We would like to suggest that it is the result of abstinence. We would further suggest that it is in the area of programme evaluation that we can find a solution to both this issue and the issue of whole-system research because the failure with regard to chiropractic is a failure to conduct whole-systems research.

Case studies and programme evaluation

For the most part programme evaluation is about effectiveness, not efficacy, and about total programmes, not single therapeutic interventions. But because evaluations are about programmes, they are often combined with a case study methodology (Yin 1994) and use both qualitative and quantitative methods to examine how the programmes function (Jinnett et al. 2002).

Case studies

Case study methods are particularly appropriate for studying new and emergent programmes and have been used successfully to evaluate programmes in medical centres (Yin & Heald 1975; Patton 1990). Case studies are particularly strong at discovering the key factors that facilitate and inhibit desired outcomes and at understanding the processes and mechanisms through which these factors interact. Case studies are also one of the few techniques that provide in-depth information about how programmes are working (or not working) within the larger social and organizational contexts in which they are embedded (Coulter et al. 2007).
Case study methodology relies on a two-step sampling procedure. Investigators first decide what case (or cases) they will examine (case-based sampling), and then decide what kinds of data will be collected from each case (within-case sampling). The power of case studies does not depend on the number of cases but instead comes from the range and diversity of the within-case sample of people and data collection techniques.
By using stakeholder analysis, the cases will yield a rich description of how the programmes grapple with the challenge of providing CAM or IM for patients within various settings (e.g. hospitals). An example of this type of study is the study of a hospital-based IM programme by Coulter et al. (2007).
Such studies can collect quantitative data about the programme’s organization, costs and patient loads to understand how such a programme fits into a larger care network. Coulter et al. (2007) used stakeholder analysis to describe beliefs, behaviours and vested interests of at least five groups: (1) key administrators; (2) providers of care in the programmes; (3) clinicians within the larger medical centre who may or may not have referred patients; (4) CAM providers, when they were used; and (5) patients.
The aim of the qualitative component is to elucidate issues that cannot be answered by the quantitative analyses and to explore additional areas that are difficult to address in quantitative work (Van Maanen 1979; Miles & Huberman 1994).

Programme evaluation

Programme evaluation traditionally involves three levels of evaluation: (1) structural evaluation; (2) process evaluation; and (3) outcome evaluation. The point of such evaluations is to determine the merit, worth and significance of the programme and hopefully assist those who may wish to expand, change or replicate it in other facilities.
An evaluation strategy will usually include contextual (Israel et al. 1995), formative and process (Scheirer 1994; Brindis et al. 1998) and summative elements (Rossi et al. 1999).
Contextual evaluation is used to assess and compare the environments and the population characteristics of the programme. An evaluation here will focus on the influence of these factors on the intervention structures, processes and outcomes.
In the initial phases of a programme, formative evaluation can be used to collect data on intervention structures and processes. Process evaluation is used to assess the extent to which the intervention components are implemented as planned. Summative evaluation measures the extent to which programme goals and objectives were achieved and the intermediate and longer-term impact of the programme. The programme evaluation should be a combination of qualitative and quantitative methods, allowing triangulation of the data.

Structural evaluation

Structural evaluation is used to determine the structure of the organization and will often draw on institutional documents, such as organizational charts, business plans, financial information on resources and staffing ratios, as well as interviews with key personnel. Structure involves official descriptions of the programme and the environment. It also involves the logic of the programme, its goals/milestones and its values (Scriven 1991). It would include a description of the structure and facilities of a clinic, the staffing, the equipment, support services, and the location and appearance of the clinic (Rossi et al. 1999).

Process evaluation

Process evaluation moves from what is stated on paper to what actually occurs in practice. Here evaluators will often make extensive use of one-on-one qualitative interviews with the participants in the programme (staff) but also with other key stakeholders at each site. They might also use ethnographic observation. The key is to distinguish what people say – the rhetoric – from what they do – the reality.
Often it will involve a two-stage process evaluation. It begins with an initial assessment to identify how the programme really operates (the lived programme as opposed to the programme on paper) and to establish a baseline to evaluate change over the life of the programme. This can be followed with a more expansive evaluation to identify potential mechanisms that may help account for why the programme has been more or less successful, why it has achieved the outcomes it has or the barriers that interfere with its success.
Process evaluation also includes ‘the interactions between the health care providers and patients over time’ (Steinwachs & Hughes 2008 p. 5). It can look at treatment over time, relate treatment to complaints/diagnoses, look at the number of services and ‘provides insights into the timeliness of care, organizational responsiveness, and efficiency’ (Steinwachs & Hughes 2008 p. 5). As these authors note:
evaluation of the process of care can be done by applying the six goals for health care quality. Was the patient’s safety protected (i.e. were there adverse events due to medical errors or errors of omission)? Was care timely and not delayed or denied? Were the diagnosis and treatments provided consistent with scientific evidence and best professional practice? Was the care patient-centered? Were services provided efficiently? Was the care provided equitable? (Steinwachs & Hughes 2008 p. 5).

Outcome evaluation

Outcome programme evaluation will involve both quantitative measures (such as repeated use of a programme; satisfaction scores; spending patterns; functional patient measures; health status outcomes; health-related quality of life; biological markers) and qualitative data from interviews with participants (patients) and staff (stakeholders). The outcomes measured will be highly dependent on the type of programme and its objectives.
Outcome measures will often involve comparing what the original objectives were, particularly if these were related to a needs assessment, with what has been achieved. Wherever possible this should be done with objective measures, but it will also involve qualitative measures. For each objective or goal it is necessary to operationalize an indicator that can serve as a measurement of the successful achievement of that objective/goal. This would also incorporate the earlier comment about including the philosophical model of the programme. If this model subscribes to holistic health care, to what extent is the care delivered holistic? If it claims to be IM, to what extent is it integrated?
Two types of outcomes that are frequently included in outcome assessments, in contrast to clinical outcomes, are changes in beliefs/attitudes/perceptions and changes in behaviour. In the area of CAM and IM both would be considered significant measures, as would spiritual health.
Within the field of programme evaluation there are numerous models of what outcomes are important to measure. One approach stresses determining the merit, worth and significance of the programme (Scriven 1991). Outcomes here would include direct and indirect outcomes, intended and unintended outcomes, immediate and long-term outcomes, side-effects and economic outcomes (costs versus benefits). Some approaches assess the outcomes from the point of view of policy (Henry & Melvin 2003). Still others use a theory-driven evaluation (Donaldson & Scriven 2002). Fetterman (2002) evaluates outcomes in terms of empowerment.
At its best, outcome evaluation should be useful to those in the programme (Donaldson 2001). A method which is used to help participants build a better programme is appreciative enquiry (Fetterman 2002). Such an approach is based on discovering the unique factors within a programme (leadership, relationships, culture, structure, rewards) that bring about the outcomes and envisaging a different outcome or programme but building on the current one.

Conclusion

It is clear that there is much in HSR that commends it to those who are either involved in, or want to see, more rigorous research on CAM, but who find the current evidence-based practice model, with its hierarchy of knowledge and its reliance on efficacy studies and the RCT, either an incomplete or an inadequate model for CAM research. For those who are interested in a whole-systems model, systems theory and complexity theory, while it is clear at a theoretical level that this is a fruitful way to go, no one has yet delineated a simple analytical model for applying it so that variables ranging from biological markers to ethnographic observations can be included within one model. We are suggesting here that programme evaluation offers an already established, very pragmatic way of moving towards whole-systems research (Jonas et al. 2006). It is an established field with a variety of theoretical models at its disposal; it combines both quantitative and qualitative data; it looks at the three crucial elements of structure, process and outcomes, and under those categories can capture all the elements that CAM providers have identified as important and look at their interactions; its ultimate purpose is better outcomes for patients; it captures the contextual data necessary for replicating programmes; it deals with the real world of clinical practice; it studies effectiveness, not efficacy; it has a track record of applications; and it is used by decision-makers to make health policy.
We are left with the question of how we evaluate the evaluations themselves. Are there ways in which we can distinguish rigorous evaluations from less rigorous ones? Djulbegovic et al. (2006) suggest that Wilson’s term ‘consilience’ provides a way of conceptualizing the bringing together of knowledge to overcome the fragmentation of knowledge in contemporary science. ‘The consilience is a test of truth of the theory or interpretation of evidence. The “consilience test” takes place when findings obtained from one class of facts coincide with findings obtained from a different class of observations’ (Djulbegovic et al. 2006 p. 105). In the social sciences this would be termed triangulation: the use of bodies of data from independent sources and/or gathered using different methodologies. So, within a single case study we can compare the data collected through qualitative methods with those collected from quantitative sources, and compare data from medical records with data collected from questionnaires or interviews. Across case studies we can compare the findings to determine whether consistent linkages between, say, structures and outcomes are being found.
Gartlehner et al. (2006) have devised a set of criteria that can be used to distinguish efficacy trials from effectiveness trials; the latter measure the degree of benefit achieved under real-world clinical practice. They recommend seven criteria for effectiveness trials: (1) populations should be enrolled from primary care facilities and reflect a diverse population; (2) eligibility criteria must be broad enough for the enrolled population to reflect the heterogeneity of the actual patient population (i.e. their comorbidities, compliance rates and use of other medications or therapies); (3) health outcomes that are relevant to the condition of interest should be the principal measures; (4) study duration should mimic the minimum length of treatment in a clinical setting to allow the assessment of outcomes, and compliance should be an outcome measure, not an inclusion/exclusion criterion; (5) adverse event assessments should be limited to critical issues; (6) the sample size should be sufficient to detect at least a minimally important difference on a health-related quality of life scale; and (7) statistical analysis should not exclude patients with protocol deviations, compliance problems, adverse events, changed drug regimens, comorbidities or concomitant treatments. In testing these criteria the authors found an interrater reliability of 78.3% using experts from the North American-funded Evidence-based Practice Centers to judge both efficacy and effectiveness studies. This work demonstrates that formal criteria can be established to rate the quality of effectiveness and/or evaluation studies (see Chapter 1 for quality criteria scales for evaluating effectiveness research). Thus, HSR has an important role to play in understanding CAM and deserves greater emphasis in health care.
References
Andersen, R.M.; Davidson, P.L.; Ganz, P.A., Symbiotic relationships of quality of life, health services research and other health research, Qual. Life Res. 3 (5) (1994) 365–371.
Beckman, J.F.; Fernandez, C.E.; Coulter, I.D., A systems model of health care: a proposal, J. Manipulative Physiol. Ther. 19 (3) (1996) 208–215.
Bell, I.R.; Caspi, O.; Schwartz, G.E.; et al., Integrative medicine and systemic outcomes research: issues in the emergence of a new model for primary health care, Arch. Intern. Med. 162 (2) (2002) 133–140.
Brindis, C.; Hughes, D.C.; Halfon, N.; et al., The use of formative evaluation to assess integrated services for children. The Robert Wood Johnson Foundation Child Health Initiative, Eval. Health Prof. 21 (1) (1998) 66–90.
Brown, G.C.; Brown, M.M.; Sharma, S., Health care in the 21st century: evidence-based medicine, patient preference-based quality, and cost effectiveness, Qual. Manag. Health Care 9 (1) (2000) 23–31.
Clancy, C.M.; Kamerow, D.B., Evidence-based medicine meets cost-effectiveness analysis, JAMA 276 (4) (1996) 329–330.
Committee on the Quality of Care in America, Crossing the Quality Chasm: A New Health Care System for the 21st Century. (2001) Institute of Medicine National Academy Press, Washington, DC.
Coulter, I.D., Evidence-based dentistry and health services research: is one possible without the other? J. Dent. Educ. 65 (8) (2001) 714–724.
Coulter, I.D., Competing views of chiropractic: health services research versus ethnographic observation, In: (Editors: Oths, K.S.; Hinojosa, H.Z.) Healing by Hand: Manual Medicine and Bonesetting in Global Perspective (2004) AltaMira Press, Walnut Creek, CA.
Coulter, I.D., Evidence summaries and synthesis: necessary but insufficient approach for determining clinical practice of integrated medicine? Integr. Cancer Ther. 5 (4) (2006) 282–286.
Coulter, I.D., Evidence based complementary and alternative medicine: promises and problems, Forsch. Komplementmed. 14 (2) (2007) 102–108.
Coulter, I.D.; Khorsan, R., Is health services research the Holy Grail of complementary and alternative medicine research? Altern. Ther. Health Med. 14 (4) (2008) 40–45.
Coulter, I.D.; Ellison, M.A.; Hilton, L.; et al., Hospital-Based Integrative Medicine: A Case Study of the Barriers and Factors Facilitating the Creation of a Center. RAND MG-591-NCCAM. (2007) RAND Health, Santa Monica, CA.
Djulbegovic, B.; Morris, L.; Lyman, G., Evidentiary challenges to evidence-based medicine, J. Eval. Clin. Pract. 2 (2006) 99–100.
Donaldson, S.I., Overcoming our negative reputation: evaluation becomes known as a helping profession, Am. J. Eval. 22 (2001) 355–361.
Donaldson, S.I.; Scriven, M., Evaluating Social Programs and Problems: Visions for the New Millennium. (2002) Erlbaum, Mahwah, NJ.
Fetterman, D.M., Empowerment evaluation: building communities of practice and a culture of learning, Am. J. Community Psychol. 30 (1) (2002) 89–102.
Gartlehner, G.; Hansen, R.A.; Nissman, D.; et al., A simple and valid tool distinguished efficacy from effectiveness studies, J. Clin. Epidemiol. 59 (10) (2006) 1040–1048.
Gray, B.H.; Gusmano, M.K.; Collins, S.R., AHCPR and the changing politics of health services research, Health Aff. (Millwood) (Suppl) (2003); Web Exclusives, W3-283–307.
Henry, G.T.; Mark, M.M., Beyond use: understanding evaluation’s influence on attitudes and actions, Am. J. Eval. 24 (3) (2003) 293–314.
Herman, P.M.; D’Huyvetter, K.; Mohler, M.J., Are health services research methods a match for CAM? Altern. Ther. Health Med. 12 (3) (2006) 78–83.
Institute of Medicine, Health Services Research: Report of a Study. (1979) National Academy of Sciences Press, Washington, DC; No. 78-06.
Israel, B.A.; Cummings, K.M.; Dignan, M.B.; et al., Evaluation of health education programs: current assessment and future directions, Health Educ. Q. 22 (3) (1995) 364–389.
Jinnett, K.; Coulter, I.; Koegel, P., Cases, context and care: the need for grounded network analysis, In: (Editors: Levy, J.A.; Pescosolido, B.A.) Advances in Medical Sociology, vol. 8. Social Networks and Health (2002) Elsevier Science, Oxford, UK.
Jonas, W.B., Building an evidence house: challenges and solutions to research in complementary and alternative medicine, Forsch. Komplementarmed. Klass. Naturheilkd. 12 (3) (2005) 159–167.
Jonas, W.B.; Linde, K., Conducting and evaluating clinical research in complementary and alternative medicine, In: (Editor: Gallin, J.I.) Principles and Practice of Clinical Research (2002) Academic Press, New York, pp. 401–426.
Jonas, W.B.; Beckner, W.; Coulter, I.D., Proposal for an integrated evaluation model for the study of whole systems health care in cancer, Integr. Cancer Ther. 5 (4) (2006) 315–319.
Kay, E.; Blinkhorn, A., Dental health services research: what is it and does it matter? Br. Dent. J. 180 (3) (1996) 116–117.
Kernick, D., Wanted – new methodologies for health service research. Is complexity theory the answer? Fam. Pract. 23 (3) (2006) 385–390.
Lohr, K.N.; Steinwachs, D.M., Health services research: an evolving definition of the field, Health Serv. Res. 37 (1) (2002) 7–9.
Miles, M.B.; Huberman, A.M., Qualitative Data Analysis: An Expanded Sourcebook. second ed. (1994) Sage Publications, Thousand Oaks, CA.
Mootz, R.D.; Coulter, I.D.; Hansen, D.T., Health services research related to chiropractic: review and recommendations for research prioritization by the chiropractic profession, J. Manipulative Physiol. Ther. 20 (3) (1997) 201–217.
Mootz, R.D.; Hansen, D.T.; Breen, A.; et al., Health services research related to chiropractic: review and recommendations for research prioritization by the chiropractic profession, J. Manipulative Physiol. Ther. 29 (9) (2006) 707–725.
Patton, M.Q., Qualitative Evaluation and Research Methods. second ed. (1990) Sage Publications, Newbury Park, CA.
Rossi, P.H.; Freeman, H.E.; Lipsey, M.W., Evaluation: A Systematic Approach. (1999) Sage Publications, Thousand Oaks, CA.
Scheirer, M.A., Designing and using process evaluation, In: Handbook of Practical Program Evaluation (1994) Jossey-Bass, San Francisco, pp. 40–68.
Scriven, M., Evaluation Thesaurus. (1991) Sage Publications, Newbury Park, CA.
Steinwachs, D.M.; Hughes, R.G., Chapter 8: Health services research: scope and significance, In: (Editor: Hughes, R.G.) Patient Safety and Quality: An Evidence-Based Handbook for Nurses (2008) Agency for Healthcare Research and Quality, Rockville, MD, pp. 1–15; AHRQ Publication No. 08-0043.
Van Maanen, J., Qualitative Methodology. (1979) Sage Publications, Beverly Hills, CA.
Verhoef, M.J.; Lewith, G.; Ritenbaugh, C.; et al., Complementary and alternative medicine whole systems research: beyond identification of inadequacies of the RCT, Complement. Ther. Med. 13 (3) (2005) 206–212.
Walach, H.; Falkenberg, T.; Fonnebo, V.; et al., Circular instead of hierarchical: methodological principles for the evaluation of complex interventions, BMC Med. Res. Methodol. 6 (2006) 29.
Yin, R.K., Case Study Research: Design and Methods. second ed. (1994) Sage, Newbury Park, CA.
Yin, R.K.; Heald, K.A., Using the case survey method to analyse policy studies, Adm. Sci. Q. 20 (1975) 371–381.
