4. Laboratory research and biomarkers
Hakima Amri, Mones Abu-Asab, Wayne B. Jonas and John A. Ives
Chapter contents
Introduction
CAM research: reverse-course hypothesis
Experimental design
Cellular models
Animal models
Impact of cutting-edge technologies on CAM research
High-throughput systems of analysis: the omics
Role of biomarkers in CAM basic research
Standards, quality and application in CAM basic science research
Toward standards in CAM laboratory research
Conclusion
Introduction
Laboratory research in complementary and alternative medicine (CAM) is a challenging topic due to the complexity of biological systems, the heterogeneity of CAM modalities and the diversity in their application. Scientists are well aware of the complexity of biological systems; to compensate for it, they attempt to isolate single components and study them outside their milieu. This has led to simplified research hypotheses aimed at determining the effect of a single enzymatic reaction, the impact of the loss or gain of a substrate on a cellular pathway or the effect of a receptor/ligand interaction on molecular signals. Extrapolations and correlations of the findings to a disease state or a specific pathophysiological pathway are then drawn. The experiments are performed in a test tube using a cocktail of ingredients, on cell or tissue extracts or on laboratory animals. Laboratory research of this kind is often translated to clinical research, where trials are performed on human subjects. Thus, laboratory research encompasses both the basic science and the clinical aspects of investigative science. Answering scientific questions by extrapolating from less to more complex systems has been the approach in conventional basic science research, and it has led to the deciphering of important action mechanisms at the cellular, molecular and genetic levels.
However, are these mechanisms reported to the scientific community in a comprehensive framework? In most cases the answer is no. Each new discovery is reported within its own research field, to the experts of that field, to answer one specific question. Science has gradually become so compartmentalized, and scientists so specialized, that if you want to know more about a specific oncogene, for example, there may be only one expert for you to contact. Clearly this is not an optimal approach: there are about 24 500 protein-coding genes (Clamp et al. 2007) and around 3400 cell lines from over 80 different species, including 950 cancer cell lines, held at the ATCC Global Bioresource Center (http://www.atcc.org). The problem is exacerbated by highly specialized journals, which scientists from other fields often do not even read. Furthermore, standards and validation of procedures are still at the centre of scientific debates, especially at a time when leading-edge technology is advancing faster than its laboratory application. All these factors lead one to conclude that laboratory research has become so scattered that it is a challenge to paint a big picture that could be translated to the clinic.
In these times of ‘compartmentalized’ science there is a crying need for a comprehensive framework. It is within this context that CAM finds itself, and through its renaissance it is putting pressure on decades of this unchallenged conventional scientific construct. To achieve a successful CAM renaissance, however, we must strive for the quality of evidence reached in conventional biomedical research over the decades (Jonas 2005). To establish a laboratory science, like any other biomedical endeavour, CAM must successfully pass the main evidence checkpoints: hypothesis-driven experimental design, model validity, specificity, reproducibility and an action mechanism or conceptual framework defining dependent and independent variables. This is especially difficult because CAM in general, and CAM laboratory research in particular, adds layers of complexity, heterogeneity and diversity to the already fragmented conventional scientific paradigm. The validation of CAM modalities requires the most rigorous scientific testing and clinical proof, involving product standardization, innovative biological assays, animal models and novel approaches to clinical trials, as well as bioinformatics and statistical analyses (Yuan & Lin 2000).
Is there a process that could reconcile CAM and conventional science, bringing together a scientific community that values open-mindedness and constructive criticism while taking advantage of the state-of-the-art technological advances science has achieved? We believe so. Recent advances in bioinformatics, biotechnology and biomedical research tools encourage us to predict that CAM laboratory research is positioned to move from cataloguing phenomenology to developing novel and innovative paradigms that could benefit both houses of 21st-century medicine.
This chapter addresses the laboratory designs and methodologies used to measure changes in an organ or whole organism in response to a CAM modality. These methods cover tissue culture and animal models; integrative approaches to data generation from proteomics, mass spectrometry, genomic microarrays, metabolomics, nuclear magnetic resonance and imaging; and analytical methods for the identification of novel effects of CAM modalities that may qualify as biomarkers. We also provide the CAM researcher with an overview of the challenges faced when designing bench experiments and a general idea of the cutting-edge technology currently available for carrying out evidence-based laboratory science in CAM.
CAM research: reverse-course hypothesis
Reports on the efficacy of CAM modalities in treating disorders and chronic diseases vary considerably in their quality and scientific rigour (Giordano et al. 2003; Jonas 2005). This can be attributed to the lack of a systematic scientific basis for most of these modalities, as well as to the wide variability of responses and outcomes often seen in CAM practices. Anecdotal data on a modality’s success alone do not meet the standards of evidence-based science, and it is unlikely that experimental laboratory research alone will provide adequate substantiation for some CAM modalities (Moffett et al. 2006).
Some hold the assumption that CAM research need not be hypothesis-driven. This view stems from the fact that many CAM modalities have been practised for hundreds of years, giving rise to the feeling that research is unnecessary because the clinical practice already exists. However, this runs counter to evidence-based medicine and to the scientific process itself. Scientific research is hypothesis-driven: a properly designed experiment attempts to falsify a null hypothesis and support an alternative hypothesis – the study hypothesis. The latter is an educated guess based on observation or preliminary data.
The goal of many active research programmes in CAM today is to develop mechanistic understandings of CAM modalities through systematic scientific research that goes beyond anecdotal reports. With this in mind, CAM laboratory research is emulating the already established conventional science paradigm, i.e. relating the effect of a particular active ingredient in a plant extract to its action on a specific biochemical pathway in vivo and thus elucidating its potential mechanism of action. The effects of Ginkgo biloba and St John’s wort on humans have been relatively well described. Specifically, Ginkgo biloba has been shown to affect blood circulation (McKenna et al. 2001; Wu et al. 2008), while St John’s wort appears to improve mild depression (Kasper et al. 2006, 2008). In addition, these two plants have often been studied in rat models where various mechanisms are being investigated but, in most cases, with a weak or no link to the original health conditions for which these plants are prescribed (Amri et al. 1996; Pretner et al. 2006; Hammer et al. 2008; Higuchi et al. 2008; Ivetic et al. 2008; Lee et al. 2008). As of January 2010, PubMed searches using the key words ‘Ginkgo biloba and rats’ and ‘St John’s wort and rats’ yielded 533 and 233 entries, respectively, covering a variety of mechanisms of action.
The usual progression in biomedical research is to start with a cell-free or tissue culture model, progress to intact or genetically engineered animal models, advance to pilot studies in humans and, finally, run tests in more elaborate clinical trials. In much of CAM research taking place today we are seeing this paradigm run in reverse. Often, in a CAM research laboratory, the starting assumption is that the CAM modality under study has worked for hundreds of years in humans and now there is an attempt to understand the mechanism of action. To do so, the researchers first organize human pilot studies, followed by the use of animal models and then conclude with research to define action mechanisms at the cellular and molecular levels using test tubes – the reverse of the usual order (Figure 4.1) (Fonnebo et al. 2007).
FIGURE 4.1
This viewpoint is illustrated by the studies of Jacobs et al. (2000, 2006), who performed two separate clinical studies of homeopathy and homeopathic principles. In one, the homeopathic treatment of children with diarrhoea was tailored to their symptom constellations. In the other, the subjects were given the same homeopathic regimen regardless of symptomatic variability. Although these studies tested assumptions within homeopathy, they were designed without clearly elucidated mechanisms around which to construct hypotheses. As another example, Ginkgo biloba is already used by thousands of patients in the hope that it will improve memory and counter some of the other effects of ageing, though a recent clinical trial demonstrated no effect on Alzheimer’s disease (DeKosky et al. 2008). The mechanisms by which effects might occur are not understood. Because of this gap, Amri et al. (1997, 2003) studied the action mechanisms of Ginkgo biloba. They demonstrated its beneficial effects in controlling corticosterone (the cortisol equivalent in humans) synthesis by the adrenal gland in rats in vivo, ex vivo and in vitro, at the molecular and gene regulation levels. The authors also showed that the whole extract had better effects than its isolated components, reducing corticosterone levels while keeping adrenocorticotrophic hormone levels low, which indicates that Ginkgo biloba whole extract affected the adrenal–pituitary negative-feedback loop (Figure 4.2).
FIGURE 4.2
In both examples the standard experimental design was altered in order to respect both the premise of CAM treatment efficacy and the rigour of laboratory research. In the homeopathic clinical trials, the investigators administered individualized remedies, a deviation from conventional trials in which one pharmaceutical drug is given to all subjects. The effect of Ginkgo biloba extract, on the other hand, while clinically tested in humans, has had almost all of its mechanistic research performed in animal and cellular models, down to the molecular level, to elucidate its mechanism of action.
One of the big problems within CAM research is combining the old traditions of CAM practice – such as assigning treatment on the basis of traditional, often not scientifically validated, diagnostic techniques like the taking of pulses in traditional Chinese medicine (TCM) or the symptom constellation used in homeopathy – with modern reductionist scientific approaches. Because of this, the field of CAM research is in great need of established standards that do not compromise the CAM practice itself yet place the research within a rigorous scientific framework. We also believe that, in this era of the postgenome and systems biology, laboratory testing of CAM modalities has not yet fully employed integrative high-throughput methods such as microarray gene expression and protein mass spectrometry. These techniques enable the screening of large numbers of specimens at significantly improved rates and can provide the blueprint for finding potential biomarkers (Cho 2007; Abu-Asab et al. 2008).
Experimental design
When setting up a study, significant effort should be invested at the early planning stage to avoid potential biases during the conduct of the study and to produce meaningful data and analysis. Planning starts by clearly defining the objectives of the study, then outlining the set of experiments the researcher will conduct to test the hypothesis. The experimental design should specify all the relevant elements, conditions and relationships, including: (1) selecting the number of subjects – the sample, study group or study collection; (2) pairing or grouping of subjects; (3) identifying non-experimental factors (variables) and methods to control them; (4) selecting and validating instruments to measure outcomes; (5) determining the duration of the experiment – endpoints must be suitably defined; and (6) deciding on the analytical paradigm for the analysis of the collected data.
Incorporating randomization into experiments is necessary to eliminate experimenter bias. This entails assigning objects or individuals to experimental groups by chance. It is the most reliable process for generating homogeneous treatment groups free from potential biases; however, randomization alone is not sufficient to guarantee that treatment groups are as similar as possible. An experiment should also include a sufficient number of subjects to have adequate ‘power’ to provide statistically meaningful results – that is, results we can trust to be a good representation of the class of subjects under study. A simple randomization scheme is sketched below.
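As an illustration, here is a minimal sketch of simple randomization in Python; the subject identifiers, group labels and sample size are hypothetical, and in practice blocked or stratified randomization may be preferable:

```python
# Minimal sketch: simple randomization of subjects into two groups.
# Subject IDs, group labels and the sample size are hypothetical.
import random

subjects = [f"subject_{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects

random.seed(42)           # fixing the seed makes the allocation reproducible
random.shuffle(subjects)  # chance, not the experimenter, orders the subjects

half = len(subjects) // 2  # equal-sized (balanced) groups
allocation = {
    "treatment": sorted(subjects[:half]),
    "control": sorted(subjects[half:]),
}
for group, members in allocation.items():
    print(group, members)
```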
Replication of an experiment is required to show that the effectiveness of a treatment is real and not due to random occurrence. Replication increases the robustness of experimental results, their significance and the confidence in their conclusions.
How many replicates to use in an experiment is an important question that has been addressed many times and continues to be revisited by statisticians whenever a new technology, such as the microarray, is introduced. The goal is to encompass the spectrum of variation that may occur in a population or class or, in the case of treatment effectiveness, to reach statistical significance. According to its website, the National Center for Complementary and Alternative Medicine (NCCAM) has concluded, based on discussions with statisticians, peer-reviewed articles, simulations and data from studies, that in microarray studies about 30 patients per class are needed to develop classifiers for biomarker discovery (Pontzer & Johnson 2007). However, the necessary sample size varies for each research project depending on the conditions of the experiment (e.g. inbred (mice, rats) versus outbred (humans) populations) (Wei et al. 2004). There are published formulas and web-based programs for calculating the required study size to ensure that a study is properly powered (Dobbin et al. 2008); a simple version is sketched below.
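For instance, the classical normal-approximation formula for comparing two group means can be computed directly. This is a simplified sketch: the effect size and standard deviation are hypothetical assumptions, and the approximation ignores the small-sample t-distribution correction that dedicated power software applies:

```python
# Minimal sketch: approximate per-group sample size for a two-sided,
# two-sample comparison of means. delta and sigma are hypothetical inputs.
import math
from scipy.stats import norm

def samples_per_group(delta, sigma, alpha=0.05, power=0.80):
    """n per group to detect a mean difference delta, assuming common SD sigma."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value of the two-sided test
    z_beta = norm.ppf(power)           # quantile matching the desired power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Example: detecting a one-standard-deviation shift at 80% power, alpha = 0.05
print(samples_per_group(delta=1.0, sigma=1.0))  # -> 16 per group
```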
Selection of the control group is one of the most crucial aspects of study design, especially when testing complex phenomena in the laboratory. Controls determine which part of a theoretical model is tested. In acupuncture, for example, there are the ritual, the needle, the point, the sensation and the expectancy or conditioned response: which is the most important aspect of the treatment? In homeopathy or herbal therapy, is it a specific chemical, preparation, process, combination, dose or sequence of delivery? Each of these requires different controls; thus, selecting the control is selecting the theory. Likewise, results should state precisely which aspect of the theory is elucidated by the differences from the control treatment. The need for a control group cannot be overemphasized: it is included to avoid experimental bias and placebo effects, to focus the hypothesis and to serve as a baseline for detecting differences.
This discussion outlines the standard paradigm for laboratory research in general, where guidelines and standards were established decades ago by specialized institutions such as the Clinical and Laboratory Standards Institute (CLSI) and the Clinical Laboratory Improvement Amendments (CLIA) to ensure quality standards for laboratory testing, promote accuracy and support inter- and intralaboratory reproducibility. In general, CAM laboratory research has not reached these levels of standardization and regulation, at least in the USA. However, applying these guidelines and standards to CAM research exposes their inadequacies and confirms the complexity of CAM modalities. Nevertheless, we believe that conducting CAM basic science research in the 21st century using newly developed as well as existing approved standards from the conventional field, in conjunction with cutting-edge technology, will benefit and advance CAM research.
Basic science research and CAM
Standardization dilemma in basic science research
For over a century, conventional scientific research has undergone phases of trial and error that led to the establishment of guidelines and standards. As a result, scientific research as conducted today has become high-quality, rigorous and reproducible. Tremendous effort and resources have been dedicated to developing analytical methodologies and sophisticated technologies to bring conventional scientific research to this standing. CAM research, however, has not received similar attention, primarily due to a confluence of historic, political and economic forces, and additionally due to the complexity and heterogeneity of its constructs around mode of action, many of which do not conform to the reductionist paradigm that dominates conventional research (Kurakin 2005, 2007).
The challenges in CAM clinical research design are covered in other sections of this book. CAM basic science research faces a number of experimental hurdles, starting with hypothesis generation based upon controversial putative cellular and molecular mechanisms and extending to the choice of biological model, dosage, treatment frequency and analytical methodology, to name a few. These are not, in principle, any different from the hurdles faced in conventional research. However, there is a century or more of historical experience behind the conventional approaches and models, and only a few decades of serious attention to CAM modalities of healing and their possible mechanistic underpinnings. Inherent difficulties notwithstanding, the need for standards makes it essential that these issues be addressed.
Each experimental model has its strengths and weaknesses; for example, translating in vitro data to in vivo applications often does not work. This is true of all biomedical research, whether focused on CAM or not. The researcher must therefore develop an awareness of the limitations of the various available models, which is especially important for the interpretation of the data and their translation to clinical research. It is crucial that CAM researchers be cognizant of these limitations and always consider the extra layer of complexity added by the CAM modality under study.
Cellular models
Cell lines in culture, i.e. in vitro, are widely used in almost all fields of biomedical research. The relatively low cost and fast turnaround time of this technique have contributed to its ubiquity. Cell lines derived from all types of tissues and cancers are commercially available, and researchers constantly introduce and characterize new cell lines. In vitro experiments, although easy to set up and carry out, are challenging in all fields of research. This is due, at least in part, to the variability among cell lines that arises from their susceptibility to genetic and phenotypic transformation and from potential contamination during frequent serial passages. A number of studies have reported discrepancies between cell lines and their purported characteristics (Masters et al. 1988; Chen et al. 1989; Dirks et al. 1999). One solution is to use primary cell cultures, derived from fresh normal tissues and not transformed to become immortal. These tend to have fewer artifacts than transformed immortal cell lines, but efforts to develop strategies for using them as a cell model have been hindered by cost and time (Zhang & Pasumarthi 2008).
Despite these limitations, most of our knowledge of tumour biology and gene regulation has stemmed from studying tumour cell lines. This is underscored by the over 50 000 publications reporting the use of HeLa cells and the over 20 000 reports describing use of the NIH/3T3 cell line. It is generally accepted that monolayer cultures, three-dimensional cultures and xenografts cannot entirely reproduce the biological events occurring in humans. It remains a challenge to create experimental cell models that capture the complex biology and diversity of human beings (Chin & Gray 2008).
In basic science, animal models remain the ‘gold standard’ and the crucial step before using human subjects. Certain regulatory agencies in the USA, such as the Environmental Protection Agency (EPA) and the Food and Drug Administration (FDA), encourage substituting the expensive, time-consuming and sometimes ethically problematic animal testing with bacterial tier testing if human studies are already available (Marcus 2005). This strategy is applicable to many CAM studies where natural products have already been tested on human subjects.
Animal models
The use of animals for food, transport, clothing and other products is as old as humanity itself. Their use in experimental research parallels the development of modern medicine, which had its roots in the ancient cultures of Egypt, Greece, Rome, India and China. Aristotle and Hippocrates based their knowledge of the structure and function of the human body, in the Historia Animalium and the Corpus Hippocraticum respectively, on dissections of animals. Galen carried out his experimental work on pigs, monkeys and dogs, which provided the fundamentals of medical knowledge for centuries thereafter (Baumans 2004).
Today, animal models remain the gold standard in laboratory research for testing the efficacy and effects of treatments. In vivo animal models are used to validate in vitro data and both of these are a prerequisite for clinical trials. A wide spectrum of animals is used in basic science research. Examples include: Caenorhabditis elegans, zebrafish and Drosophila in developmental biology and genetic studies; mouse, rat, guinea pig, rabbit, cat, dog and monkey in general physiology, pharmacology and neuroscience. Each model presents its own advantages and disadvantages and should be evaluated first for cost, time effectiveness and ethical appropriateness and then for the related issues of handling and management in laboratory settings. Although many models are used today to answer fundamental scientific questions, the mouse remains the model of choice, in part because of the remarkable progress achieved in producing engineered models to mimic several pathologies.
Thirty years of cancer research and millions of dollars spent on clinical and basic science research produced the genetically engineered mouse model. There are models for almost every epithelial malignancy in humans, including transgenic, ‘knockout’ and ‘knockdown’ models. The scientific and technical ability to generate engineered animal models, in which targeted conditional activation or silencing of genes and oncogenes is easily manipulated, confirms the remarkable progress achieved since Galen’s era. Since the discovery in the 1980s that Myc expression caused breast adenocarcinoma in mouse mammary epithelium, the study of genes and their application in genetic engineering has exploded (Meyer & Penn 2008).
The current science and capabilities of genetic engineering are testimony to the power of reductionist thinking. This has led to further entrenchment of the reductionist approach in biomedicine and the belief that all mechanisms of biology and disease can and must be understood – for instance, that the action mechanisms underlying tumorigenesis must be understood in order to develop the most effective treatment. Understanding action mechanisms confers validity and credibility on biomedical research. Thus, knowing that Myc activation can cause breast cancer demonstrates our understanding of mechanism. On the other hand, to say that we know a homeopathic remedy reduces prostate tumour size, or that acupuncture reduces stress, does not address the question of mechanism. The degree to which the mechanism underlying an observed effect is understood often becomes the standard by which the quality of CAM research is judged. The conventional bias is that if the effect is confirmed, reproducible and mechanistically deciphered, then and only then will the scientific community accept it and the bench-to-bedside translation be implemented. Yet, this attitude is hypocritical.
Have we cured cancer? No. Have we deciphered cellular and molecular pathways leading to cancer development? Yes. Have we developed anticancer drugs? Also yes. Are they effective in curing cancer? Often they are not. The current inability to cure cancer is one of the reasons for the observed increase in the use of CAM therapies among cancer patients (Miller et al. 2008; Verhoef et al. 2008). Furthermore, as the axiom ‘one gene, one protein’ has turned out not to be true and biological systems have proved far more complex than originally envisioned, targeted drug discovery has become increasingly difficult for pharmaceutical researchers – perhaps even impossible. More recently, a call to adopt integrative approaches to basic science research and systems biology has emerged, due at least in part to technological developments in the genomics and proteomics fields (see below). We feel that there is great potential for advancing CAM basic science research by employing these paradigms and new technologies. There is a philosophical synergy among integrative approaches, systems biology and CAM. CAM modalities amenable to basic science research are integrative and often affect more than one pathway, meaning that today we should be able to demonstrate the complexity of the response using the latest technologies employed by conventional researchers (van der Greef & McBurney 2005; Verpoorte et al. 2009).
Impact of cutting-edge technologies on CAM research
This section outlines novel integrative technologies that measure almost all changes in an organ or whole organism in response to a treatment. Such integrative methodologies could provide a comprehensive elucidation of the effects of a CAM modality. The conventional research community is slowly moving away from simplistic, reductionist thinking and towards an inclusive mapping of all changes in the biological system. This can only benefit both basic and clinical CAM research, and we encourage its use. These innovative methodologies are described below.
High-throughput systems of analysis: the omics
New high-throughput systems of genome, proteome and metabolome analysis, such as the microarray and mass spectrometry, are currently the best techniques for large-scale assessment of the whole-body response to treatment, whereas traditional methods, such as enzyme-linked immunoassay and the polymerase chain reaction, detect only a limited number of gene or protein expression products at a time (Li 2007). The data produced by these newer techniques are useful both for targeted measurements (i.e. quantitative) and for profiling (i.e. qualitative) (Abu-Asab et al. 2008). Although finding and quantifying change is achievable with the new omics, interpreting the results – establishing meaning and significance – and elucidating the pathways affected remains, as it always has been, the most difficult task a biomedical researcher faces.
We predict that, as high-throughput systems become widely used by CAM researchers, they will recast and energize CAM research because of the integrative characteristics the two share: both are concerned with changes at the organismal level. CAM modalities affect the systems biology of the whole organism, and the high-throughput omics are the quantitative and qualitative measures of whole-system change.
Public databases of high-throughput data are also a great resource for testing hypotheses when preparing research proposals or before embarking on expensive laboratory experiments. The largest publicly available and most diverse data warehouse is that of the National Center for Biotechnology Information (NCBI: http://www.ncbi.nlm.nih.gov/). NCBI creates and houses public databases, conducts research in computational biology, develops software tools for analysing genome data and disseminates biomedical information, all in order to contribute to a better understanding of the molecular processes affecting human health and disease (Barrett et al. 2009).
Genomic analysis
Genomic analysis here refers to the integrative study of gene expression alterations before and after a treatment, which could be a CAM modality. We focus on microarray technology because it allows total gene expression analysis and can reveal, without prejudice, shifts in the whole expression profile. For example, microarray gene expression profiling can identify genes whose expression has changed in response to a CAM modality by comparing gene expression in treated and untreated cells or tissues.
Microarray chips consist of small synthetic DNA fragments (also termed probes) immobilized in a specific arrangement on a coated solid surface such as glass, plastic or silicon. Each location is termed a feature. There are tens of thousands to millions of features on an array chip. Nucleic acids are extracted from specimens, labelled and then hybridized to an array. The amount of label, which corresponds to the amount of nucleic acid adhering to the probes, can be measured at each feature. This technique enables a number of applications on a whole-genome scale, such as gene and exon expression analysis, genotyping and resequencing. Microarray analysis can also be combined with chromatin immunoprecipitation for genome-wide identification of transcription factors and their binding sites.
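Conceptually, the downstream data form a matrix of per-feature intensities, and the log2 ratio of treated to control intensity is the usual first summary. The following is a toy sketch with entirely simulated values (real pipelines also normalize intensities across chips before comparing them):

```python
# Minimal sketch: per-feature log2 fold changes between two hypothetical
# hybridizations. Intensities here are simulated, not real chip output.
import numpy as np

rng = np.random.default_rng(0)
n_features = 10_000                    # probes (features) on the hypothetical chip
control = rng.lognormal(mean=6.0, sigma=1.0, size=n_features)
treated = control * rng.lognormal(mean=0.0, sigma=0.2, size=n_features)

log2_fc = np.log2((treated + 1.0) / (control + 1.0))  # +1 guards against log(0)

# Features changing more than 2-fold in either direction are candidates
candidates = np.flatnonzero(np.abs(log2_fc) > 1.0)
print(f"{candidates.size} features changed more than 2-fold")
```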
Because of reproducibility issues in microarray experiments (variability between runs and between different laboratories), a few practical considerations should be followed in the experimental design (Churchill 2002; Yang et al. 2008). First, replicates of experimental specimens should be included in the microarray analysis (i.e. treatment groups should include multiple subjects). Second, each subject’s sample should be run in duplicate – on two separate microarray chips. Third, chips that include multiple spots for the same gene probe should be used; this feature allows the detection of experimental or chip manufacturing problems. These guidelines need to be followed in order to obtain reliable data.
Although overall gene expression is indicative of the holistic status of a tissue, the values of genes of interest should be independently rechecked with other methods, such as quantitative real-time (QRT)-PCR, to ensure the accuracy of the microarray results. It is now routine practice to remeasure the expression values of significant (differentially expressed) genes before publishing microarray results (Clarke & Zhu 2006); a sketch of how such genes might be flagged is given below. Gene expression microarray studies require the extraction of messenger ribonucleic acid (mRNA) from cells or tissues and are therefore invasive when applied to human and animal subjects; they are, however, appropriate for animal models of cancer treatment, tissue culture experiments and comparative plant analysis. To avoid experimental variability between runs, it is preferable that all specimens be hybridized to chips at the same time. Finally, microarray experiments should be designed meticulously before execution, not only to increase scientific reliability but also to reduce the cost of microarray chips, which is still substantial (Hartmann 2005).
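As an illustration of how ‘differentially expressed’ genes might be flagged before such rechecking, here is a minimal sketch of a per-gene t-test with Benjamini-Hochberg false-discovery-rate control; the expression matrices, group sizes and thresholds are all hypothetical:

```python
# Minimal sketch: a per-gene significance screen with Benjamini-Hochberg FDR
# control. Expression matrices are simulated (genes x replicate subjects).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_genes, n_reps = 5_000, 6                      # replicate chips per group
control = rng.normal(8.0, 1.0, size=(n_genes, n_reps))
treated = rng.normal(8.0, 1.0, size=(n_genes, n_reps))
treated[:50] += 2.0                             # plant 50 truly changed genes

# Two-sample t-test for each gene across replicates (axis=1: per gene/row)
_, pvals = ttest_ind(treated, control, axis=1)

# Benjamini-Hochberg step-up: largest k with p_(k) <= (k/m) * q
q = 0.05
order = np.argsort(pvals)
ranked = pvals[order]
m = len(pvals)
passed = ranked <= (np.arange(1, m + 1) / m) * q
k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
significant = order[:k]                         # genes to recheck, e.g. by QRT-PCR
print(f"{significant.size} genes flagged at FDR {q}")
```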
Microarray has been used by Gao et al. (2007)