
1  

The Practice of Medicine

The Editors


 

THE PHYSICIAN IN THE TWENTY-FIRST CENTURY

     No greater opportunity, responsibility, or obligation can fall to the lot of a human being than to become a physician. In the care of the suffering, [the physician] needs technical skill, scientific knowledge, and human understanding…. Tact, sympathy, and understanding are expected of the physician, for the patient is no mere collection of symptoms, signs, disordered functions, damaged organs, and disturbed emotions. [The patient] is human, fearful, and hopeful, seeking relief, help, and reassurance.

Harrison’s Principles of Internal Medicine, 1950

The practice of medicine has changed in significant ways since the first edition of this book appeared more than 60 years ago. The advent of molecular genetics, molecular and systems biology, and molecular pathophysiology; sophisticated new imaging techniques; and advances in bioinformatics and information technology have contributed to an explosion of scientific information that has fundamentally changed the way physicians define, diagnose, treat, and attempt to prevent disease. This growth of scientific knowledge is ongoing and accelerating.

The widespread use of electronic medical records and the Internet has altered the way doctors practice medicine and access and exchange information (Fig. 1-1). As today’s physicians strive to integrate copious amounts of scientific knowledge into everyday practice, it is critically important that they remember two things: first, that the ultimate goal of medicine is to prevent disease and treat patients; and second, that despite more than 60 years of scientific advances since the first edition of this text, cultivation of the intimate relationship between physician and patient still lies at the heart of successful patient care.


FIGURE 1-1   Woodcuts from Johannes de Ketham’s Fasciculus Medicinae, the first illustrated medical text ever printed, show methods of information access and exchange in medical practice during the early Renaissance. Initially published in 1491 for use by medical students and practitioners, Fasciculus Medicinae appeared in six editions over the next 25 years. Left: Petrus de Montagnana, a well-known physician and teacher at the University of Padua and author of an anthology of instructive case studies, consults medical texts dating from antiquity up to the early Renaissance. Right: A patient with plague is attended by a physician and his attendants. (Courtesy, U.S. National Library of Medicine.)

THE SCIENCE AND ART OF MEDICINE

Deductive reasoning and applied technology form the foundation for the solution to many clinical problems. Spectacular advances in biochemistry, cell biology, and genomics, coupled with newly developed imaging techniques, allow access to the innermost parts of the cell and provide a window into the most remote recesses of the body. Revelations about the nature of genes and single cells have opened a portal for formulating a new molecular basis for the physiology of systems. Increasingly, physicians are learning how subtle changes in many different genes can affect the function of cells and organisms. Researchers are deciphering the complex mechanisms by which genes are regulated. Clinicians have developed a new appreciation of the role of stem cells in normal tissue function; in the development of cancer, degenerative diseases, and other disorders; and in the treatment of certain diseases. Entirely new areas of research, including studies of the human microbiome, have become important in understanding both health and disease. The knowledge gleaned from the science of medicine continues to enhance physicians’ understanding of complex disease processes and provide new approaches to treatment and prevention. Yet skill in the most sophisticated applications of laboratory technology and in the use of the latest therapeutic modality alone does not make a good physician.

When a patient poses challenging clinical problems, an effective physician must be able to identify the crucial elements in a complex history and physical examination; order the appropriate laboratory, imaging, and diagnostic tests; and extract the key results from densely populated computer screens to determine whether to treat or to “watch.” As the number of tests increases, so does the likelihood that some incidental finding, completely unrelated to the clinical problem at hand, will be uncovered. Deciding whether a clinical clue is worth pursuing or should be dismissed as a “red herring” and weighing whether a proposed test, preventive measure, or treatment entails a greater risk than the disease itself are essential judgments that a skilled clinician must make many times each day. This combination of medical knowledge, intuition, experience, and judgment defines the art of medicine, which is as necessary to the practice of medicine as is a sound scientific base.

CLINICAL SKILLS

History-Taking   The written history of an illness should include all the facts of medical significance in the life of the patient. Recent events should be given the most attention. Patients should, at some early point, have the opportunity to tell their own story of the illness without frequent interruption and, when appropriate, should receive expressions of interest, encouragement, and empathy from the physician. Any event related by a patient, however trivial or seemingly irrelevant, may provide the key to solving the medical problem. In general, only patients who feel comfortable with the physician will offer complete information; thus putting the patient at ease to the greatest extent possible contributes substantially to obtaining an adequate history.

An informative history is more than an orderly listing of symptoms. By listening to patients and noting the way in which they describe their symptoms, physicians can gain valuable insight. Inflections of voice, facial expression, gestures, and attitude (i.e., “body language”) may offer important clues to patients’ perception of their symptoms. Because patients vary in their medical sophistication and ability to recall facts, the reported medical history should be corroborated whenever possible. The social history also can provide important insights into the types of diseases that should be considered. The family history not only identifies rare Mendelian disorders within a family but often reveals risk factors for common disorders, such as coronary heart disease, hypertension, and asthma. A thorough family history may require input from multiple relatives to ensure completeness and accuracy; once recorded, it can be updated readily. The process of history-taking provides an opportunity to observe the patient’s behavior and to watch for features to be pursued more thoroughly during the physical examination.

The very act of eliciting the history provides the physician with an opportunity to establish or enhance the unique bond that forms the basis for the ideal patient-physician relationship. This process helps the physician develop an appreciation of the patient’s view of the illness, the patient’s expectations of the physician and the health care system, and the financial and social implications of the illness for the patient. Although current health care settings may impose time constraints on patient visits, it is important not to rush the history-taking. A hurried approach may lead patients to believe that what they are relating is not of importance to the physician, and thus they may withhold relevant information. The confidentiality of the patient-physician relationship cannot be overemphasized.

Physical Examination   The purpose of the physical examination is to identify physical signs of disease. The significance of these objective indications of disease is enhanced when they confirm a functional or structural change already suggested by the patient’s history. At times, however, physical signs may be the only evidence of disease.

The physical examination should be methodical and thorough, with consideration given to the patient’s comfort and modesty. Although attention is often directed by the history to the diseased organ or part of the body, the examination of a new patient must extend from head to toe in an objective search for abnormalities. Unless the physical examination is systematic and is performed consistently from patient to patient, important segments may be omitted inadvertently. The results of the examination, like the details of the history, should be recorded at the time they are elicited—not hours later, when they are subject to the distortions of memory. Skill in physical diagnosis is acquired with experience, but it is not merely technique that determines success in eliciting signs of disease. The detection of a few scattered petechiae, a faint diastolic murmur, or a small mass in the abdomen is not a question of keener eyes and ears or more sensitive fingers but of a mind alert to those findings. Because physical findings can change with time, the physical examination should be repeated as frequently as the clinical situation warrants.

Given the many highly sensitive diagnostic tests now available (particularly imaging techniques), it may be tempting to place less emphasis on the physical examination. Indeed, many patients are seen by consultants after a series of diagnostic tests have been performed and the results are known. This fact should not deter the physician from performing a thorough physical examination since important clinical findings may have escaped detection by the barrage of prior diagnostic tests. The act of examining (touching) the patient also offers an opportunity for communication and may have reassuring effects that foster the patient-physician relationship.

Diagnostic Studies   Physicians rely increasingly on a wide array of laboratory tests to solve clinical problems. However, accumulated laboratory data do not relieve the physician from the responsibility of carefully observing, examining, and studying the patient. It is also essential to appreciate the limitations of diagnostic tests. By virtue of their impersonal quality, complexity, and apparent precision, they often gain an aura of certainty regardless of the fallibility of the tests themselves, the instruments used in the tests, and the individuals performing or interpreting the tests. Physicians must weigh the expense involved in laboratory procedures against the value of the information these procedures are likely to provide.

Single laboratory tests are rarely ordered. Instead, physicians generally request “batteries” of multiple tests, which often prove useful. For example, abnormalities of hepatic function may provide the clue to nonspecific symptoms such as generalized weakness and increased fatigability, suggesting a diagnosis of chronic liver disease. Sometimes a single abnormality, such as an elevated serum calcium level, points to a particular disease, such as hyperparathyroidism or an underlying malignancy.

The thoughtful use of screening tests (e.g., measurement of low-density lipoprotein cholesterol) may be of great value. A group of laboratory values can conveniently be obtained with a single specimen at relatively low cost. Screening tests are most informative when they are directed toward common diseases or disorders and when their results indicate whether other useful—but often costly—tests or interventions are needed. On the one hand, biochemical measurements, together with simple laboratory determinations such as blood count, urinalysis, and erythrocyte sedimentation rate, often provide a major clue to the presence of a pathologic process. On the other hand, the physician must learn to evaluate occasional screening-test abnormalities that do not necessarily connote significant disease. An in-depth workup after the report of an isolated laboratory abnormality in a person who is otherwise well is almost invariably wasteful and unproductive. Because so many tests are performed routinely for screening purposes, it is not unusual for one or two values to be slightly abnormal. Nevertheless, even if there is no reason to suspect an underlying illness, tests yielding abnormal results ordinarily are repeated to rule out laboratory error. If an abnormality is confirmed, it is important to consider its potential significance in the context of the patient’s condition and other test results.
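
To see why isolated abnormalities are so common, consider the arithmetic of a multi-test panel: if each test uses a 95% reference interval, a healthy patient has a 5% chance of an “abnormal” result on any single test, and the chance of at least one out-of-range value grows rapidly with the number of tests. The short sketch below illustrates this point under the simplifying assumption that test results are independent; the panel sizes are hypothetical.

```python
# Probability that a healthy person shows at least one out-of-range value on a
# multi-test screening panel. Assumes each test uses a 95% reference interval
# and, as a simplification, that the results are statistically independent.

def p_any_abnormal(n_tests: int, reference_coverage: float = 0.95) -> float:
    """Return the probability of >= 1 out-of-range result in a healthy patient."""
    return 1.0 - reference_coverage ** n_tests

for n in (1, 12, 20):
    print(f"{n:2d} tests: {p_any_abnormal(n):.0%} chance of an abnormal value")
# -> roughly 5%, 46%, and 64% for 1, 12, and 20 tests, respectively
```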

The development of technically improved imaging studies with greater sensitivity and specificity proceeds apace. These tests provide remarkably detailed anatomic information that can be a pivotal factor in medical decision-making. Ultrasonography, a variety of isotopic scans, CT, MRI, and positron emission tomography have supplanted older, more invasive approaches and opened new diagnostic vistas. In light of their capabilities and the rapidity with which they can lead to a diagnosis, it is tempting to order a battery of imaging studies. All physicians have had experiences in which imaging studies revealed findings that led to an unexpected diagnosis. Nonetheless, patients must endure each of these tests, and the added cost of unnecessary testing is substantial. Furthermore, investigation of an unexpected abnormal finding may be associated with risk and/or expense and may lead to the diagnosis of an irrelevant or incidental problem. A skilled physician must learn to use these powerful diagnostic tools judiciously, always considering whether the results will alter management and benefit the patient.

PRINCIPLES OF PATIENT CARE

Evidence-Based Medicine   Evidence-based medicine refers to the making of clinical decisions that are formally supported by data, preferably data derived from prospectively designed, randomized, controlled clinical trials. This approach is in sharp contrast to anecdotal experience, which is often biased. Unless they are attuned to the importance of using larger, more objective studies for making decisions, even the most experienced physicians can be influenced to an undue extent by recent encounters with selected patients. Evidence-based medicine has become an increasingly important part of routine medical practice and has led to the publication of many practice guidelines.

Practice Guidelines   Many professional organizations and government agencies have developed formal clinical-practice guidelines to aid physicians and other caregivers in making diagnostic and therapeutic decisions that are evidence-based, cost-effective, and most appropriate to a particular patient and clinical situation. As the evidence base of medicine increases, guidelines can provide a useful framework for managing patients with particular diagnoses or symptoms. Clinical guidelines can protect patients—particularly those with inadequate health care benefits—from receiving substandard care. These guidelines also can protect conscientious caregivers from inappropriate charges of malpractice and society from the excessive costs associated with the overuse of medical resources. There are, however, caveats associated with clinical-practice guidelines since they tend to oversimplify the complexities of medicine. Furthermore, groups with different perspectives may develop divergent recommendations regarding issues as basic as the need for screening of women in their forties by mammography or of men over age 50 by serum prostate-specific antigen (PSA) assay. Finally, guidelines, as the term implies, do not—and cannot be expected to—account for the uniqueness of each individual and his or her illness. The physician’s challenge is to integrate into clinical practice the useful recommendations offered by experts without accepting them blindly or being inappropriately constrained by them.

Medical Decision-Making   Medical decision-making is an important responsibility of the physician and occurs at each stage of the diagnostic and therapeutic process. The decision-making process involves the ordering of additional tests, requests for consultations, and decisions about treatment and predictions concerning prognosis. This process requires an in-depth understanding of the pathophysiology and natural history of disease. As discussed above, medical decision-making should be evidence-based so that patients derive full benefit from the available scientific knowledge. Formulating a differential diagnosis requires not only a broad knowledge base but also the ability to assess the relative probabilities of various diseases. Application of the scientific method, including hypothesis formulation and data collection, is essential to the process of accepting or rejecting a particular diagnosis. Analysis of the differential diagnosis is an iterative process. As new information or test results are acquired, the group of disease processes being considered can be contracted or expanded appropriately.

Despite the importance of evidence-based medicine, much medical decision-making relies on good clinical judgment, an attribute that is difficult to quantify or even to assess qualitatively. Physicians must use their knowledge and experience as a basis for weighing known factors, along with the inevitable uncertainties, and then making a sound judgment; this synthesis of information is particularly important when a relevant evidence base is not available. Several quantitative tools may be invaluable in synthesizing the available information, including diagnostic tests, Bayes’ theorem, and multivariate statistical models. Diagnostic tests serve to reduce uncertainty about an individual’s diagnosis or prognosis and help the physician decide how best to manage that individual’s condition. The battery of diagnostic tests complements the history and the physical examination. The accuracy of a particular test is ascertained by determining its sensitivity (true-positive rate) and specificity (true-negative rate) as well as the predictive value of a positive and a negative result. Bayes’ theorem uses information on a test’s sensitivity and specificity, in conjunction with the pretest probability of a diagnosis, to determine mathematically the posttest probability of the diagnosis. More complex clinical problems can be approached with multivariate statistical models, which generate highly accurate information even when multiple factors are acting individually or together to affect disease risk, progression, or response to treatment. Studies comparing the performance of statistical models with that of expert clinicians have documented equivalent accuracy, although the models tend to be more consistent. Thus, multivariate statistical models may be particularly helpful to less experienced clinicians. See Chap. 3 for a more thorough discussion of decision-making in clinical medicine.
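
The role of Bayes’ theorem described above can be made concrete with a short worked example. The sketch below computes the posttest probability of disease after a positive result from a test’s sensitivity and specificity together with the pretest probability; the numerical values are hypothetical and chosen only for illustration.

```python
# Bayes' theorem applied to a diagnostic test: posttest probability of disease
# after a POSITIVE result, given sensitivity, specificity, and pretest probability.
#   P(D | +) = sens * pretest / [sens * pretest + (1 - spec) * (1 - pretest)]

def posttest_probability(pretest: float, sensitivity: float, specificity: float) -> float:
    true_positives = sensitivity * pretest                    # P(+ and disease)
    false_positives = (1.0 - specificity) * (1.0 - pretest)   # P(+ and no disease)
    return true_positives / (true_positives + false_positives)

# Hypothetical example: a test with 90% sensitivity and 90% specificity,
# applied where the pretest probability is only 10%, yields a posttest
# probability of just 50%, showing that pretest probability matters as much
# as test accuracy.
print(posttest_probability(pretest=0.10, sensitivity=0.90, specificity=0.90))  # 0.5
```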

Electronic Medical Records   Both the growing reliance on computers and the strength of information technology now play central roles in medicine. Laboratory data are accessed almost universally through computers. Many medical centers now have electronic medical records, computerized order entry, and bar-coded tracking of medications. Some of these systems are interactive, sending reminders or warning of potential medical errors.

Electronic medical records offer rapid access to information that is invaluable in enhancing health care quality and patient safety, including relevant data, historical and clinical information, imaging studies, laboratory results, and medication records. These data can be used to monitor and reduce unnecessary variations in care and to provide real-time information about processes of care and clinical outcomes. Ideally, patient records are easily transferred across the health care system. However, technologic limitations and concerns about privacy and cost continue to limit broad-based use of electronic health records in many clinical settings.

As valuable as it is, information technology is merely a tool and can never replace the clinical decisions that are best made by the physician. Clinical knowledge and an understanding of a patient’s needs, supplemented by quantitative tools, still represent the best approach to decision-making in the practice of medicine.

Evaluation of Outcomes   Clinicians generally use objective and readily measurable parameters to judge the outcome of a therapeutic intervention. These measures may oversimplify the complexity of a clinical condition as patients often present with a major clinical problem in the context of multiple complicating background illnesses. For example, a patient may present with chest pain and cardiac ischemia, but with a background of chronic obstructive pulmonary disease and renal insufficiency. For this reason, outcome measures such as mortality, length of hospital stay, or readmission rates are typically risk-adjusted. An important point is that patients usually seek medical attention for subjective reasons; they wish to obtain relief from pain, to preserve or regain function, and to enjoy life. The components of a patient’s health status or quality of life can include bodily comfort, capacity for physical activity, personal and professional function, sexual function, cognitive function, and overall perception of health. Each of these important areas can be assessed through structured interviews or specially designed questionnaires. Such assessments provide useful parameters by which a physician can judge patients’ subjective views of their disabilities and responses to treatment, particularly in chronic illness. The practice of medicine requires consideration and integration of both objective and subjective outcomes.
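
As one concrete illustration of risk adjustment, a common approach compares the observed number of events with the number expected from each patient’s predicted risk, yielding an observed-to-expected (O/E) ratio. The sketch below uses hypothetical per-patient risks; in practice these would come from a validated multivariable model.

```python
# Risk-adjusted outcome comparison via an observed-to-expected (O/E) ratio.
# The predicted per-patient mortality risks are hypothetical; in practice they
# come from a validated risk model, so that a sicker case mix raises the
# expected count rather than unfairly penalizing the provider.

predicted_risks = [0.02, 0.05, 0.05, 0.10, 0.20, 0.40]  # one predicted risk per patient
observed_deaths = 1

expected_deaths = sum(predicted_risks)        # 0.82 deaths expected for this case mix
oe_ratio = observed_deaths / expected_deaths  # ~1.22: modestly more deaths than expected
print(f"O/E ratio: {oe_ratio:.2f}")
```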

Women’s Health and Disease   Although past epidemiologic studies and clinical trials have often focused predominantly on men, more recent studies have included more women, and some, like the Women’s Health Initiative, have exclusively addressed women’s health issues. Significant sex-based differences exist in diseases that afflict both men and women. Much is still to be learned in this arena, and ongoing studies should enhance physicians’ understanding of the mechanisms underlying these differences in the course and outcome of certain diseases. For a more complete discussion of women’s health, see Chap. 6e.

Care of the Elderly   The relative proportion of elderly individuals in the populations of developed nations has grown considerably over the past few decades and will continue to grow. The practice of medicine is greatly influenced by the health care needs of this growing demographic group. The physician must understand and appreciate the decline in physiologic reserve associated with aging; the differences in appropriate doses, clearance, and responses to medications; the diminished responses of the elderly to vaccinations such as those against influenza; the different manifestations of common diseases among the elderly; and the disorders that occur commonly with aging, such as depression, dementia, frailty, urinary incontinence, and fractures. For a more complete discussion of medical care for the elderly, see Chap. 11 and Part 5, Chaps. 93e and 94e.

Errors in the Delivery of Health Care   A 1999 report from the Institute of Medicine called for an ambitious agenda to reduce medical error rates and improve patient safety by designing and implementing fundamental changes in health care systems. Adverse drug reactions occur in at least 5% of hospitalized patients, and the incidence increases with the use of a large number of drugs. Whatever the clinical situation, it is the physician’s responsibility to use powerful therapeutic measures wisely, with due regard for their beneficial actions, potential dangers, and cost. It is the responsibility of hospitals and health care organizations to develop systems to reduce risk and ensure patient safety. Medication errors can be reduced through the use of ordering systems that rely on electronic processes or, when electronic options are not available, that eliminate misreading of handwriting. Implementation of infection control systems, enforcement of hand-washing protocols, and careful oversight of antibiotic use can minimize the complications of nosocomial infections. Central-line infection rates have been dramatically reduced at many centers by careful adherence of trained personnel to standardized protocols for introducing and maintaining central lines. Rates of surgical infection and wrong-site surgery can likewise be reduced by the use of standardized protocols and checklists. Falls by patients can be minimized by judicious use of sedatives and appropriate assistance with bed-to-chair and bed-to-bathroom transitions. Taken together, these and other measures are saving thousands of lives each year.

The Physician’s Role in Informed Consent   The fundamental principles of medical ethics require physicians to act in the patient’s best interest and to respect the patient’s autonomy. These requirements are particularly relevant to the issue of informed consent. Patients are required to sign a consent form for essentially any diagnostic or therapeutic procedure. Most patients possess only limited medical knowledge and must rely on their physicians for advice. Communicating in a clear and understandable manner, physicians must fully discuss the alternatives for care and explain the risks, benefits, and likely consequences of each alternative. In every case, the physician is responsible for ensuring that the patient thoroughly understands these risks and benefits; encouraging questions is an important part of this process. This is the very definition of informed consent. Full, clear explanation and discussion of the proposed procedures and treatment can greatly mitigate the fear of the unknown that commonly accompanies hospitalization. Excellent communication can also help alleviate misunderstandings in situations where complications of intervention occur. Often the patient’s understanding is enhanced by repeatedly discussing the issues in an unthreatening and supportive way, answering new questions that occur to the patient as they arise.

Special care should be taken to ensure that a physician seeking a patient’s informed consent has no real or apparent conflict of interest involving personal gain.

The Approach to Grave Prognoses and Death   No circumstance is more distressing than the diagnosis of an incurable disease, particularly when premature death is inevitable. What should the patient and family be told? What measures should be taken to maintain life? What can be done to maintain the quality of life?

Honesty is absolutely essential in the face of a terminal illness. The patient must be given an opportunity to talk with the physician and ask questions. A wise and insightful physician uses such open communication as the basis for assessing what the patient wants to know and when he or she wants to know it. On the basis of the patient’s responses, the physician can assess the right tempo for sharing information. Ultimately, the patient must understand the expected course of the disease so that appropriate plans and preparations can be made. The patient should participate in decision-making with an understanding of the goal of treatment (palliation) and its likely effects. The patient’s religious beliefs must be taken into consideration. Some patients may find it easier to share their feelings about death with their physician, who is likely to be more objective and less emotional, than with family members.

The physician should provide or arrange for emotional, physical, and spiritual support and must be compassionate, unhurried, and open. In many instances, there is much to be gained by the laying on of hands. Pain should be controlled adequately, human dignity maintained, and isolation from family and close friends avoided. These aspects of care tend to be overlooked in hospitals, where the intrusion of life-sustaining equipment can detract from attention to the whole person and encourage concentration instead on the life-threatening disease, against which the battle ultimately will be lost in any case. In the face of terminal illness, the goal of medicine must shift from cure to care in the broadest sense of the term. Primum succurrere, first hasten to help, is a guiding principle. In offering care to a dying patient, a physician must be prepared to provide information to family members and deal with their grief and sometimes their feelings of guilt or even anger. It is important for the doctor to assure the family that everything reasonable has been done. A substantial problem in these discussions is that the physician often does not know how to gauge the prognosis. In addition, various members of the health care team may offer different opinions. Good communication among providers is essential so that consistent information is provided to patients. This is especially important when the best path forward is uncertain. Advice from experts in palliative and terminal care should be sought whenever necessary to ensure that clinicians are not providing patients with unrealistic expectations. For a more complete discussion of end-of-life care, see Chap. 10.

THE PATIENT-PHYSICIAN RELATIONSHIP

     The significance of the intimate personal relationship between physician and patient cannot be too strongly emphasized, for in an extraordinarily large number of cases both the diagnosis and treatment are directly dependent on it. One of the essential qualities of the clinician is interest in humanity, for the secret of the care of the patient is in caring for the patient.

—Francis W. Peabody, October 21, 1925,
Lecture at Harvard Medical School

Physicians must never forget that patients are individual human beings with problems that all too often transcend their physical complaints. They are not “cases” or “admissions” or “diseases.” Patients do not fail treatments; treatments fail to benefit patients. This point is particularly important in this era of high technology in clinical medicine. Most patients are anxious and fearful. Physicians should instill confidence and offer reassurance but must never come across as arrogant or patronizing. A professional attitude, coupled with warmth and openness, can do much to alleviate anxiety and to encourage patients to share all aspects of their medical history. Empathy and compassion are the essential features of a caring physician. The physician needs to consider the setting in which an illness occurs—in terms not only of patients themselves but also of their familial, social, and cultural backgrounds. The ideal patient-physician relationship is based on thorough knowledge of the patient, mutual trust, and the ability to communicate.

The Dichotomy of Inpatient and Outpatient Internal Medicine   The hospital environment has changed dramatically over the last few decades. Emergency departments and critical care units have evolved to identify and manage critically ill patients, allowing them to survive formerly fatal diseases. At the same time, there is increasing pressure to reduce the length of stay in the hospital and to manage complex disorders in the outpatient setting. This transition has been driven not only by efforts to reduce costs but also by the availability of new outpatient technologies, such as imaging and percutaneous infusion catheters for long-term antibiotics or nutrition, minimally invasive surgical procedures, and evidence that outcomes often are improved by minimizing inpatient hospitalization.

In these circumstances, two important issues arise as physicians cope with the complexities of providing care for hospitalized patients. On the one hand, highly specialized health professionals are essential to the provision of optimal acute care in the hospital; on the other, these professionals—with their diverse training, skills, responsibilities, experiences, languages, and “cultures”—need to work as a team.

In addition to traditional medical beds, hospitals now encompass multiple distinct levels of care, such as the emergency department, procedure rooms, overnight observation units, critical care units, and palliative care units. A consequence of this differentiation has been the emergence of new trends, including specialties (e.g., emergency medicine and end-of-life care) and the provision of in-hospital care by hospitalists and intensivists. Most hospitalists are board-certified internists who bear primary responsibility for the care of hospitalized patients and whose work is limited entirely to the hospital setting. The shortened length of hospital stay that is now standard means that most patients receive only acute care while hospitalized; the increased complexities of inpatient medicine make the presence of a generalist with specific training, skills, and experience in the hospital environment extremely beneficial. Intensivists are board-certified physicians who are further certified in critical care medicine and who direct and provide care for very ill patients in critical care units. Clearly, then, an important challenge in internal medicine today is to ensure the continuity of communication and information flow between a patient’s primary care doctor and these physicians who are in charge of the patient’s hospital care. Maintaining these channels of communication is frequently complicated by patient “handoffs”—i.e., from the outpatient to the inpatient environment, from the critical care unit to a general medicine floor, and from the hospital to the outpatient environment. The involvement of many care providers in conjunction with these transitions can threaten the traditional one-to-one relationship between patient and primary care physician. Of course, patients can benefit greatly from effective collaboration among a number of health care professionals; however, it is the duty of the patient’s principal or primary physician to provide cohesive guidance through an illness. To meet this challenge, primary care physicians must be familiar with the techniques, skills, and objectives of specialist physicians and allied health professionals who care for their patients in the hospital. In addition, primary care doctors must ensure that their patients will benefit from scientific advances and from the expertise of specialists when they are needed both in and out of the hospital. Primary care physicians can also explain the role of these specialists to reassure patients that they are in the hands of the physicians best trained to manage an acute illness. However, the primary care physician should retain ultimate responsibility for making major decisions about diagnosis and treatment and should assure patients and their families that decisions are being made in consultation with these specialists by a physician who has an overall and complete perspective on the case.

A key factor in mitigating the problems associated with multiple care providers is a commitment to interprofessional teamwork. Despite the diversity in training, skills, and responsibilities among health care professionals, common values need to be reinforced if patient care is not to be adversely affected. This component of effective medical care is widely recognized, and several medical schools have integrated interprofessional teamwork into their curricula. The evolving concept of the “medical home” incorporates team-based primary care with linked subspecialty care in a cohesive environment that ensures smooth transitions of care cost-effectively.

Appreciation of the Patient’s Hospital Experience   The hospital is an intimidating environment for most individuals. Hospitalized patients find themselves surrounded by air jets, buttons, and glaring lights; invaded by tubes and wires; and beset by the numerous members of the health care team—hospitalists, specialists, nurses, nurses’ aides, physicians’ assistants, social workers, technologists, physical therapists, medical students, house officers, attending and consulting physicians, and many others. They may be transported to special laboratories and imaging facilities replete with blinking lights, strange sounds, and unfamiliar personnel; they may be left unattended at times; and they may be obligated to share a room with other patients who have their own health problems. It is little wonder that a patient’s sense of reality may be compromised. Physicians who appreciate the hospital experience from the patient’s perspective and who make an effort to develop a strong relationship within which they can guide the patient through this experience may make a stressful situation more tolerable.

Trends in the Delivery of Health Care: A Challenge to the Humane Physician   Many trends in the delivery of health care tend to make medical care impersonal. These trends, some of which have been mentioned already, include (1) vigorous efforts to reduce the escalating costs of health care; (2) the growing number of managed-care programs, which are intended to reduce costs but in which the patient may have little choice in selecting a physician or in seeing that physician consistently; (3) increasing reliance on technological advances and computerization for many aspects of diagnosis and treatment; and (4) the need for numerous physicians to be involved in the care of most patients who are seriously ill.

In light of these changes in the medical care system, it is a major challenge for physicians to maintain the humane aspects of medical care. The American Board of Internal Medicine, working together with the American College of Physicians–American Society of Internal Medicine and the European Federation of Internal Medicine, has published a Charter on Medical Professionalism that underscores three main principles in physicians’ contract with society: (1) the primacy of patient welfare, (2) patient autonomy, and (3) social justice. While medical schools appropriately place substantial emphasis on professionalism, a physician’s personal attributes, including integrity, respect, and compassion, also are extremely important. Availability to the patient, expression of sincere concern, willingness to take the time to explain all aspects of the illness, and a nonjudgmental attitude when dealing with patients whose cultures, lifestyles, attitudes, and values differ from those of the physician are just a few of the characteristics of a humane physician. Every physician will, at times, be challenged by patients who evoke strongly negative or positive emotional responses. Physicians should be alert to their own reactions to such patients and situations and should consciously monitor and control their behavior so that the patient’s best interest remains the principal motivation for their actions at all times.

An important aspect of patient care involves an appreciation of the patient’s “quality of life,” a subjective assessment of what each patient values most. This assessment requires detailed, sometimes intimate knowledge of the patient, which usually can be obtained only through deliberate, unhurried, and often repeated conversations. Time pressures will always threaten these interactions, but they should not diminish the importance of understanding and seeking to fulfill the priorities of the patient.

EXPANDING FRONTIERS IN MEDICAL PRACTICE

The Era of “Omics”: Genomics, Epigenomics, Proteomics, Microbiomics, Metagenomics, Metabolomics, Exposomics …   In the spring of 2003, announcement of the complete sequencing of the human genome officially ushered in the genomic era. However, even before that landmark accomplishment, the practice of medicine had been evolving as a result of the insights into both the human genome and the genomes of a wide variety of microbes. The clinical implications of these insights are illustrated by the complete genome sequencing of H1N1 influenza virus in 2009 and the rapid identification of H1N1 influenza as a potentially fatal pandemic illness, with swift development and dissemination of an effective protective vaccine. Today, gene expression profiles are being used to guide therapy and inform prognosis for a number of diseases, the use of genotyping is providing a new means to assess the risk of certain diseases as well as variations in response to a number of drugs, and physicians are better understanding the role of certain genes in the causality of common conditions such as obesity and allergies. Despite these advances, the use of complex genomics in the diagnosis, prevention, and treatment of disease is still in its early stages. The task of physicians is complicated by the fact that phenotypes generally are determined not by genes alone but by the interplay of genetic and environmental factors. Indeed, researchers have just begun to scratch the surface of the potential applications of genomics in the practice of medicine.

Rapid progress also is being made in other areas of molecular medicine. Epigenomics is the study of alterations in chromatin and histone proteins and methylation of DNA sequences that influence gene expression. Every cell of the body has identical DNA sequences; the diverse phenotypes a person’s cells manifest are the result of epigenetic regulation of gene expression. Epigenetic alterations are associated with a number of cancers and other diseases. Proteomics, the study of the entire library of proteins made in a cell or organ and its complex relationship to disease, is enhancing the repertoire of the 23,000 genes in the human genome through alternate splicing, posttranslational processing, and posttranslational modifications that often have unique functional consequences. The presence or absence of particular proteins in the circulation or in cells is being explored for diagnostic and disease-screening applications. Microbiomics is the study of the resident microbes in humans and other mammals. The human haploid genome has ~20,000 genes, while the microbes residing on and in the human body comprise over 3–4 million genes; the contributions of these resident microbes are likely to be of great significance with regard to health status. In fact, research is demonstrating that the microbes inhabiting human mucosal and skin surfaces play a critical role in maturation of the immune system, in metabolic balance, and in disease susceptibility. A variety of environmental factors, including the use and overuse of antibiotics, have been tied experimentally to substantial increases in disorders such as obesity, metabolic syndrome, atherosclerosis, and immune-mediated diseases in both adults and children. Metagenomics, of which microbiomics is a part, is the genomic study of environmental species that have the potential to influence human biology directly or indirectly. An example is the study of exposures to microorganisms in farm environments that may be responsible for the lower incidence of asthma among children raised on farms. Metabolomics is the study of the range of metabolites in cells or organs and the ways they are altered in disease states. The aging process itself may leave telltale metabolic footprints that allow the prediction (and possibly the prevention) of organ dysfunction and disease. It seems likely that disease-associated patterns will be sought in lipids, carbohydrates, membranes, mitochondria, and other vital components of cells and tissues. Finally, exposomics refers to efforts to catalogue and capture environmental exposures such as smoking, sunlight, diet, exercise, education, and violence, which together have an enormous impact on health. All of this new information represents a challenge to the traditional reductionist approach to medical thinking. The variability of results in different patients, together with the large number of variables that can be assessed, creates difficulties in identifying preclinical disease and defining disease states unequivocally. Accordingly, the tools of systems biology and network medicine are being applied to the enormous body of information now obtainable for every patient and may eventually provide new approaches to classifying disease. For a more complete discussion of a complex systems approach to human disease, see Chap. 87e.

The rapidity of these advances may seem overwhelming to practicing physicians. However, physicians have an important role to play in ensuring that these powerful technologies and sources of new information are applied with sensitivity and intelligence to the patient. Since “omics” are evolving so rapidly, physicians and other health care professionals must continue to educate themselves so that they can apply this new knowledge to the benefit of their patients’ health and well-being. Genetic testing requires wise counsel based on an understanding of the value and limitations of the tests as well as the implications of their results for specific individuals. For a more complete discussion of genetic testing, see Chap. 84.

The Globalization of Medicine   Physicians should be cognizant of diseases and health care services beyond local boundaries. Global travel has implications for disease spread: it is not uncommon for diseases endemic to one region to appear in another after a patient has traveled there and returned home. In addition, factors such as wars, the migration of refugees, and climate change are contributing to changing disease profiles worldwide. Patients have broader access to unique expertise or clinical trials at distant medical centers, and the cost of travel may be offset by the quality of care at those distant locations. As much as any other factor influencing global aspects of medicine, the Internet has transformed the transfer of medical information throughout the world. This change has been accompanied by the transfer of technological skills through telemedicine and international consultation—for example, regarding radiologic images and pathologic specimens. For a complete discussion of global issues, see Chap. 2.

Medicine on the Internet   On the whole, the Internet has had a very positive effect on the practice of medicine; through personal computers, a wide range of information is available to physicians and patients almost instantaneously at any time and from anywhere in the world. This medium holds enormous potential for the delivery of current information, practice guidelines, state-of-the-art conferences, journal content, textbooks (including this text), and direct communications with other physicians and specialists, expanding the depth and breadth of information available to the physician regarding the diagnosis and care of patients. Medical journals are now accessible online, providing rapid sources of new information. By bringing them into direct and timely contact with the latest developments in medical care, this medium also serves to lessen the information gap that has hampered physicians and health care providers in remote areas.

Patients, too, are turning to the Internet in increasing numbers to acquire information about their illnesses and therapies and to join Internet-based support groups. Patients often arrive at a clinic visit with sophisticated information about their illnesses. In this regard, physicians are challenged in a positive way to keep abreast of the latest relevant information while serving as an “editor” as patients navigate this seemingly endless source of information, the accuracy and validity of which are not uniform.

A critically important caveat is that virtually anything can be published on the Internet, with easy circumvention of the peer-review process that is an essential feature of academic publications. Both physicians and patients who search the Internet for medical information must be aware of this danger. Notwithstanding this limitation, appropriate use of the Internet is revolutionizing information access for physicians and patients and in this regard represents a remarkable resource that was not available to practitioners a generation ago.

Public Expectations and Accountability   The general public’s level of knowledge and sophistication regarding health issues has grown rapidly over the last few decades. As a result, expectations of the health care system in general and of physicians in particular have risen. Physicians are expected to master rapidly advancing fields (the science of medicine) while considering their patients’ unique needs (the art of medicine). Thus, physicians are held accountable not only for the technical aspects of the care they provide but also for their patients’ satisfaction with the delivery and costs of care.

In many parts of the world, physicians increasingly are expected to account for the way in which they practice medicine by meeting certain standards prescribed by federal and local governments. The hospitalization of patients whose health care costs are reimbursed by the government and other third parties is subjected to utilization review. Thus, a physician must defend the cause for and duration of a patient’s hospitalization if it falls outside certain “average” standards. Authorization for reimbursement increasingly is based on documentation of the nature and complexity of an illness, as reflected by recorded elements of the history and physical examination. A growing “pay-for-performance” movement seeks to link reimbursement to quality of care. The goal of this movement is to improve standards of health care and contain spiraling health care costs. In many parts of the United States, managed (capitated) care contracts with insurers have replaced traditional fee-for-service care, placing the onus of managing the cost of all care directly on the providers and increasing the emphasis on preventive strategies. In addition, physicians are expected to give evidence of their current competence through mandatory continuing education, patient record audits, maintenance of certification, and relicensing.

Medical Ethics and New Technologies   The rapid pace of technological advance has profound implications for medical applications that go far beyond the traditional goals of disease prevention, treatment, and cure. Cloning, genetic engineering, gene therapy, human–computer interfaces, nanotechnology, and use of designer drugs have the potential to modify inherited predispositions to disease, select desired characteristics in embryos, augment “normal” human performance, replace failing tissues, and substantially prolong life span. Given their unique training, physicians have a responsibility to help shape the debate on the appropriate uses of and limits placed on these new techniques and to consider carefully the ethical issues associated with the implementation of such interventions.

The Physician as Perpetual Student   From the time doctors graduate from medical school, it becomes all too apparent that their lot is that of the “perpetual student” and that the mosaic of their knowledge and experiences is eternally unfinished. This realization is at the same time exhilarating and anxiety-provoking. It is exhilarating because doctors can apply constantly expanding knowledge to the treatment of their patients; it is anxiety-provoking because doctors realize that they will never know as much as they want or need to know. Ideally, doctors will translate the latter feeling into energy through which they can continue to improve themselves and reach their potential as physicians. It is the physician’s responsibility to pursue new knowledge continually by reading, attending conferences and courses, and consulting colleagues and the Internet. This is often a difficult task for a busy practitioner; however, a commitment to continued learning is an integral part of being a physician and must be given the highest priority.

The Physician as Citizen   Being a physician is a privilege. The capacity to apply one’s skills for the benefit of one’s fellow human beings is a noble calling. The doctor–patient relationship is inherently unbalanced in the distribution of power. In light of their influence, physicians must always be aware of the potential impact of what they do and say and must always strive to strip away individual biases and preferences to find what is best for the patient. To the extent possible, physicians should also act within their communities to promote health and alleviate suffering. Meeting these goals begins by setting a healthy example and continues in taking action to deliver needed care even when personal financial compensation may not be available.

A goal for medicine and its practitioners is to strive to provide the means by which the poor can cease to be unwell.

Learning Medicine   It has been a century since the publication of the Flexner Report, a seminal study that transformed medical education and emphasized the scientific foundations of medicine as well as the acquisition of clinical skills. In an era of burgeoning information and access to medical simulation and informatics, many schools are implementing new curricula that emphasize lifelong learning and the acquisition of competencies in teamwork, communication skills, system-based practice, and professionalism. These and other features of the medical school curriculum provide the foundation for many of the themes highlighted in this chapter and are expected to allow physicians to progress, with experience and learning over time, from competency to proficiency to mastery.

At a time when the amount of information that must be mastered to practice medicine continues to expand, increasing pressures both within and outside of medicine have led to the implementation of restrictions on the amount of time a physician-in-training can spend in the hospital. Because the benefits associated with continuity of medical care and observation of a patient’s progress over time were thought to be outstripped by the stresses imposed on trainees by long hours and by the fatigue-related errors they made in caring for patients, strict limits were set on the number of patients that trainees could be responsible for at one time, the number of new patients they could evaluate in a day on call, and the number of hours they could spend in the hospital. In 1980, residents in medicine worked in the hospital more than 90 hours per week on average. In 1989, their hours were restricted to no more than 80 per week. Resident physicians’ hours further decreased by ~10% between 1996 and 2008, and in 2010 the Accreditation Council for Graduate Medical Education further restricted (i.e., to 16 hours per shift) consecutive in-hospital duty hours for first-year residents. The impact of these changes is still being assessed, but the evidence that medical errors have decreased as a consequence is sparse. An unavoidable by-product of fewer hours at work is an increase in the number of “handoffs” of patient responsibility from one physician to another. These transfers often involve a transition from a physician who knows the patient well, having evaluated that individual on admission, to a physician who knows the patient less well. It is imperative that these transitions of responsibility be handled with care and thoroughness, with all relevant information exchanged and acknowledged.

Research, Teaching, and the Practice of Medicine   The word doctor is derived from the Latin docere, “to teach.” As teachers, physicians should share information and medical knowledge with colleagues, students of medicine and related professions, and their patients. The practice of medicine is dependent on the sum total of medical knowledge, which in turn is based on an unending chain of scientific discovery, clinical observation, analysis, and interpretation. Advances in medicine depend on the acquisition of new information through research, and improved medical care requires the transmission of that information. As part of their broader societal responsibilities, physicians should encourage patients to participate in ethical and properly approved clinical investigations if these studies do not impose undue hazard, discomfort, or inconvenience. However, physicians engaged in clinical research must be alert to potential conflicts of interest between their research goals and their obligations to individual patients. The best interests of the patient must always take priority.

     To wrest from nature the secrets which have perplexed philosophers in all ages, to track to their sources the causes of disease, to correlate the vast stores of knowledge, that they may be quickly available for the prevention and cure of disease—these are our ambitions.

—William Osler, 1849–1919


2

Global Issues in Medicine

Paul Farmer, Joseph Rhatigan


 

WHY GLOBAL HEALTH?


Global health is not a discipline; it is, rather, a collection of problems. Some scholars have defined global health as the field of study and practice concerned with improving the health of all people and achieving health equity worldwide, with an emphasis on addressing transnational problems. No single review can do much more than identify the leading problems in applying evidence-based medicine in settings of great poverty or across national boundaries. However, this is a moment of opportunity: only recently, persistent epidemics, improved metrics, and growing interest have been matched by an unprecedented investment in addressing the health problems of poor people in the developing world. To ensure that this opportunity is not wasted, the facts need to be laid out for specialists and laypeople alike. This chapter introduces the major international bodies that address health problems; identifies the more significant barriers to improving the health of people who to date have not, by and large, had access to modern medicine; and summarizes population-based data on the most common health problems faced by people living in poverty. Examining specific problems—notably HIV/AIDS (Chap. 226) but also tuberculosis (TB, Chap. 202), malaria (Chap. 248), and key “noncommunicable” chronic diseases (NCDs)—helps sharpen the discussion of barriers to prevention, diagnosis, and care as well as the means of overcoming them. This chapter closes by discussing global health equity, drawing on notions of social justice that once were central to international public health but had fallen out of favor during the last decades of the twentieth century.

A BRIEF HISTORY OF GLOBAL HEALTH INSTITUTIONS


Concern about health across national boundaries dates back many centuries, predating the Black Death and other pandemics. One of the first organizations founded explicitly to tackle cross-border health issues was the Pan American Sanitary Bureau, which was formed in 1902 by 11 countries in the Americas. The primary goal of what later became the Pan American Health Organization was the control of infectious diseases across the Americas. Of special concern was yellow fever, which had been running a deadly course through much of South and Central America and had halted the construction of the Panama Canal. In 1948, the United Nations formed the first truly global health institution: the World Health Organization (WHO). In 1958, under the aegis of the WHO and in line with a long-standing focus on communicable diseases that cross borders, leaders in global health initiated the effort that led to what some see as the greatest success in international health: the eradication of smallpox. Naysayers were surprised when the smallpox eradication campaign, which engaged public health officials throughout the world, proved successful in 1979 despite the ongoing Cold War.

At the International Conference on Primary Health Care in Alma-Ata (in what is now Kazakhstan) in 1978, public health officials from around the world agreed on a commitment to “Health for All by the Year 2000,” a goal to be achieved by providing universal access to primary health care worldwide. Critics argued that the attainment of this goal by the proposed date was impossible. In the ensuing years, a strategy for the provision of selective primary health care emerged that included four inexpensive interventions collectively known as GOBI: growth monitoring, oral rehydration, breast-feeding, and immunizations for diphtheria, whooping cough, tetanus, polio, TB, and measles. GOBI later was expanded to GOBI-FFF, which also included female education, food, and family planning. Some public health figures saw GOBI-FFF as an interim strategy to achieve “health for all,” but others criticized it as a retreat from the bolder commitments of Alma-Ata.

The influence of the WHO waned during the 1980s. In the early 1990s, many observers argued that, with its vastly superior financial resources and its close—if unequal—relationships with the governments of poor countries, the World Bank had eclipsed the WHO as the most important multilateral institution working in the area of health. One of the stated goals of the World Bank was to help poor countries identify “cost-effective” interventions worthy of public funding and international support. At the same time, the World Bank encouraged many of those nations to reduce public expenditures in health and education in order to stimulate economic growth as part of (later discredited) structural adjustment programs whose restrictions were imposed as a condition for access to credit and assistance through international financial institutions such as the World Bank and the International Monetary Fund. There was a resurgence of many diseases, including malaria, trypanosomiasis, and schistosomiasis, in Africa. TB, an eminently curable disease, remained the world’s leading infectious killer of adults. Half a million women per year died in childbirth during the last decade of the twentieth century, and few of the world’s largest philanthropic or funding institutions focused on global health equity.

HIV/AIDS, first described in 1981, precipitated a change. In the United States, the advent of this newly described infectious killer marked the culmination of a series of events that discredited talk of “closing the book” on infectious diseases. In Africa, which would emerge as the global epicenter of the pandemic, HIV disease strained TB control programs, and malaria continued to take as many lives as ever. At the dawn of the twenty-first century, these three diseases alone killed nearly 6 million people each year. New research, new policies, and new funding mechanisms were called for. The past decade has seen the rise of important multilateral global health financing institutions such as the Global Fund to Fight AIDS, Tuberculosis, and Malaria; bilateral efforts such as the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR); and private philanthropic organizations such as the Bill & Melinda Gates Foundation. With its 193 member states and 147 country offices, the WHO remains important in matters relating to the cross-border spread of infectious diseases and other health threats. In the aftermath of the epidemic of severe acute respiratory syndrome in 2003, the WHO’s International Health Regulations—which provide a legal foundation for that organization’s direct investigation into a wide range of global health problems, including pandemic influenza, in any member state—were strengthened and brought into force in May 2007.

Even as attention to and resources for health problems in poor countries grow, the lack of coherence in and among global health institutions may undermine efforts to forge a more comprehensive and effective response. The WHO remains underfunded despite the ever-growing need to engage a wider and more complex range of health issues. In another instance of the paradoxical impact of success, the rapid growth of the Gates Foundation, which is one of the most important developments in the history of global health, has led some foundations to question the wisdom of continuing to invest their more modest resources in this field. This indeed may be what some have called “the golden age of global health,” but leaders of major organizations such as the WHO, the Global Fund, the United Nations Children’s Fund (UNICEF), the Joint United Nations Programme on HIV/AIDS (UNAIDS), PEPFAR, and the Gates Foundation must work together to design an effective architecture that will make the most of opportunities to link new resources for and commitments to global health equity with the emerging understanding of disease burden and unmet need. To this end, new and old players in global health must invest heavily in discovery (relevant basic science), development of new tools (preventive, diagnostic, and therapeutic), and modes of delivery that will ensure the equitable provision of health products and services to all who need them.

THE ECONOMICS OF GLOBAL HEALTH

Political and economic concerns have often guided global health interventions. As mentioned, early efforts to control yellow fever were tied to the completion of the Panama Canal. However, the precise nature of the link between economics and health remains a matter for debate. Some economists and demographers argue that improving the health status of populations must begin with economic development; others maintain that addressing ill health is the starting point for development in poor countries. In either case, investment in health care, especially the control of communicable diseases, should lead to increased productivity. The question is where to find the necessary resources to start the predicted “virtuous cycle.”

During the past two decades, spending on health in poor countries has increased dramatically. According to a study from the Institute for Health Metrics and Evaluation (IHME) at the University of Washington, total development assistance for health worldwide grew to $28.2 billion in 2010—up from $5.6 billion in 1990. In 2010, the leading contributors included U.S. bilateral agencies such as PEPFAR, the Global Fund, nongovernmental organizations (NGOs), the WHO, the World Bank, and the Gates Foundation. It appears, however, that total development assistance for health plateaued in 2010, and it is unclear whether growth will continue in the upcoming decade.

To reach the United Nations Millennium Development Goals, which include targets for poverty reduction, universal primary education, and gender equality, spending in the health sector must be increased above the 2010 levels. To determine by how much and for how long, it is imperative to improve the ability to assess the global burden of disease and to plan interventions that more precisely match need.

MORTALITY AND THE GLOBAL BURDEN OF DISEASE

Refining metrics is an important task for global health: only recently have there been solid assessments of the global burden of disease. The first study to look seriously at this issue, conducted in 1990, laid the foundation for the first report on Disease Control Priorities in Developing Countries and for the World Bank’s 1993 World Development Report Investing in Health. Those efforts represented a major advance in the understanding of health status in developing countries. Investing in Health has been especially influential: it familiarized a broad audience with cost-effectiveness analysis for specific health interventions and with the notion of disability-adjusted life years (DALYs). The DALY, which has become a standard measure of the impact of a specific health condition on a population, combines absolute years of life lost and years lost due to disability for incident cases of a condition. (See Fig. 2-1 and Table 2-1 for an analysis of the global disease burden by DALYs.)
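In schematic terms, the DALY is the sum of two components (a standard formulation; the GBD studies add refinements such as recalibrated disability weights):

DALY = YLL + YLD, where YLL (years of life lost) = N × L and YLD (years lived with disability) = I × DW × D

Here N is the number of deaths, L the standard life expectancy at the age of death, I the number of incident cases, DW the disability weight (0 for full health, 1 for death), and D the average duration of disability. As a hypothetical illustration: a condition that kills no one but leaves 1,000 people with a disability weight of 0.2 for 10 years generates 1,000 × 0.2 × 10 = 2,000 DALYs.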


FIGURE 2-1   Global DALY (disability-adjusted life year) ranks for the top causes of disease burden in 1990 and 2010. COPD, chronic obstructive pulmonary disease. (Reproduced with permission from C Murray et al: Disability-adjusted life years [DALYs] for 291 diseases and injuries in 21 regions, 1990–2010: A systematic analysis for the Global Burden of Disease Study 2010. Lancet 380:2197–2223, 2012.)

TABLE 2-1

LEADING CAUSES OF DISEASE BURDEN, 2010


In 2012, the IHME and partner institutions began publishing results from the Global Burden of Diseases, Injuries, and Risk Factors Study 2010 (GBD 2010). GBD 2010 is the most comprehensive effort to date to produce longitudinal, globally complete, and comparable estimates of the burden of diseases, injuries, and risk factors. This report reflects the expansion of the available data on health in the poorest countries and of the capacity to quantify the impact of specific conditions on a population. It measures current levels and recent trends in all major diseases, injuries, and risk factors among 21 regions and for 20 age groups and both sexes. The GBD 2010 team revised and improved the health-state severity weight system, collated published data, and used household surveys to enhance the breadth and accuracy of disease burden data. As analytic methods and data quality improve, important trends can be identified in a comparison of global disease burden estimates from 1990 to 2010.

GLOBAL MORTALITY

Of the 52.8 million deaths worldwide in 2010, 24.6% (13 million) were due to communicable diseases, maternal and perinatal conditions, and nutritional deficiencies—a marked decrease from 1990, when these conditions accounted for 34% of global mortality. Of all deaths related to communicable diseases, maternal and perinatal conditions, and nutritional deficiencies, 76% occurred in sub-Saharan Africa and southern Asia. While the proportion of deaths due to these conditions has decreased significantly in the past decade, there has been a dramatic rise in the number of deaths from NCDs, which constituted the top five causes of death in 2010. The leading cause of death among adults in 2010 was ischemic heart disease, accounting for 7.3 million deaths (13.8% of total deaths) worldwide. In high-income countries ischemic heart disease accounted for 17.9% of total deaths, and in developing (low- and middle-income) countries it accounted for 10.1%. It is noteworthy that ischemic heart disease was responsible for just 2.6% of total deaths in sub-Saharan Africa (Table 2-2). In second place—causing 11.1% of global mortality—was cerebrovascular disease, which accounted for 9.9% of deaths in high-income countries, 10.5% in developing countries, and 4.0% in sub-Saharan Africa. Although lung cancer was the third leading cause of death in high-income countries (accounting for 5.6% of all deaths), it did not figure among the top 10 causes in low- and middle-income countries. Among the 10 leading causes of death in sub-Saharan Africa, 6 were infectious diseases, with malaria and HIV/AIDS ranking as the dominant contributors to disease burden. In high-income countries, however, only one infectious disease—lower respiratory infection—ranked among the top 10 causes of death.

TABLE 2-2

LEADING CAUSES OF DEATH WORLDWIDE, 2010


The GBD 2010 found that the worldwide mortality figure among children <5 years of age dropped from 16.39 million in 1970 to 11.9 million in 1990 and to 6.8 million in 2010—a decrease that surpassed predictions. Of childhood deaths in 2010, 3.1 million (40%) occurred in the neonatal period. About one-third of deaths among children <5 years old occurred in southern Asia and almost one-half in sub-Saharan Africa; <1% occurred in high-income countries.

The global burden of death due to HIV/AIDS and malaria was on an upward slope until 2004; significant improvements have since been documented. Global deaths from HIV infection fell from 1.7 million in 2006 to 1.5 million in 2010, while malaria deaths dropped from 1.2 million to 0.98 million over the same period. Despite these improvements, malaria and HIV/AIDS continue to be major burdens in particular regions, with global implications. Although it has a minor impact on mortality outside sub-Saharan Africa and Southeast Asia, malaria is the eleventh leading cause of death worldwide. HIV infection ranked thirty-third in global DALYs in 1990 but was the fifth leading cause of disease burden in 2010, with sub-Saharan Africa bearing the vast majority of this burden (Fig. 2-1).

The world’s population is living longer: global life expectancy has increased significantly over the past 40 years from 58.8 years in 1970 to 70.4 years in 2010. This demographic change, accompanied by the fact that the prevalence of NCDs increases with age, is dramatically shifting the burden of disease toward NCDs, which have surpassed communicable, maternal, nutritional, and neonatal causes. By 2010, 65.5% of total deaths at all ages and 54% of all DALYs were due to NCDs. Increasingly, the global burden of disease comprises conditions and injuries that cause disability rather than death.

Worldwide, although both life expectancy and years of life lived in good health have risen, years of life lived with disability have also increased. Despite the higher prevalence of diseases common in older populations (e.g., dementia and musculoskeletal disease) in developed and high-income countries, best estimates from 2010 reveal that disability resulting from cardiovascular diseases, chronic respiratory diseases, and the long-term impact of communicable diseases was greater in low- and middle-income countries. In most developing countries, people lived shorter lives and experienced disability and poor health for a greater proportion of their lives. Indeed, 50% of the global burden of disease occurred in southern Asia and sub-Saharan Africa, which together account for only 35% of the world’s population.

HEALTH AND WEALTH

Clear disparities in burden of disease (both communicable and noncommunicable) across country income levels are strong indicators that poverty and health are inherently linked. Poverty remains one of the most important root causes of poor health worldwide, and the global burden of poverty continues to be high. Among the 6.7 billion people alive in 2008, 19% (1.29 billion) lived on less than $1.25 a day—one standard measurement of extreme poverty—and another 1.18 billion lived on $1.25 to $2 a day. Approximately 600 million children—more than 30% of those in low-income countries—lived in extreme poverty in 2005. Comparison of national health indicators with gross domestic product per capita among nations shows a clear relationship between higher gross domestic product and better health, with only a few outliers. Numerous studies have also documented the link between poverty and health within nations as well as across them.

RISK FACTORS FOR DISEASE BURDEN

The GBD 2010 study found that the three leading risk factors for global disease burden in 2010 were (in order of frequency) high blood pressure, tobacco smoking (including secondhand smoke), and alcohol use—a substantial change from 1990, when childhood undernutrition was ranked first. Though ranking eighth in 2010, childhood undernutrition remains the leading risk factor for death worldwide among children <5 years of age. In an era that has seen obesity become a major health concern in many developed countries—and the sixth leading risk factor worldwide—the persistence of undernutrition is surely cause for great consternation. Low body weight is still the dominant risk factor for disease burden in sub-Saharan Africa. Inability to feed the hungry reflects many years of failed development projects and must be addressed as a problem of the highest priority. Indeed, no health care initiative, however generously funded, will be effective without adequate nutrition.

In a 2006 publication that examined how specific diseases and injuries are affected by environmental risk, the WHO estimated that roughly one-quarter of the total global burden of disease, one-third of the global disease burden among children, and 23% of all deaths were due to modifiable environmental factors. Many of these factors lead to deaths from infectious diseases; others lead to deaths from malignancies. Etiology and nosology are increasingly difficult to parse. As much as 94% of diarrheal disease, which is linked to unsafe drinking water and poor sanitation, can be attributed to environmental factors. Risk factors such as indoor air pollution due to use of solid fuels, exposure to secondhand tobacco smoke, and outdoor air pollution account for 20% of lower respiratory infections in developed countries and for as many as 42% of such infections in developing countries. Various forms of unintentional injury and malaria top the list of health problems to which environmental factors contribute. Some 4 million children die every year from causes related to unhealthy environments, and the number of infant deaths due to environmental factors in developing countries is 12 times that in developed countries.

The second edition of Disease Control Priorities in Developing Countries, published in 2006, is a document of great breadth and ambition, providing cost-effectiveness analyses for more than 100 interventions and including 21 chapters focused on strategies for strengthening health systems. Cost-effectiveness analyses that compare relatively equivalent interventions and facilitate the best choices under constraint are necessary; however, these analyses are often based on an incomplete knowledge of cost and evolving evidence of effectiveness. As both resources and objectives for global health grow, cost-effectiveness analyses (particularly those based on older evidence) must not hobble the increased worldwide commitment to providing resources and accessible health care services to all who need them. This is why we use the term global health equity. To illustrate these points, it is instructive to look to HIV/AIDS, which in the course of the last three decades has become the world’s leading infectious cause of adult death.

HIV INFECTION/AIDS

Chapter 226 provides an overview of the HIV epidemic in the world today. Here the discussion will be limited to HIV/AIDS in the developing world. Lessons learned from tackling HIV/AIDS in resource-constrained settings are highly relevant to discussions of other chronic diseases, including NCDs, for which effective therapies have been developed.

Approximately 34 million people worldwide were living with HIV infection in 2011; more than 8 million of those in low- and middle-income countries were receiving antiretroviral therapy (ART)—a 20-fold increase over the corresponding figure for 2003. By the end of 2011, 54% of people eligible for treatment were receiving ART. (It remains to be seen how many of these people are receiving ART regularly and with the requisite social support.)

In the United States, the availability of ART has transformed HIV/AIDS from an inescapably fatal destruction of cell-mediated immunity into a manageable chronic illness. In high-income countries, improved ART has prolonged life by an estimated average of 35 years per patient—up from 6.8 years in 1993 and 24 years in 2006. This success rate exceeds that obtained with almost any treatment for cancers in adults or for complications of coronary artery disease. In developing countries, treatment has been offered broadly only since 2003, and only in 2009 did the number of patients receiving treatment exceed 40% of the number who needed it. Before 2003, many arguments were raised to justify not moving forward rapidly with ART programs for people living with HIV/AIDS in resource-limited settings. The standard litany included the price of therapy compared with the poverty of the patient, the complexity of the intervention, the lack of infrastructure for laboratory monitoring, and the lack of trained health care providers. Narrow cost-effectiveness arguments that created false dichotomies—prevention or treatment rather than both—too often went unchallenged. The cumulative result of these delays, in the face of health disparities old and new, was millions of premature deaths.

Disparities in access to HIV treatment gave rise to widespread moral indignation and a new type of health activism. In several middle-income countries, including Brazil, public programs have helped bridge the access gap. Other innovative projects pioneered by international NGOs in diverse settings such as Haiti and Rwanda have established that a simple approach to ART that is based on intensive community engagement and support can achieve remarkable results (Fig. 2-2).


FIGURE 2-2   An HIV/TB-co-infected patient in Rwanda before (left) and after (right) 6 months of treatment.

During the past decade, the availability of ART has increased sharply in the low- and middle-income countries that have borne the greatest burden of the HIV/AIDS pandemic. In 2000, very few people living with HIV/AIDS in these nations had access to ART, whereas by 2011, as stated above, 8 million people, a majority of those deemed eligible, in these countries were receiving ART. This scale-up was made possible by a number of developments: a staggering drop in the cost of ART, the development of a standardized approach to treatment, substantial investments by funders, and the political commitment of governments to make ART available. Civil-society AIDS activists spurred many of these efforts.

Starting in the early 2000s, a combination of factors, including work by the Clinton Foundation HIV/AIDS Initiative and Médecins Sans Frontières, led to the availability of generic ART medications. While first-line ART cost more than $10,000 per patient per year in 2000, first-line regimens in low- and middle-income countries are now available for less than $100 per year. At the same time, fixed-dose combination drugs that are easier to administer have become more widely available.

Also around this time, the WHO began advocating a public health approach to the treatment of people with AIDS in resource-limited settings. This approach, derived from models of care pioneered by the NGO Partners In Health and other groups, proposed standard first-line treatment regimens based on a simple five-drug formulary, with a more complex (and more expensive) set of second-line options in reserve. Clinical protocols were standardized, and intensive training packages for health professionals and community health workers were developed and implemented in many countries. These efforts were supported by new funding from the World Bank, the Global Fund, and PEPFAR. In 2003, lack of access to ART was declared a global public health emergency by the WHO and UNAIDS, and those two agencies launched the “3 by 5 initiative,” setting an ambitious target: to have 3 million people in developing countries on treatment by the end of 2005. Worldwide funding for HIV/AIDS treatment increased dramatically during this period, rising from $300 million in 1996 to over $15 billion in 2010.

Many countries set corresponding national targets and have worked to integrate ART into their national AIDS programs and health systems and to harness the synergies between HIV/AIDS treatment and prevention activities. Further lessons with implications for policy and action have come from efforts now under way among lower-income countries. Rwanda provides an example: Over the past decade, mortality from HIV disease has fallen by >78% as the country—despite its relatively low gross national income (Fig. 2-3)—has provided almost universal access to ART. The reasons for this success include strong national leadership, evidence-based policy, cross-sector collaboration, community-based care, and a deliberate focus on a health system approach that embeds HIV/AIDS treatment and prevention in the primary health care service delivery platform. As we will discuss later in this chapter, these principles can be applied to other conditions, including NCDs.


FIGURE 2-3   Antiretroviral therapy (ART) coverage in sub-Saharan Africa, 2009.

TUBERCULOSIS

Chapter 202 provides a concise overview of the pathophysiology and treatment of TB. In 2011, an estimated 12 million people were living with active TB, and 1.4 million died from it. The disease is closely linked to HIV infection in much of the world: of the 8.7 million estimated new cases of TB in 2011, 1.2 million occurred among people living with HIV. Indeed, a substantial proportion of the TB resurgence registered in southern Africa is attributed to HIV co-infection. Even before the advent of HIV, however, it was estimated that fewer than one-half of all cases of TB in developing countries were ever diagnosed, much less treated. Primarily because of the common failure to diagnose and treat TB, international authorities devised a single strategy to reduce the burden of disease. In the early 1990s, the World Bank, the WHO, and other international bodies promoted the DOTS strategy (directly observed therapy using short-course isoniazid- and rifampin-based regimens) as highly cost-effective. Passive case-finding of smear-positive patients was central to the strategy, and an uninterrupted drug supply was, of course, deemed necessary for cure.

DOTS was clearly effective for most uncomplicated cases of drug-susceptible TB, but a number of shortcomings were soon identified. First, the diagnosis of TB based solely on sputum smear microscopy—a method dating from the late nineteenth century—is not sensitive. Many cases of pulmonary TB and all cases of exclusively extrapulmonary TB are missed by smear microscopy, as are most cases of active disease in children. Second, passive case-finding relies on the availability of health care services, which is uneven in the settings where TB is most prevalent. Third, patients with multidrug-resistant TB (MDR-TB) are by definition infected with strains of Mycobacterium tuberculosis resistant to isoniazid and rifampin; thus exclusive reliance on these drugs is unwarranted in settings in which drug resistance is an established problem.

The crisis of antibiotic resistance registered in U.S. hospitals is not confined to the industrialized world or to common bacterial infections. The great majority of patients sick with and dying from TB are afflicted with strains susceptible to all first-line drugs. In some settings, however, a substantial minority of patients with TB are infected with M. tuberculosis strains resistant to at least one first-line anti-TB drug. A 2012 article in a leading journal reported that, in China, 10% of all patients with TB and 26% of all previously treated patients were sick with MDR strains of M. tuberculosis. Most of these cases were the result of primary transmission. To improve DOTS-based responses to MDR-TB, global health authorities adopted DOTS-Plus, which adds the diagnostics and drugs necessary to manage drug-resistant disease. Even as DOTS-Plus was being piloted in resource-constrained settings, however, new strains of extensively drug-resistant (XDR) M. tuberculosis (resistant to isoniazid and rifampin, any fluoroquinolone, and at least one injectable second-line drug) had already threatened the success of TB control programs—for example, in beleaguered South Africa, where high rates of HIV infection have led to a doubling of TB incidence over the last decade. Despite the poor capacity for detection of MDR- and XDR-TB in most resource-limited settings, an estimated 630,000 cases of MDR-TB occurred in 2011, approximately 9% of them caused by XDR strains. It is clear that poor infection control in hospitals and clinics is associated with explosive and lethal epidemics due to these strains and that patients may be infected with multiple strains.

TUBERCULOSIS AND AIDS AS CHRONIC DISEASES: LESSONS LEARNED

Strategies effective against MDR-TB have implications for the management of drug-resistant HIV infection and even drug-resistant malaria, which, through repeated infections and a lack of effective therapy, has become a chronic disease in parts of Africa (see “Malaria,” below). As new therapies, whether for TB or for hepatitis C infection, become available, many of the problems encountered in the past will recur. Indeed, examining AIDS and TB as chronic diseases—instead of simply communicable diseases—makes it possible to draw a number of conclusions, many of them pertinent to global health in general.

First, the chronic infections discussed here are best treated with multidrug regimens to which the infecting strains are susceptible. This is true of chronic infections due to many bacteria, fungi, parasites, or viruses; even acute infections such as those caused by Plasmodium species are not reliably treated with a single drug.

Second, charging fees for AIDS prevention and care poses insurmountable problems for people living in poverty, many of whom are unable to pay even modest amounts for services or medications. Like efforts to battle airborne TB, such services might best be seen as a public good promoting public health. Initially, a subsidy approach will require sustained donor contributions, but many African countries have set targets for increased national investments in health—a pledge that could render ambitious programs sustainable in the long run, as the Rwanda experience suggests. Meanwhile, as local investments increase, the price of AIDS care is decreasing. The development of generic medications means that ART can now cost <$0.25 per day; costs continue to decrease.

Third, the effective scale-up of pilot projects requires strengthening and sometimes rebuilding of health care systems, including those charged with delivering primary care. In the past, the lack of health care infrastructure has been cited as a barrier to providing ART in the world’s poorest regions; however, AIDS resources, which are at last considerable, may be marshaled to rebuild public health systems in sub-Saharan Africa and other HIV-burdened regions—precisely the settings in which TB is resurgent.

Fourth, the lack of trained health care personnel, most notably doctors and nurses, in resource-poor settings must be addressed. This personnel deficiency is invoked as a reason for the failure to treat AIDS in poor countries. In what is termed the brain drain, many physicians and nurses emigrate from their home countries to pursue opportunities abroad, leaving behind health systems that are understaffed and ill equipped to deal with the epidemic diseases that ravage local populations. The WHO recommends a minimum of 20 physicians and 100 nurses per 100,000 persons, but recent reports from that organization and others confirm that many countries, especially in sub-Saharan Africa, fall far short of those target numbers. Specifically, more than one-half of those countries register fewer than 10 physicians per 100,000 population. In contrast, the United States and Cuba register 279 and 596 doctors per 100,000 population, respectively. Similarly, the majority of sub-Saharan African countries do not have even half of the WHO-recommended minimum number of nurses. Further inequalities in health care staffing exist within countries. Rural–urban disparities in health care personnel mirror disparities of both wealth and health. For instance, nearly 90% of Malawi’s population lives in rural areas, but more than 95% of clinical officers work at urban facilities, and 47% of nurses work at tertiary care facilities. Even community health workers trained to provide first-line services to rural populations often transfer to urban districts.
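To make the WHO threshold concrete: a country of 10 million people would need at least 2,000 physicians and 10,000 nurses to meet the recommended minimum of 20 physicians and 100 nurses per 100,000 persons, whereas a country registering fewer than 10 physicians per 100,000 would have fewer than 1,000 physicians for the same population.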

One reason doctors and nurses leave sub-Saharan Africa and other resource-poor areas is that they lack the tools to practice their trade there. Funding for "vertical" (disease-specific) programs can be used not only to strengthen health systems but also to recruit and train physicians and nurses for underserved regions, where they, in turn, can help to train and then work with community health workers in supervising care for patients with AIDS and many other diseases within their communities. Such training should be undertaken even where physicians are abundant, since close community-based supervision represents the highest standard of care for chronic disease, whether in developing or developed countries. The United States has much to learn from Rwanda.

Fifth, the barriers to adequate health care and patient adherence that are raised by extreme poverty can be removed only with the deployment of “wrap-around services”: food supplements for the hungry, help with transportation to clinics, child care, and housing. Extreme poverty makes it difficult for many patients to comply with therapy for chronic diseases, whether communicable or not. Indeed, poverty in its many dimensions is far and away the greatest barrier to the scale-up of treatment and prevention programs. In many rural regions of Africa, hunger is the major coexisting condition in patients with AIDS or TB, and those consumptive diseases cannot be treated effectively without adequate caloric intake.

Finally, there is a need for a renewed basic-science commitment to the discovery and development of vaccines; more reliable, less expensive diagnostic tools; and new classes of therapeutic agents. This need applies not only to the three leading infectious killers—against none of which is there an effective vaccine—but also to most other neglected diseases of poverty.

MALARIA

Chapter 248 reviews the etiology, pathogenesis, and clinical treatment of malaria, the world’s third-ranking infectious killer. Malaria’s human cost is enormous, with the highest toll among children—especially African children—living in poverty. In 2010, there were ~219 million cases of malaria, and the disease is thought to have killed 660,000 people; 86% of these deaths (~568,000) occurred among children <5 years old. The poor disproportionately experience the burden of malaria: more than 80% of estimated malaria deaths occur in just 14 countries, and mortality rates are highest in sub-Saharan Africa. The Democratic Republic of the Congo and Nigeria account for more than 40% of total estimated malaria deaths globally.

Microeconomic analyses focusing on direct and indirect costs estimate that malaria may consume >10% of a household’s annual income. A study in rural Kenya shows that mean direct-cost burdens vary between the wet and dry seasons (7.1% and 5.9% of total household expenditure, respectively) and that this proportion is >10% in the poorest households in both seasons. A Ghanaian study that categorized the population by income group highlighted the regressive nature of this cost: responding to malaria consumed only 1% of a wealthy family’s income but 34% of a poor household’s income.

Macroeconomic analyses estimate that malaria may reduce the per capita gross national product of a disease-endemic country by 50% relative to that of a non-malaria-endemic country. The causes of this drag include impaired cognitive development of children, decreased schooling, decreased savings, decreased foreign investment, and restriction of worker mobility. In light of this enormous cost, it is little wonder that an important review by the economists Sachs and Malaney concludes that “where malaria prospers most, human societies have prospered least.”
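A simple compounding calculation shows how such a gap can accumulate (an illustrative sketch; the ~1.3-percentage-point annual growth penalty attributed to intensive malaria is drawn from this economic literature, and estimates vary by study): an economy growing 1.3 percentage points more slowly per year than an otherwise similar neighbor reaches, after 50 years, only about (1 − 0.013)^50 ≈ 0.52—roughly half—of the relative per capita income it would otherwise have attained, consistent with the 50% figure cited above.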

Rolling Back Malaria   In part because of differences in vector distribution and climate, resource-rich countries offer few blueprints for malaria control and treatment that are applicable in tropical (and resource-poor) settings. In 2001, African heads of state endorsed the WHO Roll Back Malaria (RBM) campaign, which prescribes strategies appropriate for sub-Saharan African countries. In 2008, the RBM partnership launched the Global Malaria Action Plan (GMAP). This strategy integrates prevention and care, calls for the avoidance of single-drug regimens, and emphasizes awareness of existing drug resistance. The GMAP recommends a number of key tools to reduce malaria-related morbidity and mortality rates: the use of insecticide-treated bed nets (ITNs), indoor residual spraying, and artemisinin-based combination therapy (ACT) as well as intermittent preventive treatment during pregnancy, prompt diagnosis, and other vector control measures such as larviciding and environmental management.

INSECTICIDE-TREATED BED NETS    ITNs are an efficacious and cost-effective public health intervention. A meta-analysis of controlled trials in seven sub-Saharan African countries indicates that parasitemia prevalence is reduced by 24% among children <5 years of age who sleep under ITNs compared with that among those who do not. Even untreated nets reduce malaria incidence by one-quarter. On an individual level, the utility of ITNs extends beyond protection from malaria. Several studies suggest that ITNs reduce all-cause mortality among children under age 5 to a greater degree than can be attributed to the reduction in malarial disease alone. Morbidity (specifically that due to anemia), which predisposes children to diarrheal and respiratory illnesses and pregnant women to the delivery of low-birth-weight infants, also is reduced in populations using ITNs. In some areas, ITNs offer a supplemental benefit by preventing transmission of lymphatic filariasis, cutaneous leishmaniasis, Chagas’ disease, and tick-borne relapsing fever. At the community level, investigators suggest that the use of an ITN in just one household may reduce the number of mosquito bites in households up to a hundred meters away by reducing mosquito density. The cost of ITNs per DALY saved—estimated at $29—makes ITNs a good-value public health investment.
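The arithmetic behind such cost-per-DALY figures is simple division, as the brief Python sketch below illustrates. The function name and all input values are hypothetical placeholders chosen to reproduce the ~$29 estimate quoted above; they are not data from any study.

    # Illustrative average cost-effectiveness calculation.
    # All inputs are hypothetical placeholders, not study data; only the
    # ~$29-per-DALY benchmark cited in the text comes from the source.

    def cost_per_daly_averted(total_cost: float, dalys_averted: float) -> float:
        """Dollars spent per disability-adjusted life year averted."""
        return total_cost / dalys_averted

    # Hypothetical campaign: 100,000 ITNs at $5 each (purchase plus delivery),
    # assumed to avert ~17,000 DALYs over the nets' useful life.
    total_cost = 100_000 * 5.0
    print(f"~${cost_per_daly_averted(total_cost, 17_000):.0f} per DALY averted")  # prints ~$29

In practice, such ratios are computed against a counterfactual (the DALYs expected without the intervention) and are sensitive to assumptions about net lifespan, usage rates, and local transmission intensity.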

The WHO recommends that all individuals living in malaria-endemic areas sleep under protective ITNs. About 140 million long-lasting ITNs were distributed in high-burden African countries in 2006–2008, and rates of household ownership of ITNs in high-burden countries increased to 31%. Although the RBM partnership has seen modest success, the WHO’s 2009 World Malaria Report states that the percentage of children <5 years of age using an ITN (24%) remains well below the World Health Assembly’s target of 80%. Limited success in scaling up ITN coverage reflects the inadequately acknowledged economic barriers that prevent the destitute sick from gaining access to critical preventive technologies and the challenges faced in designing and implementing effective delivery platforms for these products. In other words, this is a delivery failure rather than a lack of knowledge of how best to reduce malaria deaths.

INDOOR RESIDUAL SPRAYING    Indoor residual spraying is one of the most common interventions for preventing the transmission of malaria in endemic areas. Vector control using insecticides approved by the WHO, including DDT, can effectively reduce or even interrupt malaria transmission. However, studies have indicated that spraying is effective in controlling malaria transmission only if most (~80%) of the structures in the targeted community are treated. Moreover, because a successful program depends on well-trained spraying teams as well as on effective monitoring and planning, indoor residual spraying is difficult to implement and often depends on health systems with strong infrastructure. Despite these limitations, the WHO recommends indoor residual spraying in combination with ITNs; neither intervention alone is sufficient to prevent malaria transmission entirely.

ARTEMISININ-BASED COMBINATION THERAPY    The emergence and spread of chloroquine resistance have increased the need for antimalarial combination therapy. To limit the spread of resistance, the WHO now recommends that only ACT (as opposed to artemisinin monotherapy) be used for uncomplicated falciparum malaria. Like that of other antimalarial interventions, the use of ACT has increased in the last few years, but coverage rates remain very low in several countries in sub-Saharan Africa. The RBM partnership has invested significantly in measures to enhance access to ACT by facilitating its delivery through the public health sector and developing innovative funding mechanisms (e.g., the Affordable Medicines Facility—malaria) that reduce its cost significantly so that ineffective monotherapies can be eliminated from the market.

In the last several years, resistance to antimalarial medicines and insecticides has become an even larger problem than in the past. In 2009, confirmation of artemisinin resistance was reported. Although the WHO has called for an end to the use of artemisinin monotherapy, the marketing of such therapies continues in many countries. Ongoing use of artemisinin monotherapy increases the likelihood of drug resistance, a deadly prospect that will make malaria far more difficult to treat.

Between 2001 and 2011, global malaria deaths were reduced by an estimated 38%, with reductions of ≥50% in 10 African countries as well as in most endemic countries in other regions. Again the experience in Rwanda is instructive: from 2005 to 2011, malaria deaths dropped by >85% for the same reasons mentioned earlier in recounting that nation’s successes in battling HIV.

Meeting the challenge of malaria control will continue to require careful study of appropriate preventive and therapeutic strategies in the context of an increasingly sophisticated molecular understanding of pathogen, vector, and host. However, an appreciation of the economic and social devastation wrought by malaria—like that inflicted by diarrhea, AIDS, and TB—on the most vulnerable populations should heighten the level of commitment to critical analysis of ways to implement proven strategies for prevention and treatment.

Funding from the Global Fund, the Gates Foundation, the World Bank’s International Development Association, and the U.S. President’s Malaria Initiative, along with leadership from public health authorities, is critical to sustain the benefits of prevention and treatment. Building on the growing momentum of the last decade with adequate financial support, innovative strategies, and effective tools for prevention, diagnosis, and treatment, we may one day achieve the goal of a world free of malaria.

“NONCOMMUNICABLE” CHRONIC DISEASES

Although the burden of communicable diseases—especially HIV infection, TB, and malaria—still accounts for the majority of deaths in resource-poor regions such as sub-Saharan Africa, 63% of all deaths worldwide in 2008 were attributed to NCDs. Although we will use this term to describe cardiovascular diseases, cancers, diabetes, and chronic lung diseases, this usage masks important distinctions. For instance, two significant NCDs in low-income countries, rheumatic heart disease (RHD) and cervical cancer, represent the chronic sequelae of infections with group A Streptococcus and human papillomavirus, respectively. It is in these countries that the burden of disease due to NCDs is rising most rapidly. Close to 80% of deaths attributable to NCDs occur in low- and middle-income countries, where 86% of the global population lives. The WHO reports that ~25% of global NCD-related deaths take place before the age of 60—a figure representing ~5.7 million people and exceeding the total number of deaths due to AIDS, TB, and malaria combined. The WHO reported that, in almost all high-income countries, NCD deaths accounted for ~70% of total deaths in 2008. By 2020, NCDs are projected to account for 80% of the global burden of disease and for 7 of every 10 deaths in developing countries. The recent increase in resources for and attention to communicable diseases is both welcome and long overdue, but developing countries are already carrying a "double burden" of communicable and noncommunicable diseases.

Diabetes, Cardiovascular Disease, and Cancer: A Global Perspective   In contrast to TB, HIV infection, and malaria—diseases caused by single pathogens that damage multiple organs—cardiovascular diseases reflect injury to a single organ system downstream of a variety of insults, both infectious and noninfectious. Some of these insults result from rapid changes in diet and labor conditions.

Other insults are of a less recent vintage. The burden of cardiovascular disease in low-income countries represents one consequence of decades of neglect of health systems. Furthermore, cardiovascular research and investment have long focused on the ischemic conditions that are increasingly common in high- and middle-income countries. Meanwhile, despite awareness of its health impact in the early twentieth century, cardiovascular damage in response to infection and malnutrition had, until recently, fallen out of view.

The misperception of cardiovascular diseases as a problem primarily of elderly populations in middle- and high-income countries has contributed to the neglect of these diseases by global health institutions. Even in Eastern Europe and Central Asia, where the collapse of the Soviet Union was followed by a catastrophic surge in cardiovascular disease deaths (mortality rates from ischemic heart disease nearly doubled between 1991 and 1994 in Russia, for example), the modest flow of overseas development assistance to the health sector focused on the communicable causes that accounted for <1 in 20 excess deaths during that period.

DIABETES    The International Diabetes Federation reports that the number of diabetic patients in the world is expected to increase from 366 million in 2011 to 552 million by 2030. Already, a significant proportion of diabetic patients live in developing countries where, because those affected are far more frequently between ages 40 and 59, the complications of micro- and macrovascular disease take a far greater toll. Globally, these complications are a major cause of disability and reduced quality of life. A high fasting plasma glucose level alone ranks seventh among risks for disability and is the sixth leading risk factor for global mortality. The GBD 2010 estimates that diabetes accounted for 1.28 million deaths in 2010, with almost 80% of those deaths occurring in low- and middle-income countries.

Predictions of an imminent rise in the share of deaths and disabilities due to NCDs in developing countries have led to calls for preventive policies to improve diet, increase exercise, and restrict tobacco use, along with the prescription of multidrug regimens for persons at high vascular risk. Although this agenda could do much to stem a pandemic of NCDs, it will do little to help persons with established heart disease stemming from nonatherogenic pathologies.

CARDIOVASCULAR DISEASE    Because systematic investigation of the causes of stroke and heart failure in sub-Saharan Africa has begun only recently, little is known about the impact of elevated blood pressure in this portion of the continent. Modestly elevated blood pressure in the absence of tobacco use in populations with low rates of obesity may confer little risk of adverse events in the short run. In contrast, persistently elevated blood pressure above 180/110 mmHg goes largely undetected, untreated, and uncontrolled in this part of the world. In the cohort of men assessed in the Framingham Heart Study, the prevalence of blood pressures above 210/120 mmHg—severe hypertension—declined from 1.8% in the 1950s to 0.1% by the 1960s with the introduction of effective antihypertensive agents. Although debate continues about appropriate screening strategies and treatment thresholds, rural health centers staffed largely by nurses must quickly gain access to essential antihypertensive medications.

The epidemiology of heart failure reflects inequalities in risk factor prevalence and in treatment. The reported burden of this condition has remained unchanged since the 1950s, but the causes of heart failure and the age of the people affected vary across the globe. Heart failure as a consequence of pericardial, myocardial, endocardial, or valvular injury accounts for as many as 5% of all medical admissions to hospitals around the world. In high-income countries, coronary artery disease and hypertension among the elderly account for most cases of heart failure. For example, in the United States, coronary artery disease is present in 60% of patients with heart failure and hypertension in 70%. Among the world’s poorest 1 billion people, however, heart failure reflects poverty-driven exposure of children and young adults to rheumatogenic strains of streptococci and cardiotropic microorganisms (e.g., HIV, Trypanosoma cruzi, enteroviruses, M. tuberculosis), untreated high blood pressure, and nutrient deficiencies. The mechanisms underlying other causes of heart failure common in these populations—such as idiopathic dilated cardiomyopathy, peripartum cardiomyopathy, and endomyocardial fibrosis—remain unclear.

In stark contrast to the extraordinary lengths to which clinicians in wealthy countries will go to treat ischemic cardiomyopathy, little attention has been paid to young patients with nonischemic cardiomyopathies in resource-poor settings. Nonischemic cardiomyopathies, such as those due to hypertension, RHD, and chronic lung disease, account for >90% of cases of cardiac failure in sub-Saharan Africa and include poorly understood entities such as peripartum cardiomyopathy (which has an incidence in rural Haiti of 1 per 300 live births) and HIV-associated cardiomyopathy. Multidrug regimens that include beta blockers, angiotensin-converting enzyme inhibitors, and other agents can dramatically reduce mortality risk and improve quality of life for these patients. Lessons learned in the scale-up of chronic care for HIV infection and TB may be illustrative as progress is made in establishing the means to deliver heart-failure therapies.

Some of the lessons learned from the chronic infections discussed above are, of course, relevant to cardiovascular disease, especially those classified as NCDs but caused by infectious pathogens. Integration of prevention and care remains as important today as in 1960 when Paul Dudley White and his colleagues found little evidence of myocardial infarction in the region near the Albert Schweitzer Hospital in Lambaréné, Gabon, but reported that “the high prevalence of mitral stenosis is astonishing…. We believe strongly that it is a duty to help bring to these sufferers the benefits of better penicillin prophylaxis and of cardiac surgery when indicated. The same responsibility exists for those with correctable congenital cardiovascular defects.”

RHD affects more than 15 million people worldwide, with more than 470,000 new cases each year. Of the estimated 2.4 million children affected by RHD, ~42% live in sub-Saharan Africa. This disease, which may cause endocarditis or stroke, leads to more than 345,000 deaths per year—almost all occurring in developing countries. Researchers in Ethiopia have reported annual death rates as high as 12.5% in rural areas. In part because the prevention of RHD has not advanced since the disease's disappearance in wealthy countries, no part of sub-Saharan Africa has eliminated RHD despite examples of success in Costa Rica, Cuba, and some Caribbean nations. A survey of acute heart failure among adults in sub-Saharan Africa showed that ~14.3% of these cases were due to RHD.

Strategies to eliminate rheumatic heart disease may depend on active case-finding, with confirmation by echocardiography, among high-risk groups as well as on efforts to expand access to surgical interventions among children with advanced valvular damage. Partnerships between established surgical programs and areas with limited or nonexistent facilities may help expand the capacity to provide life-saving interventions to patients who otherwise would die early and painfully. A long-term goal is the establishment of regional centers of excellence equipped to provide consistent, accessible, high-quality services.

Clinicians from tertiary care centers in sub-Saharan Africa and elsewhere have continued to call for prevention and treatment of the cardiovascular conditions of the poor. The reconstruction of health services in response to pandemic infectious disease offers an opportunity to identify and treat patients with organ damage and to undertake the prevention of cardiovascular and other chronic conditions of poverty.

CANCER    Cancers account for ~5% of the global burden of disease. Low- and middle-income countries accounted for more than two-thirds of the 12.6 million cases and 7.6 million deaths due to cancer in 2008. By 2030, annual mortality from cancer is projected to increase by 4 million—with developing countries experiencing a sharper increase than developed nations. "Western" lifestyle changes will be responsible for the increased incidence of cancers of the breast, colon, and prostate among populations in low- and middle-income countries, but historic realities, sociocultural and behavioral factors, genetics, and poverty itself also will have a profound impact on cancer-related mortality and morbidity rates. At least 2 million cancer cases per year—18% of the global cancer burden—are attributable to infectious causes, which are responsible for <10% of cancers in developed countries but account for up to 20% of all malignancies in low- and middle-income countries. Infectious causes of cancer such as human papillomavirus, hepatitis B virus, and Helicobacter pylori will continue to have a much larger impact in developing countries. Environmental and dietary factors, such as indoor air pollution and high-salt diets, also contribute to increased rates of certain cancers (e.g., lung and gastric cancers). Tobacco use (both smoking and chewing) is the most important source of increased mortality rates from lung and oral cancers. In contrast to decreasing tobacco use in many developed countries, the number of smokers is growing in developing countries, especially among women and young persons.

For many reasons, outcomes of malignancies are far worse in developing countries than in developed nations. As currently funded, overstretched health systems in poor countries are not capable of early detection; the majority of patients already have incurable malignancies at diagnosis. Treatment of cancers is available for only a very small number of mostly wealthy citizens in the majority of poor countries, and, even when treatment is available, the range and quality of services are often substandard. Yet this need not be the future. Only a decade ago, MDR-TB and HIV infection were considered untreatable in settings of great poverty. The feasibility of creating innovative programs that reduce technical and financial barriers to the provision of care for treatable malignancies among the world’s poorest populations is now clear (Fig. 2-4). Several middle-income countries, including Mexico, have expanded publicly funded cancer care to reach poorer populations. This commitment of resources has dramatically improved outcomes for cancers, from childhood leukemia to cervical cancer.


FIGURE 2-4   An 11-year-old Rwandan patient with embryonal rhabdomyosarcoma before (left) and after (right) 48 weeks of chemotherapy plus surgery. Five years later, she is healthy with no evidence of disease.

Prevention of Noncommunicable Diseases   False debates, including those pitting prevention against care, continue in global health and reflect, in part, outmoded paradigms or a partial understanding of disease burden and etiology as well as the dramatic variations in risk within a single nation. Moreover, debates are sometimes politicized as a result of vested interests. For example, in 2004, the WHO released its Global Strategy on Diet, Physical Activity, and Health, which focused on the population-wide promotion of healthy diet and regular physical activity in an effort to reduce the growing global problem of obesity. Passing this strategy at the World Health Assembly proved difficult because of strong opposition from the food industry and from a number of WHO member states, including the United States. Although globalization has had many positive effects, one negative effect has been the growth in both developed and developing countries of well-financed lobbies that have aggressively promoted unhealthy dietary changes and increased consumption of alcohol and tobacco. Foreign direct investment in tobacco, beverage, and food products in developing countries reached $90.3 billion in 2010—a figure nearly 490 times greater than the $185 million spent during that year to address NCDs by bilateral funding agencies, the WHO, the World Bank, and all other sources of development assistance for health combined. Investment in curbing NCDs remains disproportionately low despite the WHO’s 2008–2013 Action Plan for the Global Strategy for the Prevention and Control of Noncommunicable Diseases.

The WHO estimates that 80% of all cases of cardiovascular disease and type 2 diabetes as well as 40% of all cancers can be prevented through healthier diets, increased physical activity, and avoidance of tobacco. These estimates mask large local variations. Although some evidence indicates that population-based measures can have some impact on these behaviors, it is sobering to note that increasing obesity levels have not been reversed in any population. Tobacco avoidance may be the most important and most difficult behavioral modification of all. In the twentieth century, 100 million people worldwide died of tobacco-related diseases; it is projected that more than 1 billion people will die of these diseases in the twenty-first century, with the vast majority of those deaths in developing countries. The WHO’s 2003 Framework Convention on Tobacco Control represented a major advance, committing all of its signatories to a set of policy measures shown to reduce tobacco consumption. Today, ~80% of the world’s 1 billion smokers live in low- and middle-income countries. If trends continue, tobacco-related deaths will increase to 8 million per year by 2030, with 80% of those deaths in low- and middle-income countries.

MENTAL HEALTH

The WHO reports that some 450 million people worldwide are affected by mental, neurologic, or behavioral problems at any given time and that ~877,000 people die by suicide every year. Major depression is the leading cause of years lost to disability in the world today. One in four patients visiting a health service has at least one mental, neurologic, or behavioral disorder, but most of these disorders are neither diagnosed nor treated. Most low- and middle-income countries devote <1% of their health expenditures to mental health.

Increasingly effective therapies exist for many of the major causes of mental disorders. Effective treatments for many neurologic diseases, including seizure disorders, have long been available. One of the greatest barriers to delivery of such therapies is the paucity of skilled personnel. Most sub-Saharan African countries have only a handful of psychiatrists, for example; most of them practice in cities and are unavailable within the public sector or to patients living in poverty.

Among the few patients who are fortunate enough to see a psychiatrist or neurologist, fewer still are able to adhere to treatment regimens: several surveys of already diagnosed patients ostensibly receiving daily therapy have revealed that, among the poor, multiple barriers prevent patients from taking their medications as prescribed. In one study from Kenya, no patients being seen in an epilepsy clinic had therapeutic blood levels of anti-seizure medications, even though all had had these drugs prescribed. Moreover, many patients had no detectable blood levels of these agents. The same barriers that prevent the poor from having reliable access to insulin or ART prevent them from benefiting from antidepressant, antipsychotic, and antiepileptic agents. To alleviate this problem, some authorities are proposing the training of health workers to provide community-based adherence support, counseling services, and referrals for patients in need of mental health services. One such program instituted in Goa, India, used “lay” counselors and resulted in a significant reduction in symptoms of common mental disorders among the target population.

World Mental Health: Problems and Priorities in Low-Income Countries still offers a comprehensive analysis of the burden of mental, behavioral, and social problems in low-income countries and relates the mental health consequences of social forces such as violence, dislocation, poverty, and disenfranchisement of women to current economic, political, and environmental concerns. In the years since this report was published, however, a number of pilot projects designed to deliver community-based care to patients with chronic mental illness have been launched in settings as diverse as Goa, India; Banda Aceh, Indonesia; rural China; post-earthquake Haiti; and Fiji. Some of these programs have been school-based and have sought to link prevention to care.

CONCLUSION: TOWARD A SCIENCE OF IMPLEMENTATION

Public health strategies draw largely on quantitative methods—epidemiology, biostatistics, and economics. Clinical practice, including the practice of internal medicine, draws on a rapidly expanding knowledge base but remains focused on individual patient care; clinical interventions are rarely population-based. But global health equity depends on avoiding the false debates of the past: neither public health nor clinical approaches alone are adequate to address the problems of global health. There is a long way to go before evidence-based internal medicine is applied effectively among the world’s poor. Complex infectious diseases such as HIV/AIDS and TB have proved difficult but not impossible to manage; drug resistance and lack of effective health systems have complicated such work. Beyond what are usually termed “communicable diseases”—i.e., in the arena of chronic diseases such as cardiovascular disease and mental illness—global health is a nascent endeavor. Efforts to address any one of these problems in settings of great scarcity need to be integrated into broader efforts to strengthen failing health systems and alleviate the growing personnel crisis within these systems.

Such efforts must include the building of “platforms” for care delivery that are robust enough to incorporate new preventive, diagnostic, and therapeutic technologies rapidly in response to changes both in the burden of disease and in the needs not met by dominant paradigms and systems of health delivery. Academic medical centers have tried to address this “know–do” gap as new technologies are introduced and assessed through clinical trials, but the reach of these institutions into settings of poverty is limited in rich and poor countries alike. When such centers link their capacities effectively to the public institutions charged with the delivery of health care to the poor, great progress can be made.

For these reasons, scholarly work and practice in the field once known as “international health” and now often designated “global health equity” are changing rapidly. That work is still informed by the tension between clinical practice and population-based interventions, between analysis and action, and between prevention and care. Once metrics are refined, how might they inform efforts to lessen premature morbidity and mortality rates among the world’s poor? As in the nineteenth century, human rights perspectives have proved helpful in turning attention to the problems of the destitute sick; such perspectives may also inform strategies for delivering care equitably.

A number of university hospitals are developing training programs for physicians with an interest in global health. In medical schools across the United States and in other wealthy countries, interest in global health has exploded. One study has shown that more than 25% of medical students take part in at least one global health experience prior to graduation. Half a century or even a decade ago, such high levels of interest would have been unimaginable.

An estimated 12 million people die each year simply because they live in poverty. An absolute majority of these premature deaths occur in Africa, with the poorer regions of Asia not far behind. Most of these deaths occur because the world’s poorest do not have access to the fruits of science. They include deaths from vaccine-preventable illness, deaths during childbirth, deaths from infectious diseases that might be cured with access to antibiotics and other essential medicines, deaths from malaria that would have been prevented by bed nets and access to therapy, and deaths from waterborne illnesses. Other excess mortality is attributable to the inadequacy of efforts to develop new preventive, diagnostic, and therapeutic tools. Those funding the discovery and development of new tools typically neglect the concurrent need for strategies to make them available to the poor. Indeed, some would argue that the biggest challenge facing those who seek to address this outcome gap is the lack of practical means of distribution in the most heavily affected regions.

The development of tools must be followed quickly by their equitable distribution. When new preventive and therapeutic tools are developed without concurrent attention to delivery or implementation, one encounters what are sometimes termed perverse effects: even as new tools are developed, inequalities of outcome—lower morbidity and mortality rates among those who can afford access, with sustained high morbidity and mortality among those who cannot—will grow in the absence of an equity plan to deliver the tools to those most at risk. Preventing such a future is the most important goal of global health.


3

Decision-Making in Clinical Medicine

Daniel B. Mark, John B. Wong


 

INTRODUCTION

To a medical student who requires hours to collect a patient’s history, perform a physical examination, and organize that information into a coherent presentation, an experienced clinician’s ability to decide on a diagnosis and management plan in minutes may seem extraordinary. What separates the master clinician from the novice is an elusive quality called “expertise.” The first part of this chapter provides an overview of our current understanding of expertise in clinical reasoning: what it is and how it can be developed.

The proper use of diagnostic tests and the integration of the results into the patient’s clinical assessment may be equally bewildering to students. Hoping to hit the unknown diagnostic target, novice medical practitioners typically apply a “shotgun” approach to testing. The expert, in contrast, usually directs her testing strategy toward specific diagnostic hypotheses. The second part of the chapter reviews basic statistical concepts useful for interpreting diagnostic tests and quantitative tools useful for clinical decision-making.

Evidence-based medicine (EBM) constitutes the integration of the best available research evidence with clinical judgment as applied to the care of individual patients. The third part of the chapter provides an overview of the tools of EBM.

BRIEF INTRODUCTION TO CLINICAL REASONING

Clinical Expertise   Defining “clinical expertise” remains surprisingly difficult. Chess has an objective ranking system based on skill and performance criteria. Athletics, similarly, have ranking systems to distinguish novices from Olympians. But in medicine, after physicians complete training and pass the boards, no further tests or benchmarks identify those who have attained the highest levels of clinical performance. Of course, physicians often consult a few “elite” clinicians for their “special problem-solving prowess” when particularly difficult or obscure cases have baffled everyone else. Yet despite their skill, even master clinicians typically cannot explain their exact processes and methods, thereby limiting the acquisition and dissemination of the expertise used to achieve their impressive results. Furthermore, clinical virtuosity appears not to be generalizable, e.g., an expert on hypertrophic cardiomyopathy may be no better (and possibly worse) than a first-year medical resident at diagnosing and managing a patient with neutropenia, fever, and hypotension.

Broadly construed, clinical expertise includes not only cognitive dimensions and the integration of verbal and visual cues or information but also complex fine-motor skills necessary for invasive and noninvasive procedures and tests. In addition, “the complete package” of expertise in medicine includes the ability to communicate effectively with patients and work well with members of the medical team. Research on medical expertise remains relatively sparse overall, with most of the work focused on diagnostic reasoning, and much less work focused on treatment decisions or the technical skills involved in the performance of procedures. Thus, in this chapter, we focus primarily on the cognitive elements of clinical reasoning.

Because clinical reasoning takes place in the heads of doctors, it is not readily observable and is therefore difficult to study. One method of research on reasoning asks doctors to “think out loud” as they receive increments of clinical information in a manner meant to simulate a clinical encounter. Another research approach has focused on how doctors should reason diagnostically rather than on how they actually do reason. Much of what is known about clinical reasoning comes from empirical studies of nonmedical problem-solving behavior. Because of the diverse perspectives contributing to this area, with important contributions from cognitive psychology, sociology, medical education, economics, informatics, and decision sciences, no single integrated model of clinical reasoning exists, and not infrequently, different terms and models describe similar phenomena.

Intuitive versus Analytic Reasoning   A contemporary model of reasoning, dual-process theory, distinguishes two general systems of cognitive processes. Intuition (System 1) provides rapid, effortless judgments from memorized associations using pattern recognition and other simplifying “rules of thumb” (i.e., heuristics). For example, a very simple pattern that could be useful in certain situations is “African-American women plus hilar adenopathy equals sarcoid.” Because recalling the pattern involves no effort, the clinician typically cannot say how those judgments were formulated. In contrast, analysis (System 2), the other form of reasoning in the dual-process model, is slow, methodical, deliberative, and effortful. These are, of course, idealized extremes of the cognitive continuum. How these systems interact in different decision problems, how experts use them differently from novices, and when their use can lead to errors in judgment remain the subject of considerable study and debate.

Pattern recognition is a complex cognitive process that appears largely effortless. One can recognize people’s faces, the breed of a dog, or an automobile model without necessarily being able to say what specific features prompted the recognition. Analogously, experienced clinicians often recognize familiar diagnostic patterns quickly. In the absence of an extensive stored repertoire of diagnostic patterns, students (as well as more experienced clinicians operating outside their area of expertise) often use the more laborious System 2 analytic approach along with more intensive and comprehensive data collection to reach the diagnosis.

The following three brief scenarios of a patient with hemoptysis demonstrate three distinct patterns:

• A 46-year-old man presents to his internist with a chief complaint of hemoptysis. An otherwise healthy nonsmoker, he is recovering from an apparent viral bronchitis. This presentation pattern suggests that the small amount of blood-streaked sputum is due to acute bronchitis, so that a chest x-ray provides sufficient reassurance that a more serious disorder is absent.

• In the second scenario, a 46-year-old patient who has the same chief complaint but with a 100-pack-year smoking history, a productive morning cough, and episodes of blood-streaked sputum fits the pattern of carcinoma of the lung. Consequently, along with the chest x-ray, the physician obtains a sputum cytology examination and refers this patient for a chest computed tomography (CT) scan.

• In the third scenario, a 46-year-old patient with hemoptysis who immigrated from a developing country has an echocardiogram as well, because the physician hears a soft diastolic rumbling murmur at the apex on cardiac auscultation, suggesting rheumatic mitral stenosis and possibly pulmonary hypertension.

Although rapid, pattern recognition used without sufficient reflection can result in premature closure: mistakenly concluding that one already knows the correct diagnosis and therefore failing to complete the data collection that would demonstrate the lack of fit of the initial pattern selected. For example, a 45-year-old man presents with a 3-week history of a “flulike” upper respiratory infection (URI) including symptoms of dyspnea and a productive cough. On the basis of the presenting complaints, the clinician uses a “URI assessment form” to improve the quality and efficiency of care by standardizing the information gathered. After quickly acquiring the requisite structured examination components and noting in particular the absence of fever and a clear chest examination, the physician prescribes medication for acute bronchitis and sends the patient home with the reassurance that his illness is not serious. Following a sleepless night with significant dyspnea, the patient develops nausea and vomiting and collapses. He presents to the emergency department in cardiac arrest and cannot be resuscitated. His autopsy shows a posterior wall myocardial infarction and a fresh thrombus in an atherosclerotic right coronary artery. What went wrong? The clinician had decided, based on the patient’s appearance, even before starting the history, that the patient’s complaints were not serious. Therefore, he felt confident that he could perform an abbreviated and focused examination by using the URI assessment protocol rather than considering the broader range of possibilities and performing appropriate tests to confirm or refute his initial hypotheses. In particular, by concentrating on the URI, the clinician failed to elicit the full dyspnea history, which would have suggested a far more serious disorder, and he neglected to search for other symptoms that could have directed him to the correct diagnosis.

Heuristics, also referred to as cognitive shortcuts or rules of thumb, are simplifying decision strategies that ignore part of the data available so as to provide an efficient path to the desired judgment. They are generally part of the intuitive system tools. Two major research programs have come to different conclusions about the value of heuristics in clinical judgment. The “heuristics and biases” program focused on understanding how heuristics in problem solving could be biased by testing the numerical intuition of psychology undergraduates against the rules of statistics. In contrast, the “fast and frugal heuristics” research program explored how and when decision makers’ reliance on simple heuristics can produce good decisions. Although many heuristics have relevance to clinical reasoning, only four will be mentioned here.

When assessing a particular patient, clinicians often weigh the similarity of that patient’s symptoms, signs, and risk factors against those of their mental representations of the diagnostic hypotheses being considered. In other words, among the diagnostic possibilities, clinicians identify the diagnosis for which the patient appears to be a representative example. Analogous to pattern recognition, this cognitive shortcut is called the representativeness heuristic. However, physicians using the representativeness heuristic can reach erroneous conclusions if they fail to consider the underlying prevalence (i.e., the prior, or pretest, probabilities) of the two competing diagnoses that could explain the patient’s symptoms. Consider a patient with hypertension and headache, palpitations, and diaphoresis. Inexperienced clinicians, applying the representativeness heuristic to this classic symptom triad, might judge pheochromocytoma to be quite likely. Doing so would be incorrect given that other causes of hypertension are much more common than pheochromocytoma and that this triad of symptoms can occur in patients who do not have pheochromocytoma. Less experience with a particular diagnosis and with the breadth of presentations (e.g., diseases that affect multiple organ systems such as sarcoid) may also lead to errors.
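The arithmetic behind this error is easy to demonstrate. The following minimal Python sketch applies Bayes’ rule with purely illustrative numbers: a triad frequency of 90% in patients with the tumor, a triad frequency of 10% in hypertensive patients without it, and a pheochromocytoma prevalence of 0.2% among hypertensive patients. None of these figures comes from this chapter; they are assumptions chosen to make the point.

```python
# Illustrative sketch of base-rate neglect; all numbers are assumed.
def posttest_probability(pretest, sensitivity, specificity):
    """Probability of disease after a positive finding (Bayes' rule)."""
    true_pos = pretest * sensitivity
    false_pos = (1 - pretest) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Classic triad present: how likely is pheochromocytoma?
p = posttest_probability(pretest=0.002, sensitivity=0.90, specificity=0.90)
print(f"Probability of pheochromocytoma given the triad: {p:.1%}")  # ~1.8%
```

Even with a highly “representative” presentation, the rarity of the disease keeps the posttest probability below 2% under these assumptions.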

A second commonly used cognitive shortcut, the availability heuristic, involves judgments based on how easily prior similar cases or outcomes can be brought to mind. For example, an experienced clinician may recall 20 elderly patients seen over the last few years who presented with painless dyspnea of acute onset and were found to have acute myocardial infarction (MI). A novice clinician may spend valuable time seeking a pulmonary cause for the symptoms before considering and then confirming the cardiac diagnosis. In this situation, the patient’s clinical pattern does not fit the most common pattern of acute MI, but experience with this atypical presentation, along with the ability to recall it, directs the physician to the diagnosis.

Errors with the availability heuristic arise from several sources of recall bias. Rare catastrophes are likely to be remembered with a clarity and force disproportionate to their actual likelihood, distorting future diagnoses—for example, a patient with a sore throat eventually found to have leukemia or a young athlete with leg pain eventually found to have a sarcoma—and cases publicized in the media or encountered recently are, of course, easier to recall and therefore more influential on clinical judgments.

The third commonly used cognitive shortcut, the anchoring heuristic (also called conservatism or stickiness), involves estimating a probability of disease (the anchor) and then insufficiently adjusting that probability up or down (compared with Bayes’ rule) when interpreting new data about the patient, i.e., sticking to the initial diagnosis. For example, a clinician may still judge the probability of coronary artery disease (CAD) to be high after a negative exercise thallium test and proceed to cardiac catheterization (see “Measures of Disease Probability and Bayes’ Rule,” below).
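To see how large the Bayes-consistent adjustment should be, the minimal sketch below assumes, for illustration only, a sensitivity and specificity of 0.90 for the perfusion test: a negative result should lower a 90% pretest probability of CAD to roughly 50%, an adjustment the anchored clinician fails to make.

```python
# Sketch of the downward revision required after a negative test result
# (test sensitivity and specificity of 0.90 are assumed for illustration).
def posttest_after_negative(pretest, sens, spec):
    false_neg = pretest * (1 - sens)    # diseased patients who test negative
    true_neg = (1 - pretest) * spec     # nondiseased patients who test negative
    return false_neg / (false_neg + true_neg)

print(f"{posttest_after_negative(0.90, 0.90, 0.90):.2f}")  # pretest 0.90 -> posttest 0.50
```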

The fourth heuristic states that clinicians should use the simplest explanation possible that will account adequately for the patient’s symptoms or findings (Occam’s razor or, alternatively, the simplicity heuristic). Although this is an attractive and often used principle, it is important to remember that no biologic basis for it exists. Errors from the simplicity heuristic include premature closure leading to the neglect of unexplained significant symptoms or findings.

Even experienced physicians use analytic reasoning processes (System 2) when the problem they face is recognized to be complex or to involve important unfamiliar elements or features. In such situations, clinicians proceed much more methodically in what has been referred to as the hypothetico-deductive model of reasoning. From the outset, expert clinicians working analytically generate, refine, and discard diagnostic hypotheses. The hypotheses drive questions asked during history taking and may change based on the working hypotheses of the moment. Even the physical examination is focused by the working hypotheses. Is the spleen enlarged? How big is the liver? Is it tender? Are there any palpable masses or nodules? Each question must be answered (with the exclusion of all other inputs) before the examiner can move on to the next specific question. Each diagnostic hypothesis provides testable predictions and sets a context for the next question or step to follow. For example, if the enlarged and quite tender liver felt on physical examination is due to acute hepatitis (the hypothesis), certain specific liver function tests should be markedly elevated (the prediction). If the tests come back normal, the hypothesis may have to be discarded or substantially modified.

Negative findings often are neglected but are as important as positive ones because they often reduce the likelihood of the diagnostic hypotheses under consideration. Chest discomfort that is not provoked or worsened by exertion in an active patient reduces the likelihood that chronic ischemic heart disease is the underlying cause. The absence of a resting tachycardia and thyroid gland enlargement reduces the likelihood of hyperthyroidism in a patient with paroxysmal atrial fibrillation.

The acuity of a patient’s illness may override considerations of prevalence and the other issues described above. “Diagnostic imperatives” recognize the significance of relatively rare but potentially catastrophic diagnoses if undiagnosed and untreated. For example, clinicians are taught to consider aortic dissection routinely as a possible cause of acute severe chest discomfort. Although the typical history of dissection differs from that of MI, dissection is far less prevalent, and diagnosing it remains challenging unless it is explicitly and routinely considered as a diagnostic imperative (Chap. 301). If the clinician fails to elicit any of the characteristic features of dissection by history and finds equivalent blood pressures in both arms and no pulse deficits, he may feel comfortable discarding the aortic dissection hypothesis. If, however, the chest x-ray shows a possible widened mediastinum, the hypothesis may be reinstated and an appropriate imaging test ordered (e.g., thoracic CT scan, transesophageal echocardiogram) to evaluate more fully. In nonacute situations, the prevalence of potential alternative diagnoses should play a much more prominent role in diagnostic hypothesis generation.

Cognitive scientists studying the thought processes of expert clinicians have observed that clinicians group data into packets, or “chunks,” that are stored in short-term or “working memory” and manipulated to generate diagnostic hypotheses. Because short-term memory can typically retain only 5–9 items at a time, the number of packets that can be actively integrated into hypothesis-generating activities is similarly limited. For this reason, the cognitive shortcuts discussed above play a key role in the generation of diagnostic hypotheses, many of which are discarded as rapidly as they are formed (thereby demonstrating that the distinction between analytic and intuitive reasoning is an arbitrary and simplistic, but nonetheless useful, representation of cognition).

Research into the hypothetico-deductive model of reasoning has had surprising difficulty identifying the elements of the reasoning process that distinguish experts from novices. This has led to a shift from examining the problem-solving process of experts to analyzing the organization of their knowledge. For example, diagnosis may be based on the resemblance of a new case to prior individual instances (exemplars). Experts have a much larger store of memorized cases, for example, visual long-term memory in radiology. However, clinicians do not simply rely on literal recall of specific cases but have constructed elaborate conceptual networks of memorized information or models of disease to aid in arriving at their conclusions. That is, expertise involves an increased ability to connect symptoms, signs, and risk factors to one another in meaningful ways; relate those findings to possible diagnoses; and identify the additional information necessary to confirm the diagnosis.

No single theory accounts for all the key features of expertise in medical diagnosis. Experts have more knowledge about more things and a larger repertoire of cognitive tools to employ in problem solving than do novices. One definition of expertise highlights the ability to make powerful distinctions. In this sense, expertise involves a working knowledge of the diagnostic possibilities and what features distinguish one disease from another. Memorization alone is insufficient. Memorizing a medical textbook would not make one an expert. But having access to detailed and specific relevant information is critically important. Clinicians of the past primarily accessed their own remembered experience. Clinicians of the future will be able to access the experience of large numbers of clinicians using electronic tools, but, as with the memorized textbook, the data alone will not create an instant expert. The expert adds these data to an extensive internalized database of knowledge and experience not available to the novice (and nonexpert).

Despite all the work that has been done to understand expertise, in medicine and other disciplines, it remains uncertain whether there is any didactic program that can accelerate the progression from novice to expert or from experienced clinician to master clinician. Deliberate effortful practice (over an extended period of time, sometimes said to be 10 years or 10,000 practice hours) and personal coaching are two strategies that are often used outside medicine (e.g., music, athletics, chess) to promote expertise. Their use in developing medical expertise and maintaining or enhancing it has not yet been adequately explored.

DIAGNOSTIC VERSUS THERAPEUTIC DECISION MAKING

The modern ideal of medical therapeutic decision making is to “personalize” the recommendation. In the abstract, personalizing treatment involves combining the best available evidence about what works with an individual patient’s unique features (e.g., risk factors) and his or her preferences and health goals to craft an optimal treatment recommendation with the patient. Operationally, two different and complementary levels of personalization are possible: individualizing the evidence for the specific patient based on relevant clinical and other characteristics, and personalizing the patient interaction by incorporating the patient’s values (often referred to as shared decision-making), which is critically important but falls outside the scope of this chapter.

Individualizing the evidence about therapy does not mean relying on physician impressions of what works based on personal experience. Because of small sample sizes and rare events, the chance of drawing erroneous causal inferences from one’s own clinical experience is very high. For most chronic diseases, therapeutic effectiveness is only demonstrable statistically in patient populations. It would be incorrect to infer with any certainty, for example, that treating a hypertensive patient with angiotensin-converting enzyme (ACE) inhibitors necessarily prevented a stroke from occurring during treatment, or that an untreated patient would definitely have avoided a stroke had he or she been treated. For many chronic diseases, a majority of patients will remain event free regardless of treatment choices; some will have events regardless of which treatment is selected; and those who avoided having an event through treatment cannot be individually identified. Blood pressure lowering, a readily observable surrogate endpoint, does not have a tightly coupled relationship with strokes prevented. Consequently, demonstrating therapeutic effectiveness cannot rely simply on observing the outcome of an individual patient but should instead be based on large groups of patients carefully studied and properly analyzed.

Therapeutic decision making, therefore, should be based on the best available evidence from clinical trials and well-done outcome studies. Authoritative, well-done clinical practice guidelines that synthesize such evidence offer readily available, reliable, and trustworthy information relevant to many treatment decisions clinicians face. However, all guidelines recognize that their “one size fits all” recommendations may not apply to individual patients. Increased attention is now being paid to understanding how best to adjust group-level clinical evidence of treatment harms and benefits to account for the absolute level of risk faced by subgroups and even individual patients, using, for example, validated clinical risk scores.
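A short sketch illustrates why the absolute level of risk matters. Assuming, hypothetically, a therapy with a 25% relative risk reduction, the absolute benefit, and hence the number needed to treat (NNT), differs more than tenfold across baseline-risk subgroups; none of the numbers below is drawn from a real trial.

```python
# Hypothetical example: the same relative risk reduction yields very
# different absolute benefits depending on a patient's baseline risk.
relative_risk_reduction = 0.25

for baseline_risk in (0.02, 0.10, 0.30):           # assumed 5-year event risks
    arr = baseline_risk * relative_risk_reduction  # absolute risk reduction
    nnt = 1 / arr                                  # number needed to treat
    print(f"baseline risk {baseline_risk:.0%}: ARR {arr:.1%}, NNT {nnt:.0f}")
```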

NONCLINICAL INFLUENCES ON CLINICAL DECISION-MAKING

More than a decade of research on variations in clinician practice patterns has shed much light on the forces that shape clinical decisions. These factors can be grouped conceptually into three overlapping categories: (1) factors related to physicians’ personal characteristics and practice style, (2) factors related to the practice setting, and (3) factors related to economic incentives.

Factors Related to Practice Style   Physicians fulfill a key role in medical care by serving as the patient’s agent, ensuring that necessary care is provided at a high level of quality. Factors that influence performance in this role include the physician’s knowledge, training, and experience. Clearly, physicians cannot practice EBM (described later in the chapter) if they are unfamiliar with the evidence. As would be expected, specialists generally know the evidence in their field better than do generalists. Beyond published evidence and practice guidelines, a major set of influences on physician practice can be subsumed under the general concept of “practice style.” The practice style serves to define norms of clinical behavior. Beliefs about effectiveness of different therapies and preferred patterns of diagnostic test use are examples of different facets of a practice style. The physician beliefs that drive these different practice styles may be based on personal experience, recollection, and interpretation of the available medical evidence. For example, heart failure specialists are much more likely than generalists to achieve target doses of ACE inhibitor therapy in their heart failure patients because they are more familiar with what the targets are (as defined by large clinical trials), have more familiarity with the specific drugs (including adverse effects), and are less likely to overreact to foreseeable problems in therapy such as a rise in creatinine levels or asymptomatic hypotension.

Beyond the patient’s welfare, physician perceptions about the risk of a malpractice suit resulting from either an erroneous decision or a bad outcome may drive clinical decisions and create a practice referred to as defensive medicine. This practice involves using tests and therapies with very small marginal benefit, ostensibly to preclude future criticism should an adverse outcome occur. Over time, however, such patterns of care may become accepted as part of the practice norm without any conscious awareness of their connection to the risk of litigation, thereby perpetuating their overuse (e.g., annual cardiac exercise testing in asymptomatic patients).

Practice Setting Factors   Factors in this category relate to the physical resources available to the physician’s practice and the practice environment. Physician-induced demand is a term that refers to the repeated observation that once medical facilities and technologies are made available to physicians, they will use them. Other environmental factors that can influence decision-making include the local availability of specialists for consultations and procedures; “high-tech” advanced imaging or procedure facilities such as MRI machines and proton beam therapy centers; and fragmentation of care.

Economic Incentives   Economic incentives are closely related to the other two categories of practice-modifying factors. Financial issues can exert both stimulatory and inhibitory influences on clinical practice. In general, physicians are paid on a fee-for-service, capitation, or salary basis. In fee-for-service, physicians who do more get paid more, thereby encouraging overuse, consciously or unconsciously. When fees are reduced (discounted reimbursement), doctors tend to increase the number of services provided to maintain revenue. Capitation, in contrast, provides a fixed payment per patient per year, encouraging physicians to consider a global population budget in managing individual patients and, ideally, to reduce the use of interventions with small marginal benefit. In contrast to inexpensive preventive services, however, this type of incentive is more likely to affect expensive interventions. To discourage volume-based excessive utilization, fixed salary compensation plans pay physicians the same regardless of the clinical effort expended, but may provide an incentive to see fewer patients.

INTERPRETATION OF DIAGNOSTIC TESTS IN THE CONTEXT OF DECISION-MAKING

Despite the great technological advances in medicine over the last century, uncertainty remains a key challenge in all aspects of medical decision-making. Compounding this challenge is the massive information overload that characterizes modern medicine. Today’s clinician needs access to close to 2 million pieces of information to practice medicine. According to one estimate, doctors subscribe to an average of seven journals, representing over 2500 new articles each year. Of course, to be useful, this information must be sifted for applicability to and then integrated with patient-specific data. Although computers appear to offer an obvious solution both for information management and for quantification of medical care uncertainties, many practical problems must be solved before computerized decision support can be routinely incorporated into the clinical reasoning process in a way that demonstrably improves the quality of care. For the present, understanding the nature of diagnostic test information can help clinicians become more efficient users of such data. The next section reviews important concepts related to diagnostic testing.

DIAGNOSTIC TESTING: MEASURES OF TEST ACCURACY

The purpose of performing a test on a patient is to reduce uncertainty about the patient’s diagnosis or prognosis in order to facilitate optimal management. Although diagnostic tests commonly are thought of as laboratory tests (e.g., blood count) or procedures (e.g., colonoscopy or bronchoscopy), any technology that changes a physician’s understanding of the patient’s problem qualifies as a diagnostic test. Thus, even the history and physical examination can be considered a form of diagnostic test. In clinical medicine, it is common to reduce the results of a test to a dichotomous outcome, such as positive or negative, normal or abnormal. Although this simplification ignores useful information (such as the degree of abnormality), it makes it easier to demonstrate the fundamental principles of test interpretation discussed below.

The accuracy of diagnostic tests is defined in relation to an accepted “gold standard,” which defines the presumably true state of the patient (Table 3-1). Characterizing the diagnostic performance of a new test requires identifying an appropriate population (ideally, patients in whom the new test would be used) and applying both the new and the gold standard tests to all subjects. Biased estimates of test performance may occur from using an inappropriate population or from incompletely applying the gold standard test. By comparing the two tests, the characteristics of the new test are determined. The sensitivity or true-positive rate of the new test is the proportion of patients with disease (defined by the gold standard) who have a positive (new) test. This measure reflects how well the new test identifies patients with disease. The proportion of patients with disease who have a negative test is the false-negative rate and is calculated as 1 – sensitivity. Among patients without disease, the proportion who have a negative test is the specificity, or true-negative rate. This measure reflects how well the new test correctly identifies patients without disease. Among patients without disease, the proportion who have a positive test is the false-positive rate, calculated as 1 – specificity. A perfect test would have a sensitivity of 100% and a specificity of 100% and would completely distinguish patients with disease from those without it.

TABLE 3-1

MEASURES OF DIAGNOSTIC TEST ACCURACY

                          Disease Present        Disease Absent
Test positive             True positive (TP)     False positive (FP)
Test negative             False negative (FN)    True negative (TN)

Sensitivity (true-positive rate) = TP/(TP + FN)
False-negative rate = FN/(TP + FN) = 1 – sensitivity
Specificity (true-negative rate) = TN/(TN + FP)
False-positive rate = FP/(TN + FP) = 1 – specificity
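These definitions translate directly into arithmetic on the four cells of the table. The sketch below uses hypothetical counts, not data from any study cited here.

```python
# Hypothetical 2x2 comparison of a new test against the gold standard.
tp, fn = 90, 10     # gold standard: disease present
fp, tn = 20, 180    # gold standard: disease absent

sensitivity = tp / (tp + fn)            # true-positive rate -> 0.90
specificity = tn / (tn + fp)            # true-negative rate -> 0.90
false_negative_rate = 1 - sensitivity   # -> 0.10
false_positive_rate = 1 - specificity   # -> 0.10

print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
print(f"false-negative rate {false_negative_rate:.2f}, "
      f"false-positive rate {false_positive_rate:.2f}")
```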

Calculating sensitivity and specificity requires selection of a threshold value or cut point above which the test is considered “positive.” Making the cut point “stricter” (e.g., raising it) lowers sensitivity but improves specificity, whereas making it “laxer” (e.g., lowering it) raises sensitivity but lowers specificity. This dynamic trade-off between more accurate identification of subjects with disease versus those without disease is often displayed graphically as a receiver operating characteristic (ROC) curve (Fig. 3-1) by plotting sensitivity (y axis) versus 1 – specificity (x axis). Each point on the curve represents a potential cut point with an associated sensitivity and specificity value. The area under the ROC curve often is used as a quantitative measure of the information content of a test. Values range from 0.5 (no diagnostic information from testing at all; the test is equivalent to flipping a coin) to 1.0 (perfect test). The choice of cut point should depend on the relative harms and benefits of treatment for those without versus those with disease. For example, if treatment was safe with substantial benefit, then choosing a high-sensitivity cut point (upper right of the ROC curve) for a low-risk test may be appropriate (e.g., phenylketonuria in newborns), but if treatment had substantial risk for harm, then choosing a high-specificity cut point (lower left of the ROC curve) may be appropriate (e.g., amniocentesis that may lead to therapeutic abortion of a normal fetus). The choice of cut point may also depend on the likelihood of disease, with low likelihoods placing a greater emphasis on the harms of treating false-positive tests and higher likelihoods placing a greater emphasis on missed benefit by not treating false-negative tests.


FIGURE 3-1   Each receiver operating characteristic (ROC) curve illustrates a trade-off that occurs between improved test sensitivity (accurate detection of patients with disease) and improved test specificity (accurate detection of patients without disease) as the cut point defining when the test turns from “negative” to “positive” is varied. A 45° line would indicate a test with no predictive value (sensitivity = specificity at every test value). The area under each ROC curve is a measure of the information content of the test. Thus, a larger ROC area signifies increased diagnostic accuracy.
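As a sketch of how such a curve is constructed, the code below sweeps the cut point across invented test values and computes the area under the curve as the probability that a randomly chosen diseased subject scores higher than a nondiseased one, a standard equivalent interpretation of the AUC. All values are assumed for illustration.

```python
# Building an ROC curve by sweeping the cut point (test values invented).
diseased =    [7.1, 6.4, 5.9, 5.2, 4.8, 4.1]   # values in patients with disease
nondiseased = [4.5, 3.9, 3.4, 3.0, 2.6, 2.1]   # values in patients without disease

for cut in sorted(diseased + nondiseased, reverse=True):
    sens = sum(v >= cut for v in diseased) / len(diseased)
    fpr = sum(v >= cut for v in nondiseased) / len(nondiseased)  # 1 - specificity
    print(f"cut point {cut}: sensitivity {sens:.2f}, 1 - specificity {fpr:.2f}")

# AUC = probability that a diseased subject outscores a nondiseased one.
pairs = [(d, n) for d in diseased for n in nondiseased]
auc = sum(d > n for d, n in pairs) / len(pairs)
print(f"AUC = {auc:.2f}")   # 0.5 = coin flip, 1.0 = perfect test
```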

MEASURES OF DISEASE PROBABILITY AND BAYES’ RULE

Unfortunately, there are no perfect tests. After every test is completed, the true disease state of the patient remains uncertain. Quantifying this residual uncertainty can be done with Bayes’ rule, which provides a simple way to calculate the posttest probability of disease from three parameters: the pretest probability of disease, the test sensitivity, and the test specificity. The pretest probability is a quantitative estimate of the likelihood of the diagnosis before the test is performed; it is usually the prevalence of the disease in the underlying population, although occasionally it is the disease incidence. For some common conditions, such as CAD, nomograms and statistical models generate estimates of pretest probability that account for history, physical examination, and test findings. The posttest probability (also called the predictive value of the test) is a revised statement of the likelihood of the diagnosis, accounting for both pretest probability and test results. For the likelihood of disease following a positive test (i.e., positive predictive value), Bayes’ rule is calculated as:

Posttest probability = (pretest probability × sensitivity) / [(pretest probability × sensitivity) + (1 – pretest probability) × (1 – specificity)]

For example, with a pretest probability of 0.50 and a “positive” diagnostic test result (test sensitivity = 0.90 and specificity = 0.90):

Posttest probability = (0.50 × 0.90) / [(0.50 × 0.90) + (0.50 × 0.10)] = 0.45/0.50 = 0.90
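A minimal sketch of this calculation reproduces the example above and shows how strongly the result depends on the pretest probability:

```python
# Bayes' rule for the probability of disease after a positive test result.
def posttest_positive(pretest, sens, spec):
    return (pretest * sens) / (pretest * sens + (1 - pretest) * (1 - spec))

print(f"{posttest_positive(0.50, 0.90, 0.90):.2f}")  # 0.90, as in the example above
print(f"{posttest_positive(0.10, 0.90, 0.90):.2f}")  # 0.50 at a lower pretest probability
print(f"{posttest_positive(0.01, 0.90, 0.90):.2f}")  # 0.08 when the disease is rare
```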

The term predictive value often is used as a synonym for the posttest probability. Unfortunately, clinicians commonly misinterpret reported predictive values as intrinsic measures of test accuracy. Studies of diagnostic tests compound the confusion by calculating predictive values on the same sample used to measure sensitivity and specificity. Since all posttest probabilities are a function of the prevalence of disease in the tested population, such calculations may be misleading unless the test is applied subsequently to populations with the same disease prevalence. For these reasons, the term predictive value is best avoided in favor of the more informative posttest probability following a positive or a negative test result.

The nomogram version of Bayes’ rule (Fig. 3-2) helps us to conceptually understand how it estimates the posttest probability of disease. In this nomogram, the impact of the diagnostic test result is summarized by the likelihood ratio, which is defined as the ratio of the probability of a given test result (e.g., “positive” or “negative”) in a patient with disease to the probability of that result in a patient without disease, thereby providing a measure of how well the test distinguishes those with from those without disease.


FIGURE 3-2   Nomogram version of Bayes’ rule used to predict the posttest probability of disease (right-hand scale) using the pretest probability of disease (left-hand scale) and the likelihood ratio for a positive test (middle scale). See text for information on calculation of likelihood ratios. To use, place a straight edge connecting the pretest probability and the likelihood ratio and read off the posttest probability. The right-hand part of the figure illustrates the value of a positive exercise treadmill test (likelihood ratio 4, green line) and a positive exercise thallium single-photon emission computed tomography perfusion study (likelihood ratio 9, broken yellow line) in a patient with a pretest probability of coronary artery disease of 50%. (Adapted from Centre for Evidence-Based Medicine: Likelihood ratios. Available at http://www.cebm.net/index.aspx?o=1043.)

For a positive test, the likelihood ratio positive is calculated as the ratio of the true-positive rate to the false-positive rate (or sensitivity/[1 – specificity]). For example, a test with a sensitivity of 0.90 and a specificity of 0.90 has a likelihood ratio of 0.90/(1 – 0.90), or 9. Thus, for this hypothetical test, a “positive” result is nine times more likely in a patient with the disease than in a patient without it. Most tests in medicine have likelihood ratios for a positive result between 1.5 and 20. Higher values are associated with tests that more substantially increase the posttest likelihood of disease. A very high likelihood ratio positive (exceeding 10) usually implies high specificity, so a positive high-specificity test helps “rule in” disease. If sensitivity is excellent but specificity is less so, the likelihood ratio will be reduced substantially (e.g., with a 90% sensitivity but a 55% specificity, the likelihood ratio is 2.0).

For a negative test, the corresponding likelihood ratio negative is the ratio of the false-negative rate to the true-negative rate (or [1 – sensitivity]/specificity). Lower likelihood ratio values more substantially lower the posttest likelihood of disease. A very low likelihood ratio negative (falling below 0.10) usually implies high sensitivity, so a negative high-sensitivity test helps “rule out” disease. The hypothetical test considered above with a sensitivity of 0.9 and a specificity of 0.9 would have a likelihood ratio for a negative test result of (1 – 0.9)/0.9, or 0.11, meaning that a negative result is about one-tenth as likely in patients with disease as in those without disease (or 10 times more likely in those without disease than in those with disease).
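The odds form of Bayes’ rule makes these updates simple to compute: convert the pretest probability to odds, multiply by the appropriate likelihood ratio, and convert back. A minimal sketch using the hypothetical 0.90/0.90 test above:

```python
# Likelihood ratios and the odds form of Bayes' rule.
def likelihood_ratios(sens, spec):
    return sens / (1 - spec), (1 - sens) / spec   # LR+, LR-

def update(pretest, lr):
    odds = pretest / (1 - pretest) * lr           # pretest odds x likelihood ratio
    return odds / (1 + odds)                      # posttest odds -> probability

lr_pos, lr_neg = likelihood_ratios(0.90, 0.90)    # 9.0 and ~0.11, as in the text
print(f"{update(0.50, lr_pos):.2f}")   # positive test: 0.50 -> 0.90
print(f"{update(0.50, lr_neg):.2f}")   # negative test: 0.50 -> 0.10
```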

APPLICATIONS TO DIAGNOSTIC TESTING IN CAD

Consider two tests commonly used in the diagnosis of CAD: an exercise treadmill test and an exercise single-photon emission CT (SPECT) myocardial perfusion imaging test (Chap. 270e). Meta-analysis has shown that a positive treadmill ST-segment response has an average sensitivity of 66% and an average specificity of 84%, yielding a likelihood ratio of 4.1 (0.66/[1 – 0.84]) (consistent with small discriminatory ability because it falls between 2 and 5). For a patient with a 10% pretest probability of CAD, the posttest probability of disease after a positive result rises to only about 30%. If a patient with a pretest probability of CAD of 80% has a positive test result, the posttest probability of disease is about 95%.

In contrast, the exercise SPECT myocardial perfusion test is more accurate for CAD. For simplicity, assume that the finding of a reversible exercise-induced perfusion defect has both a sensitivity and a specificity of 90%, yielding a likelihood ratio for a positive test of 9.0 (0.90/[1 – 0.90]) (consistent with moderate discriminatory ability because it falls between 5 and 10). For the same 10% pretest probability patient, a positive test raises the probability of CAD to 50% (Fig. 3-2). However, despite the differences in posttest probabilities between these two tests (30% versus 50%), the more accurate test may not improve diagnostic likelihood enough to change patient management (e.g., decision to refer to cardiac catheterization) because the more accurate test has only moved the physician from being fairly certain that the patient did not have CAD to a 50:50 chance of disease. In a patient with a pretest probability of 80%, the exercise SPECT test raises the posttest probability to 97% (compared with 95% for the exercise treadmill). Again, the more accurate test does not provide enough improvement in posttest confidence to alter management, and neither test has improved much on what was known from clinical data alone.
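The figures quoted for the two tests can be reproduced with the same odds-form update, using the likelihood ratios of 4.1 and 9.0 given above; small differences from the rounded percentages in the text are expected.

```python
# Reproducing the treadmill vs. SPECT comparison with the odds form of Bayes' rule.
def posttest(pretest, lr):
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

for pretest in (0.10, 0.80):
    print(f"pretest {pretest:.0%}: treadmill {posttest(pretest, 4.1):.0%}, "
          f"SPECT {posttest(pretest, 9.0):.0%}")
# pretest 10%: treadmill ~31%, SPECT 50%
# pretest 80%: treadmill ~94%, SPECT ~97%
```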

In general, positive results with an accurate test (e.g., likelihood ratio positive 10) when the pretest probability is low (e.g., 20%) do not move the posttest probability to a range high enough to rule in disease (e.g., 80%). In screening situations, pretest probabilities are often particularly low because patients are asymptomatic. In such cases, specificity becomes particularly important. For example, in screening first-time female blood donors without risk factors for HIV, a positive test raised the likelihood of HIV to only 67% despite a specificity of 99.995% because the prevalence was 0.01%. Conversely, with a high pretest probability, a negative test may not rule out disease adequately if it is not sufficiently sensitive. Thus, the largest change in diagnostic likelihood following a test result occurs when the clinician is most uncertain (i.e., pretest probability between 30% and 70%). For example, if a patient has a pretest probability for CAD of 50%, a positive exercise treadmill test will move the posttest probability to 80% and a positive exercise SPECT perfusion test will move it to 90% (Fig. 3-2).
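The blood-donor example can be checked with the same arithmetic; the test’s sensitivity is not stated in the text, so a value of ~100% is assumed here.

```python
# Screening at very low prevalence: specificity dominates the result.
prevalence = 0.0001        # 0.01%
sensitivity = 1.0          # assumed; not stated in the text
specificity = 0.99995

tp = prevalence * sensitivity
fp = (1 - prevalence) * (1 - specificity)
print(f"Posttest probability after a positive screen: {tp / (tp + fp):.0%}")  # ~67%
```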

As presented above, Bayes’ rule employs a number of important simplifications that should be considered. First, few tests have only positive or negative results, and many tests provide multiple outcomes (e.g., ST-segment depression and exercise duration with exercise testing). Although Bayes’ rule can be adapted to this more detailed test result format, it is computationally more complex to do so. Similarly, when multiple tests are performed, the posttest probability may be used as the pretest probability to interpret the second test. However, this simplification assumes conditional independence—that is, that the results of the first test do not affect the likelihood of the second test result—and this is often not true.

Finally, it has long been asserted that sensitivity and specificity are prevalence-independent parameters of test accuracy, and many texts still make this statement. This statistically useful assumption, however, is clinically simplistic. A treadmill exercise test, for example, has a sensitivity in a population of patients with one-vessel CAD of around 30%, whereas its sensitivity in patients with severe three-vessel CAD approaches 80%. Thus, the best estimate of sensitivity to use in a particular decision may vary, depending on the severity of disease in the local population. A hospitalized, symptomatic, or referral population typically has a higher prevalence of disease and, in particular, a higher prevalence of more advanced disease than does an outpatient population. Consequently, test sensitivity will likely be higher in hospitalized patients, and test specificity higher in outpatients.

STATISTICAL PREDICTION MODELS

Bayes’ rule, while illustrative as presented above, provides an unrealistically simple solution to most problems a clinician faces. Predictions based on multivariable statistical models, however, can more accurately address these more complex problems by accounting for specific patient characteristics. In particular, these models explicitly account for multiple possibly overlapping pieces of patient-specific information and assign a relative weight to each on the basis of its unique contribution to the prediction in question. For example, a logistic regression model to predict the probability of CAD considers all the relevant independent factors from the clinical examination and diagnostic testing and their significance instead of the limited data that clinicians can manage in their heads or with Bayes’ rule. However, despite this strength, prediction models are usually too complex computationally to use without a calculator or computer (although this limitation may be overcome once medicine is practiced from a fully computerized platform).
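As a sketch of the underlying mechanics, a logistic model weights each patient factor and converts the weighted sum to a probability. The coefficients below are invented for illustration and are not drawn from any validated CAD model.

```python
import math

# Minimal logistic prediction model; coefficients are invented for illustration.
def predicted_probability(intercept, coefficients, features):
    logit = intercept + sum(c * x for c, x in zip(coefficients, features))
    return 1 / (1 + math.exp(-logit))

coefficients = [0.07, 1.2, 0.9, 0.6]  # per year of age; male sex; typical angina; diabetes
features = [60, 1, 1, 0]              # a 60-year-old man with typical angina, no diabetes
p = predicted_probability(-5.0, coefficients, features)
print(f"Predicted probability of CAD: {p:.0%}")
```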

To date, only a handful of prediction models have been validated properly (for example, Wells criteria for pulmonary embolism) (Table 3-2). The importance of independent validation in a population separate from the one used to develop the model cannot be overstated. An unvalidated prediction model should be viewed with the skepticism appropriate for any new drug or medical device that has not had rigorous clinical trial testing.

TABLE 3-2

WELLS CLINICAL PREDICTION RULE FOR PULMONARY EMBOLISM

Clinical signs and symptoms of deep-vein thrombosis (DVT)   3.0
Alternative diagnosis judged less likely than pulmonary embolism   3.0
Heart rate >100 beats/min   1.5
Immobilization or surgery within the preceding 4 weeks   1.5
Previous DVT or pulmonary embolism   1.5
Hemoptysis   1.0
Malignancy (receiving treatment, treated within 6 months, or palliative)   1.0

Clinical probability of pulmonary embolism: <2.0 points, low; 2.0–6.0 points, moderate; >6.0 points, high.

When statistical models have been compared directly with expert clinicians, they have been found to be more consistent, as would be expected, but not significantly more accurate. Their biggest promise, then, may be in helping less-experienced clinicians identify critical discriminating patient characteristics and become more accurate in their predictions.

FORMAL DECISION SUPPORT TOOLS

DECISION SUPPORT SYSTEMS

Over the last 40 years, many attempts have been made to develop computer systems to aid clinical decision-making and patient management. Such systems are conceptually attractive because computers offer ready access to the vast information available to today’s physicians, and they may support management decisions by making accurate predictions of outcome, simulating the whole decision process, or providing algorithmic guidance. Computer-based predictions using Bayesian or statistical regression models inform a clinical decision but do not actually reach a “conclusion” or “recommendation.” Artificial intelligence systems attempt to simulate or replace human reasoning with a computer-based analogue; to date, such approaches have achieved only limited success. Reminder or protocol-directed systems do not make predictions but instead use existing algorithms, such as guidelines, to direct clinical practice. In general, however, decision support systems have had little impact on practice. Reminder systems, although not yet in widespread use, have shown the most promise, particularly in correcting drug dosing and promoting adherence to guidelines. Checklists, such as those used by airline pilots, have also garnered recent support as an approach to avoiding or reducing errors.

DECISION ANALYSIS

Compared with the decision support methods discussed above, decision analysis is a prescriptive approach to decision-making under uncertainty. Its principal application is in complex decisions that involve substantial risk, considerable uncertainty, trade-offs among outcomes that give patient preferences an important role, or idiosyncratic features for which direct evidence is lacking. As a public health example, Fig. 3-3 displays a decision tree used to evaluate strategies for screening for HIV infection. Infected individuals who are unaware of their illness cause up to 20,000 new cases of HIV infection annually in the United States, and because of delayed diagnosis, about 40% of HIV-positive patients progress to AIDS within a year of the initial diagnosis. Early identification offers the opportunity to prevent progression to AIDS through CD4 count and viral load monitoring and combination antiretroviral therapy, and to reduce spread by curbing risky injection and sexual behaviors.

image

FIGURE 3-3   Basic structure of decision model used to evaluate strategies for screening for HIV in the general population. HAART, highly active antiretroviral therapy. (Provided courtesy of G. Sanders, with permission.)

In 2003, the Centers for Disease Control and Prevention (CDC) proposed that routine universal HIV testing should be incorporated into standard adult medical care and, in part, cited a decision analysis model comparing HIV screening with usual care. Assuming a 1% prevalence of unidentified HIV infection in the population, routine screening of a cohort of 43-year-old men and women increased life expectancy by 5.5 days and lifetime costs by $194 per person screened, yielding an incremental cost-effectiveness ratio for screening versus usual care of $15,078 per quality-adjusted life-year (the additional cost to society to increase population health by 1 year of perfect health). Factors that influenced the results included assumptions about the effectiveness of behavior modification on subsequent sexual behavior, the benefits of early therapy for HIV infection, and the prevalence and incidence of HIV infection in the population targeted. This model, which required over 75 separate data points, provided novel insights into a public health problem in the absence of a randomized clinical trial and helped weigh the pros and cons of such a health policy recommendation. Although such models have been developed for selected clinical problems, their benefit and application to individual real-time clinical management have yet to be demonstrated.
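The arithmetic behind an incremental cost-effectiveness ratio is straightforward, as the sketch below shows. The QALY increment used here is an illustrative, back-calculated value chosen only to approximate the published ratio; it is not an input reported by the model, since the raw life expectancy gain must first be quality-adjusted.

    def icer(cost_new, qaly_new, cost_old, qaly_old):
        """Incremental cost-effectiveness ratio: added dollars per added QALY."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # Per-person figures shaped like the screening analysis cited above:
    # screening adds $194 in lifetime cost; assume ~0.0129 QALYs gained
    # (an illustrative placeholder value).
    print(icer(cost_new=194.0, qaly_new=0.0129, cost_old=0.0, qaly_old=0.0))
    # ~$15,000 per quality-adjusted life-year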

DIAGNOSIS AS AN ELEMENT OF QUALITY OF CARE

High-quality medical care begins with accurate diagnosis. Recently, diagnostic errors have been re-envisioned: the old view held that they stem from insufficient skill on the part of an individual clinician; the new view holds that they are a quality-of-care and patient-safety problem traceable to breakdowns in the health care system. Whether this conceptual shift will lead to new ways to improve diagnosis is uncertain. An annual diagnostic error rate of 10–15%, possibly leading to 40,000 deaths in the United States, is commonly cited, but these figures are imprecise.

Solutions to the “diagnostic errors as a system of care problem” have focused on system-level approaches, such as decision support and other tools integrated into electronic medical records. The use of checklists has been proposed as a means of reducing some of the cognitive errors discussed earlier in the chapter, such as premature closure. Although checklists have been shown to be useful in certain medical contexts, such as operating rooms and intensive care units, their value in preventing diagnostic errors that lead to patient adverse events remains to be shown.

EVIDENCE-BASED MEDICINE

Clinical medicine is defined traditionally as a practice combining medical knowledge (including scientific evidence), intuition, and judgment in the care of patients (Chap. 1). EBM updates this construct by placing much greater emphasis on the processes by which clinicians gain knowledge of the most up-to-date and relevant clinical research to determine for themselves whether medical interventions alter the disease course and improve the length or quality of life. The meaning of practicing EBM becomes clearer through an examination of its four key steps:

1. Formulating the management question to be answered

2. Searching the literature and online databases for applicable research data

3. Appraising the evidence gathered with regard to its validity and relevance

4. Integrating this appraisal with knowledge about the unique aspects of the patient (including the patient’s preferences about the possible outcomes)

The process of searching the world’s research literature and appraising the quality and relevance of studies thus identified can be quite time-consuming and requires skills and training that most clinicians do not possess. Thus, identifying recent systematic overviews of the problem in question (Table 3-3) may offer the best starting point for most EBM searches.

TABLE 3-3

SELECTED TOOLS FOR FINDING THE EVIDENCE IN EVIDENCE-BASED MEDICINE (EBM)

image

Generally, the EBM tools listed in Table 3-3 provide access to research information in one of two forms. The first, primary research reports, consists of the original peer-reviewed research articles published in medical journals and accessible through MEDLINE in abstract form. However, without training in the use of MEDLINE, it can be difficult to locate on-point reports quickly and efficiently in a huge sea of irrelevant or unhelpful citations, and important studies may be missed. The second form, systematic reviews, sits at the highest level of the evidence hierarchy because it comprehensively summarizes the available evidence on a particular topic up to a certain date. To avoid the potential biases of narrative review articles, systematic reviews use predefined explicit search strategies and inclusion and exclusion criteria to find all of the relevant scientific research and grade its quality. The prototype for this kind of resource is the Cochrane Database of Systematic Reviews. When appropriate, a meta-analysis quantitatively summarizes the systematic review’s findings. The next two sections describe the major types of clinical research reports available in the literature and the process of aggregating those data into meta-analyses.

SOURCES OF EVIDENCE: CLINICAL TRIALS AND REGISTRIES

The notion of learning from observation of patients is as old as medicine itself. Over the last 50 years, physicians’ understanding of how best to turn raw observation into useful evidence has evolved considerably. Case reports, personal anecdotal experience, and small single-center case series are now recognized as having severe limitations in validity and generalizability, and although they may generate hypotheses or be the first reports of adverse events, they have no role in formulating modern standards of practice. The major tools used to develop reliable evidence consist of the randomized clinical trial and the large observational registry. A registry or database typically is focused on a disease or syndrome (e.g., cancer, CAD, heart failure), a clinical procedure (e.g., bone marrow transplantation, coronary revascularization), or an administrative process (e.g., claims data used for billing and reimbursement).

By definition, in observational data, the investigator does not control patient care. Carefully collected prospective observational data, however, can approach the evidence quality of major clinical trial data. At the other end of the spectrum, data collected retrospectively (e.g., by chart review or from claims files) are limited in form and content to what previous observers recorded and may not include the specific research data being sought. Advantages of observational data include the inclusion of a broader population, as encountered in actual practice, than is typically represented in clinical trials with their restrictive inclusion and exclusion criteria. In addition, observational data provide the primary evidence for research questions for which a randomized trial cannot be performed. For example, it would be difficult to randomize patients to test diagnostic or therapeutic strategies that are unproven but widely accepted in practice, and it would be unethical to randomize on the basis of sex, racial/ethnic group, socioeconomic status, or country of residence, or to randomize patients to a potentially harmful intervention, such as smoking or deliberate overeating to develop obesity.

A well-done prospective observational study of a particular management strategy differs from a well-done randomized clinical trial most importantly by its lack of protection from treatment selection bias. The use of observational data to compare diagnostic or therapeutic strategies assumes that sufficient uncertainty exists in clinical practice to ensure that similar patients will be managed differently by different physicians. In short, the analysis assumes that a sufficient element of randomness (in the sense of disorder rather than in the formal statistical sense) exists in clinical management. In such cases, statistical models attempt to adjust for important imbalances to “level the playing field” so that a fair comparison among treatment options can be made. When management is clearly not random (e.g., all eligible left main CAD patients are referred for coronary bypass surgery), the problem may be too confounded (biased) for statistical correction, and observational data may not provide reliable evidence.
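The text does not specify a particular adjustment method, but one widely used approach is propensity-score weighting, sketched below on placeholder data: a model of who received treatment is used to reweight outcomes so that measured covariates are balanced between groups. Unmeasured confounders, of course, remain unadjusted, which is exactly the residual weakness described above.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical observational data: baseline covariates, the treatment
    # each patient actually received, and the outcome being compared.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 4))
    treated = rng.integers(0, 2, size=1000)
    died = rng.integers(0, 2, size=1000)

    # Step 1: model the probability of receiving treatment given covariates.
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    # Step 2: inverse-probability-of-treatment weights "level the playing field."
    w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    rate_treated = np.average(died[treated == 1], weights=w[treated == 1])
    rate_control = np.average(died[treated == 0], weights=w[treated == 0])
    print(rate_treated - rate_control)  # covariate-adjusted risk difference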

In general, the use of concurrent controls is vastly preferable to that of historical controls. For example, comparison of current surgical management of left main CAD with left main CAD patients treated medically during the 1970s (the last time these patients were routinely treated with medicine alone) would be extremely misleading because “medical therapy” has substantially improved in the interim.

Randomized controlled clinical trials include the careful prospective design features of the best observational data studies but also include the use of random allocation of treatment. This design provides the best protection against measured and unmeasured confounding due to treatment selection bias (a major aspect of internal validity). However, the randomized trial may not have good external validity (generalizability) if the process of recruitment into the trial resulted in the exclusion of many patients seen in clinical practice.

Consumers of medical evidence need to be aware that randomized trials vary widely in their quality and applicability to practice. The process of designing such a trial often involves many compromises. For example, trials designed to gain U.S. Food and Drug Administration (FDA) approval for an investigational drug or device must fulfill regulatory requirements that may result in a trial population and design that differs substantially from what practicing clinicians would find most useful.

META-ANALYSIS

The Greek prefix meta signifies something at a later or higher stage of development. Meta-analysis is research that combines and summarizes the available evidence quantitatively. Although occasionally used to examine nonrandomized studies, meta-analysis is most typically applied to summarize all randomized trials examining a particular therapy. Ideally, unpublished trials should be identified and included to avoid publication bias (i.e., missing “negative” trials that are never published). Furthermore, the best meta-analyses obtain and analyze individual patient-level data from all trials rather than working only with the summary data in published reports of each trial. Nonetheless, not all published meta-analyses yield reliable evidence for a particular problem, so their methodology should be scrutinized carefully for proper study design and analysis. The results of a well-done meta-analysis are most persuasive when they include at least several large-scale, properly performed randomized trials. Meta-analysis can be especially helpful in detecting benefits when individual trials are inadequately powered (e.g., the benefits of streptokinase thrombolytic therapy in acute MI, demonstrated by ISIS-2 in 1988, were already evident through meta-analysis by the early 1970s). However, when the available trials are small or poorly done, meta-analysis should not be viewed as a remedy for deficient primary trial data.
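As a minimal sketch of the quantitative summarization step, the code below pools hypothetical trial results using a fixed-effect, inverse-variance weighting of log odds ratios. A real meta-analysis would also assess heterogeneity across trials and consider random-effects models; the trial counts here are placeholders.

    import math

    # Hypothetical trials: (events_treated, n_treated, events_control, n_control)
    trials = [(40, 500, 55, 500), (12, 150, 20, 150), (90, 1200, 120, 1200)]

    num = den = 0.0
    for a, n1, c, n0 in trials:
        b, d = n1 - a, n0 - c
        log_or = math.log((a * d) / (b * c))   # log odds ratio for this trial
        var = 1 / a + 1 / b + 1 / c + 1 / d    # its approximate variance
        num += log_or / var                    # inverse-variance weighting
        den += 1 / var

    pooled_or = math.exp(num / den)
    ci_low, ci_high = (math.exp(num / den + z / math.sqrt(den))
                       for z in (-1.96, 1.96))
    print(pooled_or, ci_low, ci_high)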

Meta-analyses typically focus on summary measures of relative treatment benefit, such as odds ratios or relative risks. Clinicians also should examine what absolute risk reduction (ARR) can be expected from the therapy. A useful summary metric of absolute treatment benefit is the number needed to treat (NNT) to prevent one adverse outcome event (e.g., death, stroke). NNT is simply 1/ARR. For example, if a hypothetical therapy reduced mortality rates over a 5-year follow-up by 33% (the relative treatment benefit) from 12% (control arm) to 8% (treatment arm), the ARR would be 12% – 8% = 4%, and the NNT would be 1/0.04, or 25. Thus, it would be necessary to treat 25 patients for 5 years to prevent 1 death. If the hypothetical treatment were applied to a lower-risk population, say, with a 6% 5-year mortality, the 33% relative treatment benefit would reduce absolute mortality by 2% (from 6% to 4%), and the NNT for the same therapy in this lower-risk group of patients would be 50. Although not always made explicit, comparisons of NNT estimates from different studies should account for the duration of follow-up used to create each estimate.
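The worked example above translates directly into code; the helper below simply computes NNT = 1/ARR from the control and treatment event rates quoted in the text.

    def number_needed_to_treat(control_rate, treated_rate):
        """NNT = 1 / absolute risk reduction."""
        return 1.0 / (control_rate - treated_rate)

    print(number_needed_to_treat(0.12, 0.08))  # 25: higher-risk population
    print(number_needed_to_treat(0.06, 0.04))  # 50: lower-risk population
    # Both rates reflect 5-year mortality, so each NNT is per 5 years of therapy.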

CLINICAL PRACTICE GUIDELINES

According to the 1990 Institute of Medicine definition, clinical practice guidelines are “systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances.” This definition emphasizes several crucial features of modern guideline development. First, guidelines are created by using the tools of EBM. In particular, the core of the development process is a systematic literature search followed by a review of the relevant peer-reviewed literature. Second, guidelines usually are focused on a clinical disorder (e.g., adult diabetes, stable angina pectoris) or a health care intervention (e.g., cancer screening). Third, the primary objective of guidelines is to improve the quality of medical care by identifying care practices that should be routinely implemented, based on high-quality evidence and high benefit-to-harm ratios for the interventions. Guidelines are intended to “assist” decision-making, not to define explicitly what decisions should be made in a particular situation, in part because evidence alone is never sufficient for clinical decision-making (e.g., deciding whether to intubate and administer antibiotics for pneumonia in a terminally ill individual, in an individual with dementia, or in an otherwise healthy 30-year-old mother).

Guidelines are narrative documents constructed by expert panels whose composition often is determined by interested professional organizations. These panels vary in the degree to which they represent all relevant stakeholders. The guideline documents consist of a series of specific management recommendations, a summary indication of the quantity and quality of evidence supporting each recommendation, an assessment of the benefit-to-harm ratio for the recommendation, and a narrative discussion of the recommendations. Many recommendations simply reflect the expert consensus of the guideline panel because literature-based evidence is absent. The final step in guideline construction is peer review, followed by a final revision in response to the critiques provided. To improve the reliability and trustworthiness of guidelines, the Institute of Medicine has made methodologic recommendations for guideline development.

Guidelines are closely tied to the process of quality improvement in medicine through their identification of evidence-based best practices. Such practices can be used as quality indicators. Examples include the proportion of acute MI patients who receive aspirin upon admission to a hospital and the proportion of heart failure patients with a depressed ejection fraction treated with an ACE inhibitor.

CONCLUSIONS

In this era of EBM, it is tempting to think that all the difficult decisions practitioners face have been or soon will be solved and digested into practice guidelines and computerized reminders. However, EBM provides practitioners with an ideal rather than a finished set of tools with which to manage patients. Moreover, even with such evidence, it is always worth remembering that the response to therapy of the “average” patient represented by the summary clinical trial outcomes may not be what can be expected for the specific patient sitting in front of a physician in the clinic or hospital. In addition, meta-analyses cannot generate evidence when there are no adequate randomized trials, and most of what clinicians confront in practice will never be thoroughly tested in a randomized trial. For the foreseeable future, excellent clinical reasoning skills, experience supplemented by well-designed quantitative tools, and a keen appreciation for the role of individual patient preferences in their health care will continue to be of paramount importance in the practice of clinical medicine.


4

Screening and Prevention of Disease

Katrina Armstrong, Gary J. Martin


 

A primary goal of health care is to prevent disease or detect it early enough that intervention will be more effective. Tremendous progress has been made toward this goal over the last 50 years. Screening tests are available for many common diseases and encompass biochemical (e.g., cholesterol, glucose), physiologic (e.g., blood pressure, growth curves), radiologic (e.g., mammogram, bone densitometry), and cytologic (e.g., Pap smear) approaches. Effective preventive interventions have resulted in dramatic declines in mortality from many diseases, particularly infections. Preventive interventions include counseling about risk behaviors, vaccinations, medications, and, in some relatively uncommon settings, surgery. Preventive services (including screening tests, preventive interventions, and counseling) are different from other medical interventions because they are proactively administered to healthy individuals rather than in response to a symptom, sign, or diagnosis. Thus, the decision to recommend a screening test or preventive intervention requires a particularly high bar of evidence that testing and intervention are both practical and effective.

Because population-based screening and prevention strategies must be extremely low risk to have an acceptable benefit-to-harm ratio, the ability to target individuals who are more likely to develop disease could enable the application of a wider set of potential approaches and increase efficiency. Currently, there are many types of data that can predict disease incidence in an asymptomatic individual. Genomic data have received the most attention to date, at least in part because mutations in high-penetrance genes have clear implications for preventive care (Chap. 84). Women with mutations in either BRCA1 or BRCA2, the two major breast cancer susceptibility genes identified to date, have a markedly increased risk (5- to 20-fold) of breast and ovarian cancer. Screening and prevention recommendations include prophylactic oophorectomy and breast magnetic resonance imaging (MRI), both of which are considered to incur too much harm for women at average cancer risk. Some women opt for prophylactic mastectomy to dramatically reduce their breast cancer risk. Although the proportion of common disease explained by high-penetrance genes appears to be relatively small (5–10% of most diseases), mutations in rarer moderate-penetrance genes and variants in low-penetrance genes also contribute to the prediction of disease risk. The advent of affordable whole exome/whole genome sequencing is likely to speed the dissemination of these tests into clinical practice and may transform the delivery of preventive care.

Other forms of “omic” data also have the potential to provide important predictive information, including proteomics and metabolomics. These fields are earlier in development and have yet to move into clinical practice. Imaging and other clinical data may also be integrated into a risk-stratified paradigm as evidence grows about the predictive ability of these data and the feasibility of their collection. Of course, all of these data may also be helpful in predicting the risk of harms from screening or prevention, such as the risk of a false-positive mammogram. To the degree that this information can be incorporated into personalized screening and prevention strategies, it could also improve delivery and efficiency.

In addition to advances in risk prediction, there are several other factors that are likely to promote the importance of screening and prevention in the near term. New imaging modalities are being developed that promise to detect changes at the cellular and subcellular levels, greatly increasing the probability that early detection improves outcomes. The rapidly growing understanding of the biologic pathways underlying initiation and progression of many common diseases has the potential to transform the development of preventive interventions, including chemoprevention. Furthermore, screening and prevention offer the promise of both improving health and sparing the costs of disease treatment, an issue that has gained national attention with the continued growth in health care costs.

This chapter will review the basic principles of screening and prevention in the primary care setting. Recommendations for specific disorders such as cardiovascular disease, diabetes, and cancer are provided in the chapters dedicated to those topics.

BASIC PRINCIPLES OF SCREENING

The basic principles of screening populations for disease were published by the World Health Organization in 1968 (Table 4-1).

TABLE 4-1

PRINCIPLES OF SCREENING


The condition should be an important health problem.

There should be a treatment for the condition.

Facilities for diagnosis and treatment should be available.

There should be a latent stage of the disease.

There should be a test or examination for the condition.

The test should be acceptable to the population.

The natural history of the disease should be adequately understood.

There should be an agreed policy on whom to treat.

The cost of finding a case should be balanced in relation to overall medical expenditure.


In general, screening is most effective when applied to relatively common disorders that carry a large disease burden (Table 4-2). The five leading causes of mortality in the United States are heart diseases, malignant neoplasms, accidents, cerebrovascular diseases, and chronic obstructive pulmonary disease. Thus, many screening strategies are targeted at these conditions. From a global health perspective, these conditions are priorities, but malaria, malnutrition, AIDS, tuberculosis, and violence also carry a heavy disease burden (Chap. 2).

TABLE 4-2

LIFETIME CUMULATIVE RISK

image

Having an effective treatment for early disease has proven challenging for some common diseases. For example, although Alzheimer’s disease is the sixth leading cause of death in the United States, there are no curative treatments and no evidence that early treatment improves outcomes. Lack of facilities for diagnosis and treatment is a particular challenge for developing countries and may change screening strategies, including the development of “see and treat” approaches such as those currently used for cervical cancer screening in some countries. A long latent or preclinical phase, during which early treatment increases the chance of cure, is a hallmark of many cancers; for example, polypectomy prevents progression to colon cancer. Similarly, early identification of hypertension or hyperlipidemia allows therapeutic interventions that reduce the long-term risk of cardiovascular or cerebrovascular events. In contrast, lung cancer screening has historically proven more challenging because most tumors are not curable by the time they can be detected on a chest x-ray. However, the length of the preclinical phase also depends on the level of resolution of the screening test, and this situation changed with the development of chest computed tomography (CT). Low-dose chest CT scanning can detect tumors earlier and was recently demonstrated to reduce lung cancer mortality by 20% in individuals who had at least a 30-pack-year history of smoking. The short interval between the ability to detect disease on a screening test and the development of incurable disease also contributes to the limited effectiveness of mammography screening in reducing breast cancer mortality among premenopausal women. Similarly, the early detection of prostate cancer may not lead to a difference in the mortality rate, because the disease is often indolent and competing morbidities, such as coronary artery disease, may ultimately cause death (Chap. 100). This uncertainty about the natural history is also reflected in the controversy about treatment of prostate cancer, further contributing to the challenge of screening in this disease. Finally, screening programs can incur significant economic costs that must be considered in the context of the available resources and alternative strategies for improving health outcomes.

METHODS OF MEASURING HEALTH BENEFITS

Because screening and preventive interventions are recommended to asymptomatic individuals, they are held to a high standard for demonstrating a favorable risk-benefit ratio before implementation. In general, the principles of evidence-based medicine apply to demonstrating the efficacy of screening tests and preventive interventions, where randomized controlled trials (RCTs) with mortality outcomes are the gold standard. However, because RCTs are often not feasible, observational studies, such as case-control designs, have been used to assess the effectiveness of some interventions such as colorectal cancer screening. For some strategies, such as cervical cancer screening, the only data available are ecologic data demonstrating dramatic declines in mortality.

Irrespective of the study design used to assess the effectiveness of screening, it is critical that disease incidence or mortality is the primary endpoint rather than length of disease survival. This is important because lead time bias and length time bias can create the appearance of an improvement in disease survival from a screening test when there is no actual effect. Lead time bias occurs because screening identifies a case before it would have presented clinically, thereby creating the perception that a patient lived longer after diagnosis simply by moving the date of diagnosis earlier rather than the date of death later. Length time bias occurs because screening is more likely to identify slowly progressive disease than rapidly progressive disease. Thus, within a fixed period of time, a screened population will have a greater proportion of these slowly progressive cases and will appear to have better disease survival than an unscreened population.
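A toy simulation makes lead time bias concrete. In the sketch below (all parameters hypothetical), screening merely shifts the date of diagnosis earlier while every age at death is unchanged, yet survival measured from diagnosis looks longer in the screened group.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000
    clinical_dx = rng.uniform(55, 75, n)            # age at clinical presentation
    death = clinical_dx + rng.exponential(4, n)     # age at death (unchanged)
    screen_dx = clinical_dx - rng.uniform(1, 3, n)  # screening detects earlier

    print((death - clinical_dx).mean())  # mean survival after clinical diagnosis
    print((death - screen_dx).mean())    # "longer" survival after screen detection
    # The ages at death are identical; only the survival stopwatch started earlier.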

A variety of endpoints are used to assess the potential gain from screening and preventive interventions.

1. The absolute and relative impact of screening on disease incidence or mortality. The absolute difference in disease incidence or mortality between a screened and an unscreened group allows comparison of the size of the benefit across preventive services (see the sketch after Table 4-3). A meta-analysis of Swedish mammography trials (ages 40–70) found that ~1.2 fewer women per 1000 would die from breast cancer if they were screened over a 12-year period. By comparison, ~3 lives per 1000 would be saved from colon cancer in a population (ages 50–75) screened with annual fecal occult blood testing (FOBT) over a 13-year period. By this measure, colon cancer screening may actually save more women’s lives than does mammography. However, the relative impact of FOBT (30% reduction in colon cancer death) is similar to the relative impact of mammography (14–32% reduction in breast cancer death), emphasizing the importance of examining both relative and absolute measures.

2. The number of subjects screened to prevent disease or death in one individual. The inverse of the absolute difference in mortality is the number of subjects who would need to be screened or receive a preventive intervention to prevent one death. For example, 731 women ages 65–69 would need to be screened by dual-energy x-ray absorptiometry (DEXA) (and treated appropriately) to prevent one hip fracture from osteoporosis.

3. Increase in average life expectancy for a population. Predicted increases in life expectancy for various screening and preventive interventions are listed in Table 4-3. It should be noted, however, that the increase in life expectancy is an average that applies to a population, not to an individual. In reality, the vast majority of the population does not derive any benefit from a screening test or preventive intervention. A small subset of patients, however, will benefit greatly. For example, Pap smears do not benefit the 98% of women who never develop cancer of the cervix. However, for the 2% who would have developed cervical cancer, Pap smears may add as much as 25 years to their lives. Some studies suggest that a 1-month gain of life expectancy is a reasonable goal for a population-based screening or prevention strategy.

TABLE 4-3

ESTIMATED AVERAGE INCREASE IN LIFE EXPECTANCY FOR A POPULATION

image
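The sketch below, referenced in endpoint 1 above, computes the absolute difference, the relative reduction, and the number needed to screen from the mammography figures cited. The baseline death rate of 4 per 1000 over 12 years is a hypothetical value chosen only to make the arithmetic concrete.

    deaths_unscreened = 4.0 / 1000   # hypothetical baseline over 12 years
    deaths_screened = 2.8 / 1000     # ~1.2 fewer deaths per 1000 (as cited)

    absolute_reduction = deaths_unscreened - deaths_screened      # 0.0012
    relative_reduction = absolute_reduction / deaths_unscreened   # 30% here
    number_needed_to_screen = 1.0 / absolute_reduction            # ~833 women

    print(absolute_reduction, relative_reduction,
          round(number_needed_to_screen))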

ASSESSING THE HARMS OF SCREENING AND PREVENTION

Just as with most aspects of medical care, screening and preventive interventions also incur the possibility of adverse outcomes. These adverse outcomes include side effects from preventive medications and vaccinations, false-positive screening tests, overdiagnosis of disease from screening tests, anxiety, radiation exposure from some screening tests, and discomfort from some interventions and screening tests. The risk of side effects from preventive medications is analogous to the use of medications in therapeutic settings and is considered in the Food and Drug Administration (FDA) approval process. Side effects from currently recommended vaccinations are primarily limited to discomfort and minor immune reactions. However, the concern about associations between vaccinations and serious adverse outcomes continues to limit the acceptance of many vaccinations despite the lack of data supporting the causal nature of these associations.

The possibility of a false-positive test occurs with nearly all screening tests, although the definition of what constitutes a false-positive result often varies across settings. For some tests such as screening mammography and screening chest CT, a false-positive result occurs when an abnormality is identified that is not malignant, requiring either a biopsy diagnosis or short-term follow-up. For other tests such as Pap smears, a false-positive result occurs because the test identifies a wide range of potentially premalignant states, only a small percentage of which would ever progress to an invasive cancer. This risk is closely tied to the risk of overdiagnosis in which the screening test identifies disease that would not have presented clinically in the patient’s lifetime. Assessing the degree of overdiagnosis from a screening test is very difficult given the need for long-term follow-up of an unscreened population to determine the true incidence of disease over time. Recent estimates suggest that as much as 15–25% of breast cancers identified by mammography screening and 15–37% of prostate cancers identified by prostate-specific antigen testing may never have presented clinically. Screening tests also have the potential to create unwarranted anxiety, particularly in conjunction with false-positive findings. Although multiple studies have documented increased anxiety through the screening process, there are few data suggesting this anxiety has long-term adverse consequences, including subsequent screening behavior. Screening tests that involve radiation (e.g., mammography, chest CT) add to the cumulative radiation exposure for the screened individual. The absolute amount of radiation is very small from any of these tests, but the overall impact of repeated exposure from multiple sources is still being determined. Some preventive interventions (e.g., vaccinations) and screening tests (e.g., mammography) may lead to discomfort at the time of administration, but again, there is little evidence of long-term adverse consequences.

WEIGHING THE BENEFITS AND HARMS

The decision to implement a population-based screening and prevention strategy requires weighing the benefits and harms, including the economic impact of the strategy. The costs include not only the expense of the intervention but also time away from work, downstream costs from false-positive results or adverse events, and other potential harms. Cost-effectiveness is typically assessed by calculating the cost per year of life saved, with adjustment for the quality of life impact of different interventions and disease states (i.e., quality-adjusted life-year). Typically, strategies that cost <$50,000 to $100,000 per quality-adjusted year of life saved are considered “cost-effective” (Chap. 3).

The U.S. Preventive Services Task Force (USPSTF) is an independent panel of experts in preventive care that provides evidence-based recommendations for screening and preventive strategies based on an assessment of the benefit-to-harm ratio (Tables 4-4 and 4-5). Because there are multiple advisory organizations providing recommendations for preventive services, the agreement among the organizations varies across the different services. For example, all advisory groups support screening for hyperlipidemia and colorectal cancer, whereas consensus is lower for breast cancer screening among women in their 40s and almost nonexistent for prostate cancer screening. Because the guidelines are only updated periodically, differences across advisory organizations may also reflect the data that were available when the guideline was issued. For example, multiple organizations have recently issued recommendations supporting lung cancer screening among heavy smokers based on the results of the National Lung Screening Trial (NLST) published in 2011, whereas the USPSTF did not review lung cancer screening until 2014.

TABLE 4-4

SCREENING TESTS RECOMMENDED BY THE U.S. PREVENTIVE SERVICES TASK FORCE FOR AVERAGE-RISK ADULTS

image

TABLE 4-5

PREVENTIVE INTERVENTIONS RECOMMENDED FOR AVERAGE-RISK ADULTS

image

For many screening tests and preventive interventions, the balance of benefits and harms may be uncertain for the average-risk population but more favorable for individuals at higher risk for disease. Although age is the most commonly used risk factor for determining screening and prevention recommendations, the USPSTF also recommends some screening tests in populations with other risk factors for the disease (e.g., syphilis). In addition, being at increased risk for the disease often supports initiating screening at an earlier age than that recommended for the average-risk population. For example, when there is a significant family history of breast or colon cancer, it is prudent to initiate screening 10 years before the age at which the youngest family member was diagnosed with cancer.

Although informed consent is important for all aspects of medical care, shared decision-making may be a particularly important approach to decisions about preventive services when the benefit-to-harm ratio is uncertain for a specific population. For example, many expert groups, including the USPSTF, recommend an individualized discussion about prostate cancer screening, because the decision-making process is complex and relies heavily on personal issues. Some men may decline screening, whereas others may be more willing to accept the risks of an early detection strategy. Recent analysis suggests that many men may be better off not being screened for prostate cancer, because watchful waiting emerged as the preferred strategy when quality-adjusted life-years were considered. Another example of shared decision-making involves the choice of techniques for colon cancer screening (Chap. 100). In controlled studies, the use of annual FOBT reduces colon cancer deaths by 15–30%. Flexible sigmoidoscopy reduces colon cancer deaths by ~60%. Colonoscopy offers the same benefit as or greater benefit than flexible sigmoidoscopy, but its use incurs additional costs and risks. These screening procedures have not been compared directly in the same population, but the estimated cost to society is similar: $10,000–25,000 per year of life saved. Thus, although one patient may prefer the ease of preparation, less time disruption, and the lower risk of flexible sigmoidoscopy, others may prefer the sedation and thoroughness of colonoscopy.

COUNSELING ON HEALTHY BEHAVIORS

In considering the impact of preventive services, it is important to recognize that tobacco and alcohol use, diet, and exercise constitute the vast majority of factors that influence preventable deaths in developed countries. Perhaps the single greatest preventive health care measure is to help patients quit smoking (Chap. 470). However, efforts in these areas frequently involve behavior changes (e.g., weight loss, exercise, seat belts) or the management of addictive conditions (e.g., tobacco and alcohol use) that are often recalcitrant to intervention. Although these are challenging problems, evidence strongly supports the role of counseling by health care providers (Table 4-6) in effecting health behavior change. Educational campaigns, public policy changes, and community-based interventions have also proven to be important parts of a strategy for addressing these factors in some settings. Although the USPSTF found the evidence conclusive enough to recommend only a relatively small set of counseling activities, counseling in areas such as physical activity and injury prevention (including seat belts and bicycle and motorcycle helmets) has nonetheless become a routine part of primary care practice.

TABLE 4-6

PREVENTIVE COUNSELING RECOMMENDED BY THE USPSTF

image

IMPLEMENTING DISEASE PREVENTION AND SCREENING

The implementation of disease prevention and screening strategies in practice is challenging. A number of techniques can assist physicians with the delivery of these services. An appropriately configured electronic health record can provide reminder systems that make it easier for physicians to track and meet guidelines (a toy example of such reminder logic is sketched after Table 4-7). Some systems give patients secure access to their medical records, providing an additional means to enhance adherence to routine screening. Systems that provide nurses and other staff with standing orders are effective for smoking prevention and immunizations. The Agency for Healthcare Research and Quality and the Centers for Disease Control and Prevention have developed flow sheets and electronic tools as part of their “Put Prevention into Practice” program (http://www.uspreventiveservicestaskforce.org/tools.htm). Many of these tools use age categories to help guide implementation. Age-specific recommendations for screening and counseling are summarized in Table 4-7.

TABLE 4-7

AGE-SPECIFIC CAUSES OF MORTALITY AND CORRESPONDING PREVENTIVE OPTIONS

image

image
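As a minimal sketch of how an electronic record might encode such reminders, the rule set below is purely illustrative: the age ranges and screening intervals are placeholders and are not a restatement of USPSTF or any other guideline.

    from dataclasses import dataclass

    @dataclass
    class Patient:
        age: int
        sex: str                        # "F" or "M"
        years_since_colon_screen: float
        years_since_mammogram: float

    def due_reminders(p):
        """Return screening reminders due under illustrative placeholder rules."""
        due = []
        if 50 <= p.age <= 75 and p.years_since_colon_screen >= 10:
            due.append("colorectal cancer screening")
        if p.sex == "F" and 50 <= p.age <= 74 and p.years_since_mammogram >= 2:
            due.append("screening mammography")
        return due

    print(due_reminders(Patient(62, "F", 11.0, 3.0)))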

Many patients see a physician for ongoing care of chronic illnesses, and this visit provides an opportunity to include a “measure of prevention” for other health problems. For example, a patient seen for management of hypertension or diabetes can have breast cancer screening incorporated into one visit and a discussion about colon cancer screening at the next visit. Other patients may respond more favorably to a clearly defined visit that addresses all relevant screening and prevention interventions. Because of age or comorbidities, it may be appropriate with some patients to abandon certain screening and prevention activities, although there are fewer data about when to “sunset” these services. For many screening tests, the benefit of screening does not accrue until 5 to 10 years of follow-up, and there are generally few data to support continuing screening for most diseases past age 75. In addition, for patients with advanced diseases and limited life expectancy, there is considerable benefit from shifting the focus from screening procedures to the conditions and interventions more likely to affect quality and length of life.