Chapter 1 Ensuring Patient Safety in Surgery―First Do No Harm
Primum non nocere—first do no harm. This often-quoted phrase epitomizes the importance the medical community places on avoiding iatrogenic complications.1 In the process of providing care, patients, physicians, and the entire clinical team join to use all available medical weapons to combat disease and avert the natural history of pathologic processes. Iatrogenic injury or, simply, “treatment-related harm” occurs when this implicit rule to “first do no harm” is violated. Both society and the medical community have historically been intolerant of medical mistakes, associating them with negligence. The fact is that complex medical care is prone to failure. Medical mistakes are much like “friendly-fire” incidents in which soldiers in the high-tempo, complex fog of war mistakenly kill comrades rather than the enemy. Invariably, medical error and iatrogenic injury are associated with multiple latent conditions (constraints, hazards, system vulnerabilities, etc.) that predispose front-line clinicians to err. This chapter reviews the science of human error in medicine and surgery. The specific case of wrong-sided brain surgery is used to illustrate emerging strategies for enhancing patient safety.
The Nature of Iatrogenic Injury in Medicine and Surgery
The earliest practitioners of medicine recognized and described iatrogenic injury. Iatrogenic (Greek, iatros = doctor, genic = arising from or developing from) literally translates to “disease or illness caused by doctors.” Famous examples exist of likely iatrogenic deaths, such as that of George Washington, who died while being treated with blood-letting for an acute throat infection. The Royal Medical and Surgical Society, in 1864, documented 123 deaths that “could be positively assigned to the inhalation of chloroform.”2 Throughout history, physicians have reviewed unexpected outcomes related to the medical care they provided in order to learn and improve that care. The “father” of modern neurosurgery, Harvey Cushing, and his contemporary Sir William Osler modeled the practice of learning from error by publishing their errors openly so as to warn others how to avert future occurrences.3–5 However, the magnitude of iatrogenic morbidity and mortality was not quantified across the spectrum of health care until the Harvard Medical Practice Study, published in 1991.6 This seminal study estimated that iatrogenic failure occurs in approximately 4% of all hospitalizations and is the eighth leading cause of death in America—responsible for up to 100,000 deaths per year in the United States alone.7
A subsequent review of over 14,700 hospitalizations in Colorado and Utah identified 402 surgical adverse events, an annual incidence rate of 1.9%.8 These surgical adverse events were categorized by type of injury and by preventability (Table 1-1).
Table 1-1 Surgical Adverse Events by Type of Injury and Preventability

| Type of Event | Percentage of Adverse Events | Percentage Preventable |
|---|---|---|
| Technique-related complication | 24 | 68 |
| Wound infection | 11 | 23 |
| Postoperative bleeding | 11 | 85 |
| Postpartum/neonatal related | 8 | 67 |
| Other infection | 7 | 38 |
| Drug-related injury | 7 | 46 |
| Wound problem (noninfectious) | 4 | 53 |
| Deep venous thrombosis | 4 | 18 |
| Nonsurgical procedure injury | 3 | 59 |
| Diagnostic error/delay | 3 | 100 |
| Pulmonary embolus | 2 | 14 |
| Acute myocardial infarction | 2 | 0 |
| Inappropriate therapy | 2 | 100 |
| Anesthesia injury | 2 | 45 |
| Congestive heart failure | 1 | 33 |
| Stroke | 1 | 0 |
| Pneumonia | 1 | 65 |
| Fall | 0.5 | 50 |
| Other | 5.5 | 32 |
These two studies were designed to characterize iatrogenic complications across health care. While they were not statistically powered to allow surgical subspecialty analysis, the types of failures and subsequent injuries they identified can likely be generalized to the neurosurgical patient population. More recent literature supports the findings of these landmark studies.9–11
The Institute of Medicine used the Harvard Medical Practice Study as the basis for its report on medical errors, “To Err Is Human: Building a Safer Health System,” which endorsed the need to discuss and study errors openly with the goal of improving patient safety.7 Published in 1999 and focused on medical errors and their prevention, the report must be considered a landmark publication.12 It was followed by other quality improvement initiatives, such as the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) Sentinel Event Program.12
One might argue that morbidity and mortality reviews already achieve this aim. The “M&M” conference has a long history of reviewing negative outcomes in medicine. The goal of this traditional conference is to learn how to prevent future patients from suffering similar harm, and thus to incrementally improve care. However, frank discussion of error is limited in M&M conferences, and the actual review practices fail to support deep learning about systemic vulnerabilities13; indeed, because M&M conferences do not explicitly require medical errors to be reviewed, errors are rarely addressed. One prospective investigation of four U.S. academic hospitals found that a resident vigilantly attending weekly internal medicine M&M conferences for an entire year would hear error discussed only once. The surgical version of the M&M conference fared better in this regard: surgeons discussed adverse events associated with error 77% of the time. However, individual provider error was the focus of discussion and was cited as the cause of the negative outcome in 8 of 10 conference discussions.13 Surgical conference discussion rarely identified structural defects, resource constraints, team communication, or other system problems. Further limiting its utility, the M&M conference is reactive by nature and highly subject to hindsight bias, a bias that pervades most clinical outcome reviews, which focus solely on medical providers and their decision making.14 In their report “Nine Steps to Move Forward from Error” in medicine, human factors experts Cook and Woods challenged the medical community to resist the temptation to simplify the complexities that practitioners face when reviewing accidents post hoc. Premature closure by blaming the closest clinician hides the deeper patterns and multiple contributors associated with failure, and ultimately leads to naive “solutions” that are weak or even counterproductive.15 The Institute of Medicine has likewise cautioned against blaming an individual and recommending training as the sole outcome of case review.7 While the culture within medicine is to learn from failure, the M&M conference does not typically achieve this aim.
A Human Factors Approach to Improving Patient Safety
Murphy’s law—that whatever can go wrong will—is the common-sense explanation for medical mishaps. The science of safety (and of how to create it), however, is not common sense. The field of human factors engineering grew out of a focus on human interaction with physical devices, especially in military and industrial settings. This initial focus on how to improve human performance addressed the problem of workers who are at high risk for injury while using a tool or machine in high-hazard industries. In the past several decades, the scope of this science has broadened, and human factors engineering is now credited with advancing safety and reliability in aviation, nuclear power, and other high-hazard work settings. Membership in the Human Factors and Ergonomics Society in North America alone has grown to over 15,000. Human factors engineering and related disciplines are deeply interested in modeling and understanding mechanisms of complex system failure. Furthermore, these applied sciences have developed strategies for designing error prevention and building error tolerance into systems to increase reliability and safety, and these strategies are now being applied to the health care industry.16–21

The specialty of anesthesiology has employed this science to reduce the anesthesia-related mortality rate from approximately 1 in 10,000 in the 1970s to approximately 1 in 250,000 three decades later.22 In 1978, the bioengineer Jeffrey Cooper used critical incident analysis to identify preventable anesthesia mishaps.23 Cooper’s seminal work was supplemented by the “closed-claim” liability studies, which delineated the most common and severe modes of failure and the factors that contributed to those failures. The specialty of anesthesiology and its leaders endorsed the precept that safety stems more from improved system design than from increased vigilance of individual practitioners. As a direct result, anesthesiology was the first specialty to adopt minimal standards for care and monitoring, preanesthesia equipment checklists similar to those used in commercial aviation, standardized medication labels, interlocking hardware to prevent gas mix-ups, and international anesthesia machine standards, and the first to develop high-fidelity human simulation to support crisis team training in the management of rare events. Lucian Leape, a former surgeon, one of the lead authors of the Harvard Medical Practice Study, and a national advocate for patient safety, has stated, “Anesthesia is the only system in healthcare that begins to approach the vaunted ‘six sigma’ (a defect rate of 1 in a million) level of clinical safety perfection that other industries strive for. This outstanding achievement is attributable not to any single practice or development of new anesthetic agents or even any type of improvement (such as technological advances) but to application of a broad array of changes in process, equipment, organization, supervision, training, and teamwork. However, no single one of these changes has ever been proven to have a clear-cut impact on mortality. Rather, anesthesia safety was achieved by applying a whole host of changes that made sense, were based on an understanding of human factors principles, and had been demonstrated to be effective in other settings.”24 The Anesthesia Patient Safety Foundation, which has become the clearinghouse for patient safety successes in anesthesiology, was used as a model by the American Medical Association to form the National Patient Safety Foundation in 1996.25 Over the subsequent decade, the science of safety has begun to permeate health care.
The human factors psychologist James Reason has characterized accidents as evolving over time and as virtually never being the consequence of a single cause.26,27 Rather, he describes an accident as the net result of a local trigger that initiates an incident and then propagates it through a hole in one layer of defense after another until irreversible injury occurs (Fig. 1-1). This model has been referred to as the “Swiss cheese” model of accident causation. Surgical care consists of thousands of tasks and subtasks. Errors in the execution of these tasks must be prevented, detected, and managed, or tolerated. The layers of Swiss cheese represent the system of defenses against such error. Latent conditions, the “accidents waiting to happen,” are the holes in each layer that allow an error to propagate until it ultimately causes injury or death. The goal in human factors system engineering is to know all the layers of Swiss cheese and create the best defenses possible (i.e., to make the holes as small as possible). This very approach has been the centerpiece of incremental improvements in anesthesia safety.
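To make the layered-defense intuition concrete, consider a back-of-envelope sketch. The layer names and per-layer escape probabilities below are invented for illustration (real layers are neither independent nor this well quantified); the point is only that harm requires an error to slip through every layer, so shrinking any one hole multiplies through to the overall risk.

```python
# Illustrative "Swiss cheese" arithmetic: an error causes harm only if it
# slips through the hole in every layer of defense. The layer names and
# escape probabilities are invented, and real layers are rarely independent,
# so treat this as intuition, not measurement.

layers = {
    "preoperative verification": 0.10,  # P(error gets past this layer)
    "surgical site marking":     0.20,
    "imaging cross-check":       0.30,
    "time-out before incision":  0.10,
}

p_harm = 1.0
for p_escape in layers.values():
    p_harm *= p_escape

print(f"P(error penetrates all layers) = {p_harm:.4f}")  # 0.0006, ~1 in 1,700
# Halving any single hole halves the overall risk, which is why tightening
# each layer of defense "one hole at a time" pays off multiplicatively.
```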
One structured approach designed to identify all the holes in the major layers of cheese in medical systems has been described by Vincent.28,29 He classifies the major categories of factors that contribute to error as follows (a brief illustrative sketch in code follows the list):
1. Patient factors: condition, communication, availability and accuracy of test results, and other contextual factors that make a patient challenging
2. Task factors: use of an organized approach to reliable task execution, availability and use of protocols, and other aspects of task performance
3. Practitioner factors: deficits in knowledge, attention, strategy, motivation, or physical or mental health, and other failures by any individual member of the care team that undermine management of the problem space
4. Team factors: verbal/written communication, supervision and seeking help, team structure and leadership, and other failures in communication and coordination among members of the care team that degrade management of the problem space
5. Working conditions: staffing levels, skills mix and workload, availability and maintenance of equipment, administrative and managerial support, and other aspects of the work domain that undermine individual or team performance
6. Organization and management factors: financial resources, goals, policy standards, safety culture and priorities, and other factors that constrain local microsystem performance
7. Societal and political factors: economic and regulatory issues, health policy and politics, and other societal factors that set thresholds for patient safety
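One way to make such a schema operational is to encode the categories so that findings from individual case reviews can be tagged and aggregated across many incidents. The sketch below illustrates that idea; the class names, fields, and example findings are assumptions for illustration, not drawn from Vincent's publications.

```python
# Sketch: encoding Vincent's seven contributory factor categories so that
# case review findings can be tagged and aggregated across incidents.
from collections import Counter
from dataclasses import dataclass
from enum import Enum, auto

class FactorCategory(Enum):
    PATIENT = auto()
    TASK = auto()
    PRACTITIONER = auto()
    TEAM = auto()
    WORKING_CONDITIONS = auto()
    ORGANIZATION_MANAGEMENT = auto()
    SOCIETAL_POLITICAL = auto()

@dataclass
class Finding:
    category: FactorCategory
    description: str

# A hypothetical wrong-site surgery review, tagged with the schema:
review = [
    Finding(FactorCategory.PATIENT, "symptoms contralateral to lesion"),
    Finding(FactorCategory.TEAM, "site not confirmed aloud at time-out"),
    Finding(FactorCategory.WORKING_CONDITIONS, "imaging unavailable in the OR"),
]

# Aggregating tags across many such reviews shows which layers of defense
# most need tightening.
print(Counter(f.category.name for f in review))
```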
If this schema is used to structure the review of a morbidity or mortality, the review will extend beyond myopic attention to a single practitioner. Furthermore, the array of identified factors that undermine safety can then be countered systematically by tightening each layer of defense, one hole at a time. I have adapted active error management as described by Reason and others into a set of steps for making incremental systemic improvements that increase safety and reliability. In this adaptation, a cycle of active error management consists of (1) surveillance to identify potential threats, (2) investigation of all contributory factors, (3) prioritization of failure modes, (4) development of countermeasures to eliminate or mitigate individual threats, and (5) broad implementation of validated countermeasures (Fig. 1-2).
FIGURE 1-2 Sequence of steps for identifying vulnerabilities and then implementing corrective measures.
(Copyright Blike 2002.)
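Step 3, prioritization of failure modes, is often operationalized with a failure mode and effects analysis (FMEA)-style risk priority number: the product of frequency, severity, and the likelihood that the failure escapes detection. The sketch below illustrates that generic technique; it is not this chapter's own scoring scheme, and the failure modes and scores are invented.

```python
# FMEA-style prioritization of failure modes (step 3 of the cycle above):
# risk priority number (RPN) = frequency x severity x escape-from-detection,
# each scored 1-10. All failure modes and scores below are invented examples.

failure_modes = [
    # (description, frequency, severity, escapes detection)
    ("imaging mislabeled left/right", 2, 10, 7),
    ("consent lists wrong level",     3,  9, 5),
    ("patient self-marks wrong side", 1,  9, 6),
    ("schedule order swapped",        4,  8, 4),
]

def rpn(freq: int, sev: int, escape: int) -> int:
    return freq * sev * escape

for desc, f, s, e in sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True):
    print(f"RPN {rpn(f, s, e):3d}  {desc}")
# Countermeasures (step 4) are then developed for the highest-RPN modes first.
```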
A comprehensive review of the science of human factors and patient safety is beyond the scope of this chapter; neurosurgical patient safety, including ethical issues and the impact of legal liability, has been reviewed elsewhere.30 Aviation and nuclear power took over four decades to achieve the cultural shift that supports a robust system of countermeasures and defenses against human error. It is practical, however, to use an example to illustrate some of the human factors principles introduced above. Consider the following case example as a window into the future of managing the most common preventable adverse events associated with surgery (see Table 1-1).
Example of Medical Error: “Wrong-Sided Brain Surgery”
Wrong-site surgery is an example of an adverse event that seems as though it should “never happen.” However, given over 40 million surgical procedures annually, we should not be surprised when it occurs. The news media has diligently reported wrong-site surgical errors, especially those involving neurosurgery. Headlines such as “Brain Surgery Was Done on the Wrong Side, Reports Say” (New York Daily News, 2001) and “Doctor Who Operated on the Wrong Side of Brain Under Scrutiny” (New York Times, 2000) are inevitable when wrong-site brain surgery occurs.31–33 Predictably, these are not isolated stories. A recent report from the state of Minnesota found 13 instances of wrong-site surgery in a single year in which approximately 340,000 surgeries were performed.34 No hospital appeared to be immune to what looks on the surface like a blatant mistake. Indeed, an incomplete registry collecting data on wrong-site surgery since 2001 now includes over 150 cases. Of the 126 instances that have been reviewed, 41% relate to orthopedic surgery, 20% to general surgery, 14% to neurosurgery, 11% to urologic surgery, and the remainder to the other surgical specialties.35 In a recent national survey,36 the incidence of wrong-sided surgery for cervical discectomies, craniotomies, and lumbar surgery was 6.8, 2.2, and 4.5 per 10,000 operations, respectively.
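To put these figures in perspective, the rates convert readily to odds per operation. The inputs below come from the reports just cited; the calculation itself is simple rate arithmetic offered only for illustration.

```python
# Rate arithmetic on the wrong-site surgery figures cited above. The inputs
# come from the cited reports; the conversions are offered for perspective.

mn_events, mn_surgeries = 13, 340_000  # Minnesota, one year
per_10k = mn_events / mn_surgeries * 10_000
print(f"Minnesota overall: {per_10k:.2f} per 10,000 "
      f"(about 1 in {mn_surgeries // mn_events:,} operations)")

# National survey rates per 10,000 operations for selected procedures.
survey_rates = {"cervical discectomy": 6.8, "craniotomy": 2.2, "lumbar surgery": 4.5}
for procedure, rate in survey_rates.items():
    print(f"{procedure}: {rate} per 10,000 = about 1 in {round(10_000 / rate):,}")
```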
Sensational front-page coverage fails to identify the deeper “second story” behind these failures and how future failures can be prevented through the creation of safer systems.37 In this example, we provide an analysis of contributory factors associated with wrong-site surgery to reveal the myriad holes in the defensive layers of “cheese.” These holes will need to be eliminated to truly reduce the frequency of this already rare event and to create more reliable care for our patients.
Contributory Factor Analysis
Patient Factors Associated with Wrong-Site Surgery
Patient Condition (Medical Factors That If Not Known Increase the Risk for Complications)
Patients and their surgical conditions contribute to error, and neurosurgical patients are at higher than average risk for wrong-site surgery. When asked on the morning of surgery what operation they are having, only 70% of patients can correctly state and point to the location of the planned surgical intervention.38 Patients are a further source of misinformation about surgical intent when the pathology and symptoms are contralateral to the site of surgery, a common situation in neurosurgical cases. Patients scheduled for brain surgery and carotid surgery often confuse the side of the surgery with the side of the symptoms. Patients with educational or language barriers or cognitive deficits are more vulnerable still, since they may be unable to accurately communicate their surgical condition or the planned surgery.
Certain operations in the neurosurgical population pose a higher risk for wrong-site surgery. While left–right symmetry and sidedness represent one high-risk class of operations, multilevel spinal procedures are another.39
Communication (Factors That Undermine the Patient’s Ability to Be a Source of Information Regarding Conditions That Increase the Risk for Complications and Need to Be Managed)
Obviously, patients with language barriers or cognitive deficits may be unable to communicate their understanding of the surgical plan, which increases the chance of patient identification errors that lead to wrong-site surgery. In a busy practice, patients requiring the same operation may be scheduled in the same operating room (OR); it is not uncommon to perform five carotid endarterectomies in a single day.40 When one patient is delayed and the order is switched to keep the OR moving, this vulnerability is exposed. Patients with common names are especially at risk. A 500-bed hospital will have approximately 1,000,000 patients in its medical record system. About 10% of patients will share a first and last name with another patient, and 5% will share a first, middle, and last name with one other individual. Only by cross-checking the name against one other patient identifier (either birth date or medical record number) can wrong-patient errors be trapped.41
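A minimal sketch of that two-identifier cross-check is shown below, assuming a simple record structure; the field names and matching rules are illustrative, not taken from any particular registration system.

```python
# Sketch of the two-identifier cross-check described above: a name alone is
# never sufficient; identity is confirmed only when the name AND at least one
# second identifier (birth date or medical record number) match.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PatientRecord:
    name: str
    birth_date: date
    mrn: str  # medical record number

def identity_confirmed(scheduled: PatientRecord, stated_name: str,
                       birth_date: Optional[date] = None,
                       mrn: Optional[str] = None) -> bool:
    """Confirm identity only with the name plus a matching second identifier."""
    if scheduled.name.lower() != stated_name.lower():
        return False
    checks = []
    if birth_date is not None:
        checks.append(scheduled.birth_date == birth_date)
    if mrn is not None:
        checks.append(scheduled.mrn == mrn)
    # No second identifier supplied -> refuse to confirm; names collide often.
    return bool(checks) and all(checks)

# Two patients who share a name are distinguished by birth date.
scheduled = PatientRecord("John Smith", date(1950, 3, 1), "MRN-00123")
print(identity_confirmed(scheduled, "John Smith"))                    # False
print(identity_confirmed(scheduled, "John Smith", date(1950, 3, 1)))  # True
print(identity_confirmed(scheduled, "John Smith", date(1962, 7, 9)))  # False
```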
Another patient communication problem that increases the risk for wrong-site surgery is patients marking themselves. Marking the skin on the side of the proposed surgery with a pen is now common practice for the surgical team and part of the Universal Protocol. However, some patients have placed an X on the site not to be operated on, and the surgical team has then confused this patient mark with its own, in which an X specifies the side to be operated on. Patients are often not given information about what to expect and will seek outside information. For example, a neurosurgeon on a popular daytime talk show discussing medical mistakes stated incorrectly that patients should mark themselves with an X on the side that should not be operated on.42 This misinformation reached millions of viewers and directly violated the marking recommendations of the Joint Commission on Accreditation of Healthcare Organizations (endorsed by the American College of Surgeons, the American Society of Anesthesiologists, and the Association of Operating Room Nurses). Patients who watched this show and took the physician’s advice are now at higher than average risk for a wrong-sided surgical error.
Availability and Accuracy of Test Results (Factors That Undermine Awareness of Conditions That Increase the Risk for Complications and Need to Be Managed)
Radiologic imaging studies can serve as independent markers of surgical pathology and anatomy. However, films and/or reports are not always available: films may be lost or misplaced, or the studies may have been performed at another facility. New digital technology has created electronic imaging systems that virtually eliminate lost studies. However, space constraints have led many hospitals to remove old view boxes to make room for digital radiology monitors, and when patients bring films from an outside hospital, the absence of view boxes prevents effective use of those studies. Even when available, x-rays and diagnostic studies are not labeled with 100% reliability. Imaging studies have been mislabeled and/or oriented backward, leading to wrong-sided surgery.43