
Chapter 29 Error, Man and Machine

Medicine is classed as a high-reliability organization (HRO) in common with aviation, aeronautics and the nuclear and petrochemical industries. Mistakes are costly in human and financial terms.

Over 3000 years ago, when Icarus flew too close to the sun, the wax on his wings melted and he crashed. The accident was the first recorded case of pilot error. The next person to try flying kept the wings but declined the wax. This canny individual had learnt from someone else's mistake. 'To err is human': we accept that error is, and always has been, ubiquitous. Error is all around us; it is described in our literature and embedded in our lives.

Much has been written recently about the lessons that medicine can take from aviation,1–3 but there is resistance to transferring such ideas from other industries to healthcare. The usual reason given is that patients and their diseases are too complex to adapt to checklists and standard operating procedures. However, it is precisely because of this complexity that tools from aviation such as these can be life-saving. In acknowledgement of this, we already refer to algorithms such as those from the American Heart Association or the European Resuscitation Council (ERC) to guide best practice when resuscitating a patient.4,5

A willingness to learn from other HROs can reduce mistakes. It is arguable that by standardizing what repeatedly must be done correctly, we are freed to use skill and creativity to address what is unique.

For the anaesthetist, some understanding of the approach to error management in an analogous system where safety is similarly critical, such as the aviation industry, should both instruct and enhance awareness.

Human factors in aviation

Today, almost 80% of aircraft accidents are due to human performance failures, and the introduction of Human Factors considerations into the management of safety-critical systems has stemmed from the recognition of human error as the primary cause of a significant number of catastrophic events worldwide. In the late 1970s, a series of aircraft accidents occurred where, for the first time, the investigation established that there was nothing wrong with the aircraft and that the causal factor was poor decision-making and lack of situational awareness. The nuclear industry (Three Mile Island, Chernobyl) and the petrochemical and chemical industries (Piper Alpha, Bhopal) suffered similar accidents, where it became evident that the same issues, i.e. problem solving, prioritizing, decision-making, fatigue and reduced vigilance, were directly responsible for the disasters.

It was clear, however, that the two-word verdict, 'human error', did little to provide insight into the reasons why people erred, or into the environmental and systems influences that made the error inevitable. There followed a new approach to accident investigation that aimed to understand the previously under-estimated influence of the cognitive and physiological state of the individuals involved, as well as the cultural and organizational environment in which the event occurred. The objective of 'Human Factors' in aviation, as elsewhere, is to increase performance and reduce error by understanding the personal, cognitive and organizational context in which we perform our tasks.

Human Factors brings into focus the fact that people are active participants in whatever they are doing, and that they 'do' whatever makes sense at the time, based on the circumstances surrounding them. Individuals bring their own perspective, their own level of interest and their own state of wellbeing on the day. In many instances, they also have an emotional investment in the outcome in terms of professional pride. In other words, whereas it was once assumed that effective decision-making was the product of mechanistically making the correct choice in a rational, predictable manner every time, current understanding is that even the most superior decision-makers are vulnerable to the weaknesses of the systems in which they operate. Error, therefore, is a product of the context in which it occurs.

A commonly used diagram, which is useful in forming a basic understanding of the man/machine interface and human factors, is the SHEL model, first described by Edwards in 1972 and later refined by Hawkins in 1975 (Fig. 29.1). Each block represents one of the five components of the relationship: software (procedures, checklists and other rules), hardware (machines and equipment), environment and liveware (people, who appear twice), with a liveware (human) block always occupying the central position. The blocks are, however, irregular in shape and must be carefully matched together in order to form a perfect fit. A mismatch highlights the potential for error.

For example, a mismatch at the liveware-hardware interface might be an anaesthetic machine whose controls are poorly laid out for its user; a mismatch at the liveware-software interface might be an ambiguous protocol or checklist; and a mismatch at the liveware-liveware interface might be a breakdown in communication within the theatre team.

Understanding error

Aircraft accidents are seldom due to one single catastrophic failure. Investigations invariably uncover a chain of events: a trail of errors, where each event was inexorably linked to the next and each event was a vital contributor to the outcome. The advantage of this truth is that early error detection and containment can prevent links from ever forming a chain. Errors can be defined as lapses, slips and mistakes, or errors of omission and errors of commission (see Appendix 1 for definitions of these terms in the context of the study of error). Some errors occur as a direct result of fatigue, where maintaining vigilance becomes increasingly difficult. Conversely, others are a product of condensed time frames, where items are simply missed. Whatever its source, it must be recognized that error is forever present in both operational and non-operational life. High error rates tell a story: they are indicative of a system that either gives rise to errors or fails to prevent them. In other words, when errors are identified, it should be appreciated that they are the symptoms and not the disease.

The ‘Swiss Cheese analogy’ is often used to explain how a system that is full of holes (ubiquitous errors, systems failures, etc.) can appear solid for most of the time. Thankfully, it is only rarely that the holes line up or cluster (e.g. adverse environmental circumstance + error + poor equipment design + fatigue) such that catastrophic failure occurs (Fig. 29.2). Error management aims to reduce the total number of holes, such that the likelihood of clustering by chance is reduced (see below).


Figure 29.2 The Swiss Cheese Model, originally by James Reason,6 demonstrates the multifactorial nature of a sample 'accident' and can be used to explain how latent conditions for an incident may lie dormant for a long time before combining with other failures (or, viewed differently, breaches in successive layers of defences) to lead to catastrophe. As an example, the organizational failure may be the expectation that an inadequately trained member of staff will perform a given task.
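The arithmetic behind this claim can be made concrete. The sketch below is purely illustrative and not taken from the chapter: it assumes, simplistically, that the defensive layers fail independently, so the chance of every hole lining up is the product of the individual layer failure rates (the function name p_catastrophe and the 5% and 2.5% figures are invented for the example).

```python
from math import prod

def p_catastrophe(layer_hole_probs):
    """Probability that the holes in every defensive layer line up,
    assuming (simplistically) that the layers fail independently."""
    return prod(layer_hole_probs)

# Four layers of defence, each breached in 5% of cases.
print(p_catastrophe([0.05] * 4))   # 6.25e-06, i.e. roughly 6 in a million cases
# Shrinking each layer's holes to 2.5% cuts the combined risk about 16-fold.
print(p_catastrophe([0.025] * 4))  # ~3.9e-07
```

In reality the layers are rarely independent (fatigue, for instance, degrades several defences at once), which is why error management concentrates on reducing the holes themselves rather than trusting the product to stay small.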

Root causes of adverse events

Forty years ago, getting on board a commercial aircraft was a riskier proposition than it is today, and arriving safely at your intended destination was not guaranteed. As the airlines focused on improving their safety record, four root causes of these disasters emerged: the catalyst event, system faults, loss of situational awareness and human error. Each is considered in turn below.

The catalyst event may immediately precede the incident or may seem unrelated to it: for example, a decision to manufacture a component (e.g. the jackscrew assembly) of an aircraft from two dissimilar metals that will not wear at the same rate as each other (see below).

There are many examples of system faults that cause or contribute to errors and adverse events: scheduling systems that do not allow adequate rest for personnel, or computer systems that flag unimportant information but fail to alert the physician to an important drug interaction.

Once the catalyst event has occurred and the system faults have contributed to, rather than mitigated, the error, the patient or passengers must rely on the human beings within the system to prevent harm. The greatest tool in aviation and in medicine is situational awareness: the ability to keep the whole picture in focus and not become so lost in detail that one forgets to fly the plane, or forgets to prioritize ventilating the patient over achieving tracheal intubation.7

The final cause of an adverse event is human error. It is almost never the only cause, and when we focus on 'who is to blame' we miss our opportunity to 'find a system to fix'.

An example from aviation

On 31 January 2000, an Alaska Airlines MD-80 departed Puerto Vallarta in Mexico for Seattle, Washington, with a scheduled stop in San Francisco. Sixteen minutes into the flight, the autopilot tripped off, indicating a malfunction of the autopilot or flight control system. Investigators later concluded this was due to in-flight failure of the horizontal stabilizer jackscrew and Acme nut thread assembly.

The Captain was under pressure from the airline dispatcher to return the plane to the USA, so he continued, sometimes exerting as much as 150 pounds of force on the jackscrew assembly in order to maintain control of the aircraft. What the crew did not know was that when the MD-80 was designed, dissimilar metals were used in the construction of the jackscrew assembly. Because this assembly experienced heavy flight loads, the softer metal wore at a greater rate than the harder metal, and so the assembly became loose (Catalyst event).

Knowing that this could lead to catastrophic failures, the manufacturer had conducted extensive tests and issued specific procedures for the inspection and lubrication of the jackscrew assembly. Other airlines, having complied with the manufacturer's approved procedures, never experienced such failures. However, because this process was difficult and time-consuming, the airline had received permission from the principal operations inspector assigned to it by the Federal Aviation Administration (FAA) to inspect at intervals of 2300 hours instead of the 700 hours recommended by the manufacturer. When the airplane was inspected by maintenance a few days earlier, it was found to be 'outside of tolerances', but this finding was overridden by a line supervisor and the aircraft was placed back into service (System faults).

Within sight of the airport, and after recovering from a dive, the crew discussed the difficulties with maintenance. Unaware of the gravity of the situation, and ignoring the advice of the first officer, who was urging him to land immediately, the captain elected to do 'a little troubleshooting' (Loss of situational awareness and human error). Two hours and 42 minutes into the flight, they lost total control of the aircraft, flying it upside down for 70 seconds before finally plunging into the Pacific Ocean off the coast of California.