
Chapter 29 Error, Man and Machine

Medicine is classed as a high-reliability organization (HRO) in common with aviation, aeronautics and the nuclear and petrochemical industries. Mistakes are costly in human and financial terms.

Over 3000 years ago, when Icarus flew too close to the sun, the wax on his wings melted and he crashed. The accident was the first recorded case of pilot error. The next person to try flying kept the wings but declined the wax. This canny individual had learnt from someone else's mistake. 'To err is human': we accept that error is, and always has been, ubiquitous. Error is all around us; it is described in our literature and embedded in our lives.

Much has been written recently about the lessons that medicine can take from aviation,1–3 but there is resistance to the idea of transferring those ideas from other industries to healthcare. The usual reason given is that patients and their diseases are too complex to adapt to checklists and standard operating procedures. However, it is precisely because of this complexity that tools from aviation such as these can be life-saving. In acknowledgement of this, we already refer to algorithms such as those from the American Heart Association or European Resuscitation Council (ERC) to guide best practice when resuscitating a patient.4,5

A willingness to learn from other HROs can reduce mistakes. It is arguable that by standardizing what repeatedly must be done correctly, we are freed to use skill and creativity to address what is unique.

For the anaesthetist, some understanding of the approach to error management in an analogous system where safety is similarly critical, such as in the aviation industry, should both instruct and enhance awareness.

Human factors in aviation

Today, almost 80% of aircraft accidents are attributed to human performance failures, and the introduction of Human Factors thinking into the management of safety-critical systems stems from the recognition of human error as the primary cause of a significant number of catastrophic events worldwide. In the late 1970s, a series of aircraft accidents occurred where, for the first time, the investigations established that there was nothing wrong with the aircraft and that the causal factors were poor decision-making and a lack of situational awareness. The nuclear industry (Three Mile Island, Chernobyl) and the chemical and petrochemical industries (Bhopal, Piper Alpha) suffered similar accidents, where it became evident that the same issues, i.e. problem solving, prioritizing, decision-making, fatigue and reduced vigilance, were directly responsible for the disaster. It was clear, however, that the two-word verdict, 'human error', did little to provide insight into the reasons why people erred, or into the environmental and systems influences that made the error inevitable. There followed a new approach to accident investigation that aimed to understand the previously underestimated influence of the cognitive and physiological state of the individuals involved, as well as the cultural and organizational environment in which the event occurred. The objective of 'Human Factors' in aviation, as elsewhere, is to increase performance and reduce error by understanding the personal, cognitive and organizational context in which we perform our tasks.

Human Factors brings into focus the fact that people are active participants in whatever they are doing, that they ‘do’ whatever makes sense at the time – based on the circumstances surrounding them. Individuals bring their own perspective, their own level of interest and their own state of wellbeing on the day. In many instances, they also have an emotional investment in the outcome in terms of professional pride. In other words, whereas it was once assumed that effective decision-making was the product of mechanistically making the correct choice in a rational, predictable manner every time, current understanding is that even the most superior decision-makers are vulnerable to the weaknesses of the systems in which they operate. Error, therefore, is a product of the context in which it occurs.

A commonly used diagram, which is useful in forming a basic understanding of the man/machine interface and human factors, is the SHEL model, first described by Edwards in 1972 and later refined by Hawkins in 1975 (Fig. 29.1). Each block represents one of the five components in the relationship, with liveware (human) always being in the central position. The blocks are, however, irregular in shape and must be carefully matched together in order to form a perfect fit. A mismatch highlights the potential for error.

For example, a mismatch at the liveware–hardware interface may arise from poorly designed controls or displays, a mismatch at the liveware–software interface from confusing procedures or documentation, and a mismatch at the liveware–environment interface from noise, heat or fatigue.

Understanding error

Aircraft accidents are seldom due to one single catastrophic failure. Investigations invariably uncover a chain of events: a trail of errors, where each event was inexorably linked to the next and each was a vital contributor to the outcome. The advantage of this truth is that early error detection and containment can prevent the links from ever forming a chain. Errors can be classified as slips, lapses and mistakes, or as errors of omission and errors of commission (see Appendix 1 for definitions of these terms in the context of the study of error). Some errors occur as a direct result of fatigue, where maintaining vigilance becomes increasingly difficult; others are a product of condensed time frames, where items are simply missed. Whatever its source, it must be recognized that error is forever present in both operational and non-operational life. High error rates tell a story; they are indicative of a system that either gives rise to them or fails to prevent them. In other words, when errors are identified, it should be appreciated that they are the symptoms and not the disease.

The ‘Swiss Cheese analogy’ is often used to explain how a system that is full of holes (ubiquitous errors, systems failures, etc.) can appear solid for most of the time. Thankfully, it is only rarely that the holes line up or cluster (e.g. adverse environmental circumstance + error + poor equipment design + fatigue) such that catastrophic failure occurs (Fig. 29.2). Error management aims to reduce the total number of holes, such that the likelihood of clustering by chance is reduced (see below).
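
To make the analogy concrete, the following toy calculation is a minimal sketch of this idea; the number of layers and the 'hole' probabilities are purely illustrative assumptions, not measured values. It shows why reducing the holes in each defensive layer sharply reduces the chance that a hazard passes through them all.

```python
# Illustrative only: a toy version of the Swiss Cheese idea.
# Assume each defensive layer independently fails to stop a hazard
# ("has a hole in the right place") with some small probability.
# The chance of catastrophe is then the product across the layers.

def p_catastrophe(hole_probabilities):
    """Probability that a hazard passes through every layer."""
    p = 1.0
    for p_hole in hole_probabilities:
        p *= p_hole
    return p

# Four layers (e.g. organization, supervision, preconditions, the act itself);
# the probabilities below are assumptions for illustration.
before = [0.10, 0.10, 0.10, 0.10]   # before error management
after  = [0.05, 0.05, 0.05, 0.05]   # after halving the holes in each layer

print(f"Before error management: {p_catastrophe(before):.6f}")  # 0.000100
print(f"After error management:  {p_catastrophe(after):.6f}")   # 0.000006
```

Halving the holes in each of four layers reduces the chance of alignment sixteen-fold, which is the essence of error management reducing the likelihood of clustering by chance.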

Figure 29.2 The Swiss Cheese Model, originally described by James Reason,6 demonstrates the multifactorial nature of an 'accident' and can be used to explain how latent conditions for an incident may lie dormant for a long time before combining with other failures (or, viewed differently, breaches in successive layers of defences) to lead to catastrophe. As an example, the organizational failure may be the expectation that an inadequately trained member of staff perform a given task.

Root causes of adverse events

Forty years ago, boarding a commercial aircraft was a riskier proposition than it is today and arriving safely at your intended destination was not guaranteed. As the airlines focused on improving their safety record, four root causes of these disasters emerged: the catalyst event, system faults, loss of situational awareness and human error. Each is considered below.

The catalyst event may immediately precede the incident or may seem unrelated to it: for example, a decision to manufacture a component (e.g. a jackscrew assembly) on an aircraft from two dissimilar metals that do not wear at the same rate as each other (see below).

There are many examples of system faults that cause or contribute to errors and adverse events: scheduling systems that do not allow adequate rest for personnel, or computer systems that flag unimportant information but fail to alert the physician to an important drug interaction.

Once the catalyst event has occurred and the system faults have contributed to, rather than mitigated the error, the patient or passengers must now rely on human beings within the system to prevent harm. The greatest tool in aviation and in medicine is situational awareness: the ability to keep the whole picture in focus, to not get so lost in detail that one forgets to fly the plane or to prioritize patient ventilation over achieving tracheal intubation.7

The final cause of an adverse event is human error. It is almost never the only cause and when we focus on ‘who is to blame’ we miss our opportunity to ‘find a system to fix’.

An example from aviation

On 31 January 2000, an Alaska Airlines MD-80 departed Puerto Vallarta in Mexico for Seattle, Washington, with a scheduled stop in San Francisco. Sixteen minutes into the flight the autopilot tripped off, indicating a malfunction of the autopilot or flight control system. Investigators later concluded that this was due to in-flight failure of the horizontal stabilizer jackscrew and acme nut thread assembly.

The captain was under pressure from the airline dispatcher to return the plane to the USA, so he continued, at times exerting as much as 150 pounds of force on the controls in order to maintain control of the aircraft. What the crew did not know was that, when the MD-80 was designed, dissimilar metals had been used in the construction of the jackscrew assembly. Because this assembly experienced heavy flight loads, the softer metal wore at a greater rate than the harder metal, and so the assembly became loose (Catalyst event).

Knowing that this could lead to catastrophic failure, the manufacturer had conducted extensive tests and issued specific procedures for inspection and lubrication of the jackscrew assembly. Other airlines, having complied with the manufacturer's approved procedures, never experienced such failures. However, because this process was difficult and time-consuming, the airline had received permission from the principal operations inspector assigned to it by the Federal Aviation Administration (FAA) to inspect at intervals of 2300 hours, instead of the 700 hours recommended by the manufacturer. When the airplane was inspected by maintenance a few days earlier, it was found to be 'outside of tolerances', but this finding was overridden by a line supervisor and the aircraft was placed back into service (System faults).

Within sight of the airport, and after recovering from a dive, the crew discussed the difficulties with maintenance. Unaware of the gravity of the situation, and ignoring the advice of the first officer, who was urging him to land immediately, the captain elected to do 'a little troubleshooting' (Loss of situational awareness and human error). Two hours and 42 minutes into the flight they lost control of the aircraft completely, flying it upside down for 70 seconds before it finally plunged into the Pacific Ocean off the coast of California.

Decision-making

Decision-makers are critically dependent on the quality of information they receive, be it verbal, computerized, or in the form of checklists or procedures. If the information is wrong, or delivered late, the decision and subsequent action will be incorrect. Decision-making is also dependent upon training, experience and situational awareness (see below).

Psychological research has shown that the human brain is able to make only a single decision at any one instant. The individual can, therefore, attend to only one process at a time and, although he can switch from one process to another extremely rapidly, the danger of preoccupation is obvious. Computers, being purely rational, are not subject to the time pressure and emotion that influence human decision-making. The attraction of computers to assist in collating information and decision-making is, therefore, evident.

Advancing technology has, in many instances, brought increasing complexity in the machinery, whilst at the same time seductively simplifying what is seen on screen. For instance, in older aircraft, the flight crew are surrounded by a vast number of dials, bars and indicators that provide a continuous source of raw data, which they interpret and formulate into a mental model of the aircraft’s position and status. However, notwithstanding the banks of visible instrumentation, there is relatively little going on ‘behind’ and the logic of the systems is fairly transparent: it is quite manageable for the human brain to visualize, to interpret and to understand. Any discrepancies, either between instruments or between the instruments and the pilots’ own mental model, immediately trigger investigation.

Conversely, one of the most striking features of the highly automated flight deck is the tidiness and absence of dials. The logic of the systems is now hidden deep in the computers and easy access to the ‘visualization’ or the ‘conceptualization’ of the big picture is lost. This construct is becoming increasingly pertinent in modern anaesthetic workstations with their integral monitoring and automated startup checklists. This complexity is potentially very fragile: a small problem could collapse a whole system and lure the pilot (or anaesthetist) down deceptive and unnecessary computerized pathways. Equally, complexity can also provide safeguards that we simply never see. Either way, system action has become invisible. Software programmes are opaque and decision-making has changed because of it. Perhaps the irony of our relationship with technology is that the more advanced the systems, the more we need the human being to solve all the problems we unwittingly designed into them in the first place.

Situational awareness

Effective decision-making is based on judgement, which in turn is based on good-quality information and accurate situational awareness. In aviation, decision-making calls for a three-dimensional appreciation of a hostile environment traversed at speeds of up to 500 miles per hour, with the added pressure that the consequence of error can be catastrophic. In order to minimize the potential for error, human factors training has provided flight crews with a basic understanding of the significance of mental models and perception, and of where and how error is most likely to occur.

The term ‘situational awareness’ describes a dynamic state of (cognitive) awareness that allows for the integration of information and the use of it to anticipate changes in the current environment. Every individual has his or her own ‘picture’, or perception, of the environment. This mental model is informed by culture, training and previous experience. But no matter how familiar the territory, an individual’s perception of a situation may not be the right one and, if left unchallenged, could lead to faulty decisions and serious consequences. Crews are, therefore, taught to confer and cross-check their mental models before making assumptions or decisions that give rise to mistakes. They are thus enhancing their situational awareness and using this knowledge to make predictive judgements on the progress of the flight (Fig 29.3).

In a dynamic, fluid environment, where there is little time to go back and correct errors, it has long been appreciated that it is better to spend time making accurate decisions than to be caught in a reactive mode of trying to correct error. Errors inevitably give rise to more errors, a process that can drag its victim down a branching corridor of mistakes, far removed from the original task at hand.

Fatigue, vigilance and arousal

Decision-making ability relies on vigilance, so that pertinent information may be detected, and both depend on the individual's level of arousal.

Medicine is replete with examples where fatigue has contributed to or caused an error. A study of interns (first-year trainees) confirmed that there was a higher rate of needlestick injuries in those who were fatigued.8 The Australian Incident Monitoring Study (AIMS) showed fatigue to be a 'factor contributing to the incident' in 2.7% of voluntary, anonymous, self-reported incidents, and words such as 'haste, inattention, failure to check' often appeared in these reports.9 Fatigue-positive reports most commonly involved pharmacological incidents such as syringe swaps and over- or under-dosage. A study of anaesthesia trainees at Stanford confirmed that chronic fatigue can be as harmful as acute fatigue.10

Vigilance, from the Latin vigilantia (watchfulness), is defined in the dictionary as 'being keenly alert to danger'. In 1943, research by the Royal Air Force showed that vigilance is poor when it requires continuous monitoring for brief, low-intensity and infrequently occurring events over long periods. This is illustrated in Fig. 29.4, which shows a rapid fall-off in vigilance after a period as short as half an hour. In acknowledgement of this, modern anaesthesia machines and monitors allow alarm parameters to be set for all measured variables; this level of vigilance monitoring was unheard of 25 years ago (see Chapter 4, The Anaesthetic Workstation).
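
As an illustration of how alarm limits delegate part of this continuous monitoring task to the machine, the sketch below checks readings against preset limits; it is not based on any real monitor's software, and the variable names and limits are assumptions chosen for the example.

```python
# A minimal sketch of alarm-limit monitoring; limits and variable names are
# illustrative assumptions only, not clinical recommendations.

ALARM_LIMITS = {
    "SpO2 (%)":         (90, 100),
    "Heart rate (bpm)": (45, 140),
    "EtCO2 (kPa)":      (4.0, 6.5),
}

def check_alarms(readings):
    """Return alarm messages for any reading outside its preset limits."""
    alarms = []
    for variable, value in readings.items():
        low, high = ALARM_LIMITS[variable]
        if value < low:
            alarms.append(f"LOW {variable}: {value} (limit {low})")
        elif value > high:
            alarms.append(f"HIGH {variable}: {value} (limit {high})")
    return alarms

# The machine, not the anaesthetist's unaided vigilance, flags the outliers.
print(check_alarms({"SpO2 (%)": 87, "Heart rate (bpm)": 120, "EtCO2 (kPa)": 7.1}))
```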

Arousal is the level of 'wakefulness'. For any task there is a level of arousal at which one performs most efficiently, as shown in Fig. 29.5. Surprisingly, this optimal level decreases as the difficulty of the task increases; as such, overarousal often occurs in emergency situations, when difficult tasks may need to be carried out rapidly. Underarousal for a particular task slows decision-making, makes it less accurate and also reduces vigilance; it occurs with boredom and sleep deprivation. Table 29.1 shows some common stressors that adversely affect performance.
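
The inverted-U relationship can be sketched numerically; the Gaussian shape and the parameters below are assumptions chosen only to reproduce the qualitative features of Fig. 29.5, including the fall in the optimal arousal level as task difficulty rises.

```python
# Illustrative only: an assumed inverted-U between arousal and performance,
# with the optimum shifting to lower arousal for more difficult tasks.
import math

def performance(arousal, difficulty):
    """Toy model: arousal and difficulty are both scaled from 0 to 1."""
    optimum = 0.8 - 0.4 * difficulty     # harder task -> lower optimal arousal
    width = 0.3
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for difficulty in (0.2, 0.8):            # an easy task and a hard task
    best = max((performance(a / 100, difficulty), a / 100) for a in range(101))
    print(f"Task difficulty {difficulty}: optimal arousal about {best[1]:.2f}")
```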

Communication styles

In 2000, Sexton and colleagues reported on 'error, stress and teamwork in medicine and aviation'.2 They found that, at that time, although most intensive care medical staff and pilots were in favour of a flat hierarchy, only 68% of surgeons favoured this. A flat hierarchy is not the answer to all of our communication difficulties in the operating theatre; decisions must still be made, and at times each of us needs to assume the role of team leader. It is critical, however, that the anaesthetist is open to receiving input from others, including trainees and those whom he may consider to be in a subordinate role.

The Shiva factor

As co-authors with perspectives from both the commercial airline industry and paediatric anaesthesia, we have the rare facility to see correlations between medicine and aviation. Here we consider the personality types and behaviour patterns that may be error prone.

We believe that there are four identifiable steps which humans take that lead to adverse events. These we have collectively termed the 'Shiva factor', named after Shiva of the Hindu Trimurti, who can be both the preserver of life and the destroyer.1

The four steps are:

The 'Shiva factor' in action was vividly displayed on 13 January 1982 at Washington DC National Airport. An unusually severe snowstorm had closed the airport, but by mid-afternoon it had reopened and several aircraft had departed safely. An Air Florida Boeing 737-200 (Palm 90) then crashed into the 14th Street Bridge and the Potomac River with the loss of 78 lives. The post-crash investigation revealed that the flight crew had used inappropriate de-icing procedures. The captain, possibly unaware of the proper procedures (he was from a Florida-based airline with little experience of severe winter weather; lack of proficiency), 'postured' in an attempt to mitigate his own fears and concerns as the crisis developed. His tone of voice on the cockpit voice recorder did not convey appropriate concern about the prevailing conditions (Shiva factor 1) and, when his first officer questioned him, he was quick to silence him, attempting to appear as if he had everything under control (Shiva factor 3).

In an attempt to push back from the gate, the captain used reverse thrust to aid the tug; this was neither a wise nor an approved procedure for the existing conditions. The snow, which was blown forward, blocked the pitot probes at the intake of the engines. The pressures from this system are used to generate an engine pressure ratio (EPR), which pilots use to confirm their power settings. The flight crew failed to turn on the engine anti-ice system, which would have prevented the falsely high reading. All the other indications showed that the EPR gauge was wrong. The first officer brought this to the attention of the captain, who has the sole authority to reject the take-off. Initially he was ignored (Shiva factor 3); when he again questioned the accuracy of the indications and the captain's assessment, he was overruled (Shiva factor 2, Shiva factor 4). They continued the take-off at a reduced power setting, because the EPR gauge falsely indicated that they had all the power that was available. Even when the stall warning system issued the appropriate warnings (no failure of technology), they failed to push the thrust levers fully forward (failure of proficiency and judgement). That simple action, taken at the right time, might have averted the disaster.

We have all seen colleagues who, unfortunately, have exhibited the Shiva factors in the operating theatre.

Volant diagram

This reverse Volant diagram (Fig. 29.6) illustrates a global view of a situation. Red indicates a critical situation; yellow/amber indicates a situation that is not perfect but may be good enough to allow the team to stabilize the patient and to think about how best to resolve the problem; green indicates an optimum situation with all systems functioning as intended. If a catastrophic (red) situation arises, our tendency is to want to go back to green quickly and to hope that no one noticed. An example of this in practice is the unanticipated difficult airway. Faced with a patient who is difficult to intubate, our tendency is to want to accomplish that task, especially if we did not expect it to be difficult. We want to be back in the green and we hope no one will notice (Shiva factor 1). However, repeated attempts to intubate the patient may worsen the situation, resulting in the feared combination of 'cannot intubate, cannot ventilate'. Guidelines from the American Society of Anesthesiologists and the Difficult Airway Society in the UK11,12 do not recommend repeated intubation attempts, and yet anaesthetists still tend to persist along this path (Shiva factor 4). A much safer path is to return to mask ventilation or to place a laryngeal mask airway: an amber/yellow situation. This buys thinking time, so that additional technology may be brought in (difficult airway equipment), additional proficiency may be added (more/different personnel), guidelines can be referred to (standard operating procedure) and the judgement of the individual trying to intubate the patient can be enhanced by using the collective wisdom of those around him.

Error management

Helmreich defines error management as ‘the process of correcting an error before it becomes consequential to safety’.13

One of the principal methods of error avoidance/containment enshrined in flight safety is the use of ‘briefing’ and an understanding of its role in threat and error management. Every critical aspect of flight, and the conditions along the way, represents a potential threat that could cause the pilots to err. Such threats are referred to as ‘red flags’. Early identification and planning against threats reduces the likelihood of error. It also increases vigilance at those times when threats are anticipated.

Every flight begins with planning and briefing. The details of the flight plan and the reasons behind each decision are fully discussed and understood by everyone. The core purpose of the briefing is to establish a mutual mental model between crew members prior to departure and, equally importantly, to provide the opportunity for any additional information, relevant experience, or even subjective opinion, to be aired and added to the crews’ collective situational awareness. It is recognized here that a steep authority gradient stifles information flow and a ‘superior’ attitude can induce stress and provoke errors in the subordinate. The preparedness consequent to adequate and appropriate planning and briefing affords the crew more mental capacity when variances occur, as they inevitably will and do.

Briefings continue throughout all major phases of flight. At particularly crucial phases, i.e. approach and landing, the briefing rate increases and the 'challenge/response' use of checklists becomes more critical in error capture and mitigation. In this context, the external environment is considered replete with potential 'threats' which, if not recognized and managed, will cause the crew to make errors. The greater the understanding of the threat posed by the circumstances, the lower the likelihood of error arising. It is perhaps this discipline of briefing, conferring and cross-checking that most markedly distinguishes the aviation industry from anaesthesia and medicine.

Perhaps one of the most successful training initiatives has been the introduction, in the late 1980s, of 'real-time' flying scenarios, known as line-oriented flight training (LOFT), which has enhanced the metacognitive (awareness of one's own thinking process) aspect of aircrew training. These simulator scenarios are less technical and instead present the crews with situations that have many options and/or priorities of varying complexity. The 'consequences' are the product of the quality of the decisions and subsequent actions: it is virtually certain that successively poor decisions will result in a technical failure. Crews experience the outcome of their own decision pathways in a safe learning environment, and they debrief the simulator session using human factors guidelines as performance criteria, with an instructor who is specifically accredited in this process.

Barriers for safety

Just as there are four root causes of adverse events, there are also four main barriers for safety, which may trap the error or mitigate the consequences: technology, proficiency, standard operating procedures (including checklists) and judgement.

Checklists are specifically addressed below, whilst the other categories are referred to in context in the text adjoining this section.

Checklists

Checklists14 introduce consistency and uniformity in performance, across a broad spectrum of individuals and situations, and form the basis for successful implementation of evidence-based standard operating procedures.

Human memory is notoriously unreliable for consistency in complex procedures, and it worsens with stress and fatigue.15 In a hierarchical situation, the mandatory use of checklists can empower subordinates to insist on adherence to approved and safe procedures.

In aviation, checklists are used in both normal and abnormal situations. They may be completed by one pilot alone or by two, with one pilot performing and the other confirming each step. Checklists may be a simple list, or may follow a flow pattern or algorithm. For abnormal situations there may be a quick reference handbook, with supplemental information in a pilot handbook. A similar situation in anaesthesia would be a cardiac arrest due to local anaesthetic toxicity, where one would initially refer to the advanced cardiac life support (ACLS) or ERC guidelines, but might then refer to other materials for guidance about Intralipid dosing,16 which is specific to this situation.

In an emergency situation, such as an engine fire, the traditional approach was to use a memorized checklist; in medicine we still rely on memorized lists, not wishing to appear lacking in knowledge. However, in highly stressful situations items may be forgotten, so aviation has moved to a system in which memorization of the first critical step is expected but an electronic or paper checklist is then used. An example would be a rapid decompression incident, where the first step is that the pilot must secure his or her oxygen mask; subsequent steps are completed using a checklist. The non-flying pilot reads and performs each step on the checklist; the flying pilot confirms the steps but has no other responsibility besides flying the plane. An example from anaesthesia would be the management of malignant hyperthermia. This rare complication will be encountered by only a few anaesthetists during their careers, but each must know how to respond. The first step is to discontinue trigger agents and then to 'turn up the oxygen and call for help': advice that has been given to trainees for years. All subsequent steps may be guided by a written or web-based checklist.17

When using checklists it is critical to resume correctly if the checklist is interrupted, rather than assuming that the remaining items have been completed. Checklists should not, however, be applied blindly: confirmatory evidence should be used to check that each action makes sense.
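
The sketch below is a minimal illustration of the challenge/response pattern and of resuming an interrupted checklist from the point at which it was broken off; it is not any real checklist system, and the items are invented for the example.

```python
# A minimal challenge/response checklist that remembers where it stopped,
# so an interrupted check is resumed rather than assumed complete.

class Checklist:
    def __init__(self, name, items):
        self.name = name
        self.items = items        # list of (challenge, expected response) pairs
        self.next_index = 0       # remembered so an interrupted list can resume

    def run(self, respond):
        """respond(challenge, expected) returns True when the step is confirmed."""
        while self.next_index < len(self.items):
            challenge, expected = self.items[self.next_index]
            if not respond(challenge, expected):
                print(f"{self.name}: interrupted at '{challenge}' - resume here later")
                return False
            self.next_index += 1
        print(f"{self.name}: complete")
        return True

machine_check = Checklist("Pre-use machine check", [
    ("Oxygen supply", "connected, pressure adequate"),
    ("Breathing system", "leak test passed, reservoir bag intact"),
    ("Suction", "working"),
])

# First pass: the breathing-system step is interrupted (e.g. by a phone call).
confirmed = {"Oxygen supply"}
machine_check.run(lambda challenge, expected: challenge in confirmed)

# On returning, the checklist resumes where it left off rather than assuming
# the remaining items were completed.
confirmed.update({"Breathing system", "Suction"})
machine_check.run(lambda challenge, expected: challenge in confirmed)
```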

An example from anaesthesia

An anaesthetic trainee working without immediate supervision performs what he thinks is an adequate machine check from memory and fails to check the integrity of the breathing system. The reservoir bag has a large split in its wall, hidden along a fold so that it is not visible. The patient is anaesthetised and is temporarily apnoeic. The anaesthetist attempts bag and mask ventilation and the patient starts to desaturate.

The catalyst event is the absence of the senior anaesthetist due to illness. The system fault lies in allowing an inexperienced trainee to anaesthetize without adequate supervision. The outcome of this incident will now depend upon the situational awareness of the anaesthetist. Will he persist in his attempts to ventilate with a split bag or will he realise that he is now dealing with equipment failure?

In analyzing the event, if attention falls only on the trainee, then the opportunity to 'fix the system' will be missed. If, however, the response is to impose a 'sign-off' of a written equipment checklist and to prohibit inexperienced anaesthetists from working alone in remote areas, then many more critical situations will be prevented. In this scenario technology cannot be improved upon: reservoir bags cannot be made indestructible. Lack of proficiency will make the outcome worse if the anaesthetist does not have the knowledge and skills to carry out the necessary emergency procedures. Standard operating procedure will greatly assist the team in this situation: use of a self-inflating bag in the first instance and a call for senior help. The judgement of the anaesthetist will also be important: he must recognize that the situation has changed and he must not 'posture' but declare the emergency and get appropriate help.

References

1 Rampersad C, Rampersad SE. Can medicine really learn anything from aviation? Or are patients and their disease processes too complex? Semin Anesth. 2007;26:158–166.

2 Sexton JB, Thomas EJ, Helmreich RL. Error, stress, and teamwork in medicine and aviation: cross sectional surveys. BMJ. 2000;320:745–749.

3 Sexton JB, Marsch SC, Helmreich RL, Betzendoefer D, Kocher T, Scheidegger D, and the TOMS team. Jumpseating in the operating room. In: Henson L, Lee A, Basford A, eds. Simulators in anesthesiology education. New York: Plenum; 1998:107–108.

4 ECC Committee, Subcommittees and Task Forces of the American Heart Association. American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2005;112(24 Suppl):IV1–203. Epub 2005 Nov 28

5 Nolan JP, Baskett PJF, eds. European Resuscitation Council guidelines for resuscitation 2005. Resuscitation. 2005;67(Suppl 1):S1–S190.

6 Reason JT. Managing the risks of organizational accidents. Aldershot: Ashgate; 1997.

7 http://www.chfg.org/resources/07_qrt04/Anonymous_Report_Verdict_and_Corrected_Timeline_Oct_07.pdf. Accessed 17 July 2011.

8 Ayas NT, Barger LK, Cade BE, Hashimoto DM, Rosner B, Cronin JW, et al. Extended work duration and the risk of self-reported percutaneous injuries in interns. JAMA. 2006;296:1055–1062.

9 Morris GP, Morris RW. Anaesthesia and fatigue: an analysis of the first 10 years of the Australian Incident Monitoring Study 1987–1997. Anaesth Intensive Care. 2000;28:300–304.

10 Howard SK, Gaba DM, Rosekind MR, Zarcone VP. The risks and implications of excessive daytime sleepiness in resident physicians. Acad Med. 2002;77:1019–1025.

11 American Society of Anesthesiologists Task Force on Management of the Difficult Airway. Practice guidelines for management of the difficult airway: an updated report by the American Society of Anesthesiologists Task Force on Management of the Difficult Airway. Anesthesiology. 2003;98:1269–1277.

12 Henderson JJ, Popat MT, Latto IP, Pearce AC. Difficult Airway Society guidelines for management of the unanticipated difficult intubation. Anaesthesia. 2004;59:675–694.

13 Helmreich RL, Merritt AC. Culture at work in aviation and medicine. Aldershot: Ashgate; 1998.

14 Hales BM, Pronovost PJ. The checklist – a tool for error management and performance improvement. J Crit Care. 2006;21:231–235.

15 Bourne LE, Yaroush RA. Stress and cognition: a cognitive psychological perspective. NASA. 2003:1–121.

16 Picard J, Meek T. Lipid emulsion to treat overdose of local anaesthetic: the gift of the glob. Anaesthesia. 2006;61:107–109. (editorial)

17 http://medical.mhaus.org. Accessed 3 December 2008

18 Ralston M, Hazinski MF, Zaritsky AL, Schexnayder SM, Kleinman ME, eds. Pediatric advanced life support professional course guide, part 5 resuscitation team concept. Dallas: American Heart Association, 2006.