Cardiopulmonary Resuscitation


CHAPTER 38 Cardiopulmonary Resuscitation

In the late 1950s, children suffering cardiac arrest during anesthesia received 1.5 minutes of knee-to-chest “artificial respiration” followed by a thoracotomy for internal cardiac massage (Rainer, 1957). In 1958, closed-chest compressions were successfully performed on a 2-year-old child (Sladen, 1984). The resuscitation of that child, along with several successful resuscitations of subsequent patients (many undergoing anesthesia) led to reporting of closed-chest compressions for cardiac resuscitation (Kouwenhoven et al., 1960). Currently, 50% to 60% of children who have perioperative cardiac arrest are successfully resuscitated (Bhananker et al., 2007). Despite the success rate of resuscitation during anesthesia, the potential for disaster and the increased likelihood of cardiac arrests in younger children and infants require that pediatric anesthesiologists have a complete understanding of the physiology and pharmacology of cardiopulmonary resuscitation (CPR). “No more depressing shadow can darken an operating room than that occasioned by the death of a child” (Leigh and Belton, 1949).

Cardiac arrest during anesthesia

Incidence of Cardiac Arrest During Anesthesia

Perioperative cardiac arrest generally refers to an event that requires chest compressions while a patient is under an anesthesiologist’s care during either the intraoperative or immediate postoperative period. Cardiac arrest may be the result of factors related to anesthesia, surgical procedure, or patient comorbidities. When comparing reports of anesthesia-related cardiac arrest, definitions and timeframes vary, with some including only the intraoperative period and others including the time from premedication through 24 hours postoperatively or longer. Some studies are based on electronic databases, and others depend on voluntary reporting to registries. The inclusion of events occurring during cardiac surgery by some studies and not others further complicates this comparison.

Results of studies that examined the incidence of pediatric perioperative cardiac arrest for all types of procedures, including cardiac surgery, are listed in Table 38-1. The overall incidence for pediatric perioperative cardiac arrest for all age groups undergoing all types of surgeries ranged from 7.2 to 22.9 per 10,000 procedures (Cohen et al., 1990a; Braz et al., 2006; Flick et al., 2007). Studies that excluded cardiac surgery reported a lower overall incidence, ranging from 2.9 to 7.4 per 10,000 (Murat et al., 2004; Flick et al., 2007; Bharti et al., 2009). When only anesthesia-related cardiac arrest was included, the incidence for all types of surgery (including cardiac) ranged from 0.8 to 4.58 per 10,000. The highest incidence of cardiac arrest was seen in patients undergoing cardiac surgery, ranging from 79 to 127 per 10,000 (Flick et al., 2007; Odegard et al., 2007). This information is helpful when estimating risk, but whatever the risk of pediatric perioperative cardiac arrest, the anesthesiologist must be ready and able to treat the cause and resuscitate the child.
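To make these incidence figures concrete, the per-10,000 rates quoted above can be converted into expected event counts for a given case volume. The following sketch is illustrative only; the annual case volume is a hypothetical assumption, and the rates are those reported in the studies cited in the text.

```python
# Illustrative only: convert incidence rates quoted in the text
# (events per 10,000 procedures) into expected event counts.
# The annual case volume below is a hypothetical assumption.

def expected_events(rate_per_10k, n_procedures):
    """Expected number of events given a rate per 10,000 procedures."""
    return rate_per_10k * n_procedures / 10_000

annual_cases = 8_000  # hypothetical pediatric case volume

# Overall perioperative arrest, all surgery types: 7.2 to 22.9 per 10,000
low = expected_events(7.2, annual_cases)
high = expected_events(22.9, annual_cases)
print(f"All causes: {low:.1f} to {high:.1f} arrests expected per year")

# Anesthesia-related only: 0.8 to 4.58 per 10,000
low = expected_events(0.8, annual_cases)
high = expected_events(4.58, annual_cases)
print(f"Anesthesia-related: {low:.1f} to {high:.1f} arrests expected per year")
```

Even at the lower bounds, a busy pediatric center can expect to encounter these events, which underscores the chapter's point that readiness cannot depend on rarity.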

As shown in Table 38-1, all studies consistently identified younger patient age as a risk factor for pediatric perioperative cardiac arrest. The highest risk was seen in infants younger than 1 month of age, followed by those younger than 1 year old. The Perioperative Cardiac Arrest (POCA) registry compared age groups in anesthesia-related cardiac arrests and found that between 1994 and 1997, 56% of cases were infants younger than 1 year old, whereas between 1998 and 2004, only 38% of the cases were infants younger than 1 year old. This significant decrease in the percentage of cardiac arrests in infants is attributed to the declining use of halothane and the increasing use of sevoflurane, which is associated with less bradycardia and myocardial depression (Bhananker et al., 2007). Anesthesia-related cardiac arrest is reported to be higher overall for children (1.4 to 4.6 per 10,000) than for adults (0.5 to 1 per 10,000), although the incidence in some studies is similar, presumably because both groups include high-risk patients at the extremes of age (Zuercher and Ummenhofer, 2008).

The patient’s physical condition impacts cardiac arrest risk. Risk increases significantly when the American Society of Anesthesiologists (ASA) physical status (PS) is 3 or higher (Morray et al., 2000; Murat et al., 2004; Braz et al., 2006; Bhananker et al., 2007; Flick et al., 2007). Patients at ASA PS 5 are often not included in reports of anesthesia-related events, because by definition they have a low likelihood of survival, making it difficult to determine whether events are a result of their condition or related to anesthesia. Patients with ASA PS 4 and 5 have a 30 to 300 times greater risk of cardiac arrest than patients with ASA PS 1 or 2 (Rackow et al., 1961; Newland et al., 2002). Prematurity, congenital heart disease, and congenital defects are common pediatric comorbidities that increase the risk for children (Morray et al., 2000; Bhananker et al., 2007; Odegard et al., 2007).

The designation of emergency status for a patient’s procedure was a risk factor for both cardiac arrest and mortality in some studies but not in others. Emergency surgery was associated with a significantly increased incidence of perioperative cardiac arrest, with 123 per 10,000 anesthesia procedures vs. 15 to 16 per 10,000 for nonemergent cases (p < 0.05) (Braz et al., 2006). In addition to a higher incidence of arrests during emergency procedures, a poorer outcome was also reported (Vacanti et al., 1970; Marx et al., 1973; Olsson and Hallen, 1988; Morray et al., 2000; Biboulet et al., 2001; Newland et al., 2002; Sprung et al., 2003; Bharti et al., 2009). In contrast, several studies did not find a statistically significant trend toward decreased survival as a result of emergency status (Biboulet et al., 2001; Flick et al., 2007; Zuercher and Ummenhofer, 2008). It is not clear whether emergency procedures carry increased perioperative risk because of the patient’s condition, the lack of optimal personnel, or both.

Etiology of Cardiac Arrest During Anesthesia

Causes of cardiac arrest during anesthesia are typically grouped either by the organ systems involved or by the interventions applied. A summary of the etiologies and timing of cardiac arrest during anesthesia as reported in the literature is listed in Table 38-2. The pediatric POCA registry uses a classification system that involves both interventions and organ systems, grouping cardiac arrests as related to medication, cardiovascular factors, respiratory factors, or equipment (Morray et al., 2000; Odegard et al., 2007). Some etiologies may be difficult to classify because they fit into several grouping schemes. For example, succinylcholine-induced dysrhythmia may be classified as either a medication-related or a cardiovascular cause of cardiac arrest. A set of guidelines for reporting cardiac arrest data in children, known as the pediatric Utstein guidelines, suggested a classification of etiologies based on organ systems (Zaritsky et al., 1995). The Utstein guidelines used three groups, consisting of cardiac, pulmonary, and cardiopulmonary factors, for comparison of etiologies of cardiac arrest in children. The Utstein guidelines have not yet been widely incorporated into the anesthesia-related cardiac arrest literature. The anesthesia literature generally groups the etiology of cardiac arrest into medication-related, cardiovascular, and respiratory categories, as shown in Box 38-1.

Previously, medication-related etiologies were the most common reasons for cardiac arrest related to anesthesia in children, representing approximately 35% of cardiac arrests (range of 4% to 54%) (Rackow et al., 1961; Salem et al., 1975; Keenan and Boyan, 1985; Olsson and Hallen, 1988; Morgan et al., 1993; Morray et al., 2000; Biboulet et al., 2001; Newland et al., 2002; Kawashima et al., 2003; Sprung et al., 2003). Reports of medication-related etiologies have since decreased to between 18% and 28%, and cardiac and respiratory causes are now the most commonly reported (Fig. 38-1) (Braz et al., 2006; Bhananker et al., 2007). This may be the result of a decrease in the incidence of inhalation-agent overdose after sevoflurane replaced halothane for anesthetic induction. It is not clear whether sevoflurane is less cardiotoxic than halothane or whether the delivered dose of sevoflurane is lower because of vaporizer limits relative to its higher minimum alveolar concentration (MAC). Similarly, a decrease in succinylcholine-induced dysrhythmias was reported after a warning was issued related to the use of succinylcholine in children. Other medication-related causes of cardiac arrest include those associated with regional anesthesia: intravenous (IV) administration of local anesthetic intended for the caudal space, high spinal anesthesia, and local anesthetic toxicity. Inadequate reversal of a paralytic agent and opioid-induced respiratory depression are medication-related causes of cardiac arrest that more often present in the postoperative period.

FIGURE 38-1 Causes of anesthesia-related cardiac arrest in 1998 through 2004 compared with 1994 through 1997. Multiple and miscellaneous other causes (3% from 1998 to 2004 vs. 4% from 1994 to 1997) not shown. **P < 0.01, 1998 to 2004 vs. 1994 to 1997 by Z test.

(Data for 1994 to 1997 from Morray et al: Anesthesia-related cardiac arrest in children: initial findings of the Pediatric Perioperative Cardiac Arrest [POCA] Registry, Anesthesiology 93:6, 2000.)

Cardiovascular-related causes now represent over 40% of cardiac arrests related to anesthesia in children (Braz et al., 2006; Bhananker et al., 2007). Cardiac arrests caused by decreased intravascular volume are most commonly reported in this group, and causes include inadequate volume administration, excessive hemorrhage, and inappropriate volume or transfusion administration (Braz et al., 2006; Bhananker et al., 2007; Flick et al., 2007). Dysrhythmias caused by hyperkalemia are seen with succinylcholine administration, transfusion, reperfusion, myopathy, or renal insufficiency (Larach et al., 1997). Dysrhythmia or cardiovascular collapse (asystole) may have a vagal etiology as a result of traction, pressure, or insufflation of the abdomen, eyes, neck, or heart. Cardiovascular collapse can occur with anaphylaxis from exposure to latex, contrast, drugs, or dextran. Venous air embolism is another important cause of cardiovascular collapse and cardiac arrest in patients who are under anesthesia. Malignant hyperthermia is a seldom-reported cause of cardiac arrest in this group.

Respiratory-related causes are responsible for approximately 31% (range of 15% to 71%) of cardiac arrests related to anesthesia in children and adults (Rackow et al., 1961; Salem et al., 1975; Keenan and Boyan, 1985; Olsson and Hallen, 1988; Morgan et al., 1993; Morray et al., 2000; Biboulet et al., 2001; Newland et al., 2002; Kawashima et al., 2003; Sprung et al., 2003; Braz et al., 2006; Bhananker et al., 2007; Flick et al., 2007). Respiratory-related events as the primary cause of cardiac arrest have declined over the years as a source of malpractice claims, from 51% in the 1970s to 41% in the 1980s and 23% from 1990 through 2000 (Jimenez et al., 2007). Inadequate ventilation and oxygenation are broad categories often listed in this group as causes of cardiac arrest. “Loss of the airway” may involve laryngospasm or bronchospasm; airway anatomy that is difficult to manage; or a misplaced, kinked, plugged, or inadvertently removed endotracheal tube (ETT). Aspiration remains a cause of respiratory-related cardiac arrest but is not often mentioned in the recent literature.

Equipment-related causes involve approximately 4% (range of 0% to 20%) of cardiac arrest related to anesthesia in children and adults (Rackow et al., 1961; Salem et al., 1975; Keenan and Boyan, 1985; Olsson and Hallen, 1988; Morgan et al., 1993; Morray et al., 2000; Biboulet et al., 2001; Newland et al., 2002; Kawashima et al., 2003; Sprung et al., 2003). Categories of equipment-related cardiac arrest most commonly described include central-venous-catheter–induced bleeding, dysrhythmias, and breathing circuit disconnection. Other etiology groups of cardiac arrest reported in some studies include multiple events (3%), inadequate vigilance (6%), or an unclear etiology (9%, range of 1% to 18%) (Olsson and Hallen, 1988; Morray et al., 2000; Biboulet et al., 2001; Kawashima et al., 2003).

Determination that a cardiac arrest is anesthesia related is subjective, as is the extent to which a cardiac arrest is related to anesthesia care. Patient-related factors, procedure-related factors, and anesthesia-care-related factors are the three most important determinants of the etiology of operating-room cardiac arrests. Attempts to determine the extent of the contribution of anesthesia care to cardiac arrest have produced terms such as anesthesia-associated and anesthesia-attributable cardiac arrest. Determination of an anesthesia-related contribution is complicated by the contributions of patient- and procedure-related factors. To what extent does anesthesia care contribute to a cardiac arrest related to surgical bleeding in a patient with a coagulopathic condition? Is failing to keep up with major hemorrhage or to correct a coagulopathy related to the procedure, to the patient, or to the anesthesia care? Many studies simply use the term anesthesia-related to describe a cardiac arrest after an anesthesiologist has been involved in the care of the patient.

Anesthesia-related cardiac arrest may be preventable 53% of the time, and anesthesia-related mortality is preventable 22% of the time (Kawashima et al., 2003). Human error may be the most important factor in deaths attributable to anesthesia and usually manifests not as a fundamental ignorance but as a failure in application of existing knowledge (Olsson and Hallen, 1988). Poor preoperative preparation and inadequate vigilance are often reported as avoidable errors. Examples of poor preoperative preparation relevant to the pediatric anesthesiologist include failure to identify patients with symptoms of an undiagnosed skeletal myopathy, coronary involvement from Williams syndrome, prolonged QT syndrome, or a cardiomyopathy. Another category of preventable causes is inadequate vigilance, such as failure to recognize progressive bradycardia and failure to respond to persistent hypotension. In addition to improving preparation and vigilance, the use of “test doses” or divided dosing when administering medications (especially drugs that may cause hypotension in unstable patients) is suggested to minimize medication errors. Other important and preventable causes of anesthesia-related cardiac arrest include transfusion-related hyperkalemia, local anesthetic toxicity, and inhalation-anesthetic overdose (Morray et al., 2000).

Cardiac arrest that is not related to anesthesia is most often the result of the patient’s underlying condition or the procedure being performed. Trauma, exsanguination, and failure to wean from cardiopulmonary bypass (CPB) are three of the most commonly reported causes of cardiac arrest that are not anesthesia related. Myocardial infarction, pulmonary embolus, sepsis, and ruptured aneurysm are other, less often observed, patient-related causes of cardiac arrest. Procedure-related causes include technical problems, caval compression, vagal asystole related to traction or insufflation, and complications related to transplantation.

Outcomes of Cardiac Arrest During Anesthesia

What is the risk of a child dying during the perioperative period? Studies that have investigated this question have reported varied results, depending on whether they include only anesthesia-related causes or all causes of cardiac arrest. Although survival is the outcome most commonly viewed as a measure of successful resuscitation after cardiac arrest, mortality is the rate most commonly reported. Anesthesia-related mortality is currently reported to be 0.1 to 1.6 per 10,000 cases, which is down from 2.9 per 10,000 cases between 1947 and 1958 (Rackow et al., 1961; Morita et al., 2001; Morray et al., 2000; Flick et al., 2007). Some studies have even reported no anesthesia-related deaths (Tay et al., 2001; Murat et al., 2004; Braz et al., 2006). When all causes of perioperative cardiac arrest are included (i.e., anesthesia-related, surgical, and patient disease), risk of mortality is higher, ranging from 3.8 to 9.8 per 10,000 cases (Cohen et al., 1990a; Morita et al., 2001; Braz et al., 2006; Flick et al., 2007). Compared with neonates and infants, older children had a lower incidence of both cardiac arrest and mortality.

Although survival is used to describe a positive outcome for a patient who suffers a cardiac arrest, it is imprecise as to duration or quality of patient outcome. A patient may survive initial resuscitation attempts but subsequently die in the intensive care unit (ICU) from persistent hemodynamic instability or devastating neurologic injury. Initial survival from cardiac arrest after successful resuscitation efforts is defined as return of spontaneous circulation (ROSC), meaning that native heartbeat and blood pressure are adequate for at least 20 minutes. Although ROSC indicates a successful reversal of cardiac arrest, it may not be a meaningful indicator if many patients subsequently die in the ICU. The number of patients with ROSC after cardiac arrest is usually much greater than the number that has a longer, more meaningful, period of survival, such as survival to discharge from the hospital. Although survival to discharge indicates a longer survival than ROSC, surviving for a longer time does not address the quality of that outcome. An assessment of the quality of survival should acknowledge either the presence of a new neurologic deficit or a return to the patient’s neurologic baseline. These terms are found in some descriptions in the anesthesia-related literature on outcomes of children who suffer cardiac arrest. Full recovery after intraoperative cardiac arrest in children is reported to range from 48% to 61% (Bharti et al., 2009; Bhananker et al., 2007; Flick et al., 2007).

It is often presumed that the duration and quality of survival from a cardiac arrest that occurred in the operating room should be good, because the personnel who witness the cardiac arrest and provide resuscitation are trained and prepared. A review of the anesthesia literature reveals that cardiac arrest can be reversed in over 80% of anesthesia-related episodes (Sprung et al., 2003; Bhananker et al., 2007; Bharti et al., 2009). The likelihood of ROSC decreases to 50% to 60% when causes of arrest not related to anesthesia are included. Survival to hospital discharge after an anesthesia-related cardiac arrest appears to be approximately 65% to 68% (the range for pediatric studies of this variable is large). Survival to discharge falls to 30% when causes of cardiac arrest unrelated to anesthesia are included. Comparing these data with the literature not related to anesthesia reveals that studies of in-hospital cardiac arrest (IHCA) in children show a 23% rate of survival to discharge (range of 8% to 42%) (Gillis et al., 1986; Von Seggern et al., 1986; Davies et al., 1987; Carpenter and Stenmark, 1997; Parra et al., 2000; Suominen et al., 2000; Reis et al., 2002; Nadkarni et al., 2006; Tibballs and Kinney, 2006). This 23% survival-to-discharge rate is comparable with the 30% rate for all causes and much lower than the 65% rate for anesthesia-related causes of cardiac arrest in the operating room. The presence of anesthesiologists may account, in part, for the better survival outcomes in anesthesia-related cardiac arrests.

Outcome studies for cardiac arrest should include a determination of the presence of new neurologic injuries. Pediatric studies of IHCA show a 71% favorable neurologic outcome for the survivors (range of 45% to 90%) (Gillis et al., 1986; Davies et al., 1987; Carpenter and Stenmark, 1997; Parra et al., 2000; Suominen et al., 2000; Reis et al., 2002). Compilation of the available anesthesia-related literature indicates that 57% of children who suffer perioperative cardiac arrest survive and return to their baseline neurologic status, whereas 5% survive with a new neurologic deficit. Thus, for anesthesia-related cardiac arrest, a child has a 62% chance of surviving, and survivors have a 92% chance of a favorable neurologic outcome. When all causes of cardiac arrest are included, total survival falls to 36%, with 22% of children returning to neurologic baseline, which corresponds to a 61% favorable neurologic outcome among survivors. The 71% favorable neurologic outcome for IHCA is comparable with the 61% rate for all causes and lower than the 92% rate for anesthesia-related causes of cardiac arrest in the operating room. It is noteworthy that the number of studies and patients behind these estimates is small and the ranges are large. These data indicate that both the duration and quality of survival are favorable for children who experience cardiac arrest from anesthesia-related causes.
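The percentages above combine whole-cohort survival figures with survivor-only outcome fractions, which is easy to misread. A short sketch of the arithmetic, using only the pooled estimates quoted in the text, makes the relationship explicit:

```python
# Illustrative sketch of the survival arithmetic summarized in the text.
# Inputs are the pooled whole-cohort percentages quoted above, not new data.

def favorable_fraction(survive_baseline_pct, survive_deficit_pct):
    """Return total survival (% of whole cohort) and the favorable
    neurologic outcome rate (% of survivors at baseline)."""
    total_survival = survive_baseline_pct + survive_deficit_pct
    favorable = survive_baseline_pct / total_survival * 100
    return total_survival, favorable

# Anesthesia-related arrests: 57% survive at baseline, 5% with new deficit
total, favorable = favorable_fraction(57, 5)
print(f"Anesthesia-related: {total}% survive; {favorable:.0f}% favorable")

# All causes: 22% of the cohort return to baseline out of 36% total survivors
print(f"All causes: 36% survive; {22 / 36 * 100:.0f}% favorable")
```

The favorable-outcome rates (92% and 61%) are fractions of survivors, not of all children who arrest, which is why they exceed the survival rates themselves.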

There are many potential explanations for a higher resuscitation rate from anesthesia-related cardiac arrest. Factors such as the resuscitation skills of the anesthesiologist, preparation for emergencies by the anesthesiologist, reversible causes of cardiac arrest in the operating room, and increased monitoring during anesthesia to provide early recognition of problems may contribute to improved resuscitation rates during anesthesia care. The survival rate after cardiac arrest is affected by many factors, some of which are the same that predispose a patient to cardiac arrest: age of patient, ASA PS, and emergency procedures.

The etiology of cardiac arrest also impacts likelihood of successful resuscitation and survival. Mortality is increased if the cause of cardiac arrest is hemorrhage or is associated with protracted hypotension (both have a P < 0.001) (Girardi and Barie, 1995; Newland et al., 2002; Sprung et al., 2003). Resuscitation-related factors have an effect on outcome. These factors include cardiac rhythm during resuscitation, duration of resuscitation, and duration of no-flow and low-flow states during cardiac arrest and resuscitation. A no-flow state occurs when a patient is in cardiac arrest before receiving resuscitation efforts. A low-flow state occurs when a patient is in cardiac arrest and receiving resuscitation that is unable to provide adequate circulation. The longer the patient is in a no-flow or low-flow state, the worse the outcome is likely to be.

Asystole is a rhythm that, if present during resuscitation, has been associated with a decreased rate of both ROSC and survival to discharge for children with cardiac arrest outside of the operating room. Usually asystole is caused by prolonged hypoxia or myocardial ischemia and represents a terminal rhythm. Prolonged hypoxia causes the myocardium to be more resistant to resuscitation efforts and is more likely to result in neurologic injury. Thus, if the heart can be resuscitated, there is still the possibility of a poor outcome. In the operating room, continuous patient monitoring decreases the risk of prolonged periods of hypoxia or ischemia. Instead of asystole being a terminal rhythm, asystole in the operating room is often an initial rhythm that results from a vagal stimulation. As an initial rhythm, asystole is more likely to be reversed. Usually discontinuation of the vagal stimulus and chemical support of the heart rate are effective resuscitation measures. Unlike with cardiac arrests that occur outside the operating room, asystole is a commonly reported rhythm with anesthesia-related cardiac arrest and is associated with a good prognosis (Sprung et al., 2003).

The duration of the resuscitative efforts has an effect on patient outcome. Prolonged duration of CPR increases the possibility of low-flow intervals, thereby resulting in myocardial and cerebral injury. The need for CPR for more than 15 minutes has been determined to be a predictor of mortality in anesthesia-related cardiac arrests (P < 0.001) (Girardi and Barie, 1995). The interpretation of these data is complicated by reports of successful outcomes even after prolonged periods of resuscitation efforts. Up to 3 hours of CPR has been reported in anesthetic-related cardiac arrests, with eventual resuscitation and a good outcome (Cleveland, 1971; Lee et al., 1994). In summary, the cause of the cardiac arrest, the rhythm disturbance, and the duration of CPR can impact outcome from cardiac arrest that takes place in the operating room.

Cardiopulmonary resuscitation

Recognition of the Need for Cardiopulmonary Resuscitation

Early recognition that a child’s vital signs are inadequate and a response with rapid initiation of CPR reduces potential for injury from low-flow or no-flow intervals. It is difficult to give guidelines for the limit of each vital sign at which vital organ blood perfusion becomes inadequate for each child under anesthesia (Table 38-3). These limits depend on many factors, including the patient’s general health, the patient’s age, the type and depth of anesthesia, and the intensity and duration of deterioration of the vital signs. Pediatric training and experience are valuable in these uncommon but critical situations to help with the decision about when to initiate CPR.

In general, CPR including chest compressions should be initiated when perfusion is felt to be inadequate to deliver oxygen, substrates, or resuscitative medications to the heart or brain. Extensive monitoring and the continuous presence of anesthesia personnel should allow early detection of inadequate perfusion or ventilation in the operating room. In the absence of adequate monitoring, health care personnel should palpate the umbilical artery in the newborn, the brachial artery in the infant, and the carotid artery in the child to detect an abnormal heart rate (Cavallaro and Melker, 1983; Lee and Bullock, 1991; AHA, 2006a). An analysis of pulse detection in anesthetized and slightly hypotensive (systolic pressure lower than 70 mm Hg) infants revealed that detection of a pulse within 10 seconds was best with auscultation; brachial palpation was less successful than auscultation but better than carotid or femoral palpation by operating room nurses (Inagawa et al., 2003). Femoral palpation was more successful than carotid or brachial palpation in anesthetized and hypotensive (systolic pressure lower than 70 mm Hg) infants in a subsequent study with personnel who had more pediatric resuscitation training (Sarti et al., 2006). Both groups of authors agree that counting the heart rate over a brief interval was most successful with auscultation.

In the operating room, monitoring is usually available to help determine vital signs of an anesthetized child. When the monitoring is unavailable or the readings are in question, having one rescuer auscultate and another palpate may increase reliability and decrease time needed to count a heart rate and determine the palpability of a pulse. Whereas either unresponsiveness or apnea is an indication to resuscitate in most situations, the administration of anesthesia masks these signs, and bradycardia by auscultation or lack of pulse by palpation may be valid indicators to start CPR.

Physiology of Cardiopulmonary Resuscitation: Reestablishment of Ventilation

The fraction of inspired oxygen (Fio2) that should be administered during CPR is important, because either too much or too little may be detrimental. A report by Elam et al. (1954) showed that exhaled air from the rescuer (16% oxygen) provided adequate oxygenation of the victim (arterial oxygen saturation [Sao2] of 90% or greater) and became the basis for ventilation during CPR when supplemental oxygen is not available. In the operating room, the anesthesiologist has the ability to administer 100% oxygen via tracheal intubation during CPR. The anesthesiologist is faced with the theoretic concern that delivery of high oxygen levels during reperfusion may increase formation of oxygen free radicals and increase cellular injury. This concern is weighed against the knowledge that CPR is less effective in restoring oxygen delivery to the brain and heart than native circulation and that administration of low levels of oxygen during CPR may further delay restoration of oxygen delivery. Adequacy of oxygen delivery during CPR depends on many variables, including the cause of cardiac arrest, the length of decreased perfusion, the effectiveness of CPR, and the patient’s metabolic demands. The complexity of this determination makes it unlikely that oxygen delivery during CPR can be measured or predicted. A review of newborn resuscitation using 21% or 100% Fio2 found that newborns with depressed (but not arrested) cardiac function can be effectively resuscitated with either 21% or 100% oxygen and that 21% oxygen administration is associated with fewer markers of oxidative stress. This review also found that for cardiac arrest in newborns there is no evidence that 21% is as effective as 100% oxygen in resuscitation of circulation, and that animal studies suggest 100% oxygen administration is more effective (Ten and Matsiukevich, 2009).
A model of brain-tissue oxygen monitoring in piglets during CPR for cardiac arrest showed that despite administration of 100% Fio2, the brain-tissue oxygen levels remained at or below the levels before cardiac arrest until after ROSC, when they became dramatically elevated (Cavus et al., 2006). This finding implies that maximal oxygen administration is needed during CPR, but that it can create hyperoxic conditions after ROSC. Without adequate data to resolve this question, it seems reasonable to continue to use 100% Fio2 during CPR for intraoperative cardiac arrest to help maximize oxygen delivery during this low-flow state but to reduce oxygen levels once reliable oxygen monitoring shows adequate oxygenation during the hyperdynamic phase that occurs after ROSC (see Postresuscitation Care). The exception to the use of 100% O2 for resuscitation may be the child with a circulatory condition such as a hypoplastic left heart, whose poor systemic perfusion is the result of pulmonary overcirculation. In such a case, the anesthesiologist needs to decide whether high levels of oxygen administration would contribute to the poor systemic circulation.

The contribution of chest compressions to ventilation during CPR impacts the decision of how much ventilation to provide to victims of cardiac arrest. Early in the study of external compressions, researchers did not add ventilation during CPR because they believed that closed-chest compression alone provided adequate ventilation (Kouwenhoven et al., 1960). The findings, that chest compressions alone provide some ventilation for adult victims and that minimal ventilation is necessary shortly after a sudden fibrillatory arrest, have resulted in over-the-phone instruction for CPR with compressions alone to untrained bystanders or those unwilling to provide mouth-to-mouth ventilation. It is difficult to determine how much chest compressions contribute to ventilation; their adequacy may vary with the cause of cardiac arrest, duration of cardiac arrest, the child’s age, an underlying medical condition, the efficacy of resuscitation, and the child’s metabolic needs. Requirements to administer oxygen and remove carbon dioxide (CO2) differ by type of cardiac arrest; a sudden fibrillatory arrest has little loss of oxygen reserve or accumulation of CO2, and a gradual asphyxial cardiac arrest has greatly depleted oxygen reserve and large accumulation of CO2. Asphyxial cardiac arrest derives a greater benefit from ventilation efforts. A model of asphyxial arrest in piglets shows greatest benefit with delivery of both compressions and ventilations compared with compression or ventilation alone (Berg et al., 2000). Provision of ventilation early in resuscitation from cardiac arrest may be less necessary and has the potential to cause a respiratory alkalosis, resulting in unwanted effects on brain circulation and oxygen delivery. As the duration of cardiac arrest continues, despite CPR efforts, metabolic acidosis predominates and respiratory compensation may be difficult. 
In the absence of data, the pediatric anesthesiologist usually chooses a rate based on recommendations for age (10 ventilations per minute in children and 30 ventilations per minute for newborns) and adjusts the rate if blood-gas analysis becomes available during resuscitation (Table 38-4).

Intubation of the trachea by the anesthesiologist is recommended for the management of ventilation during intraoperative cardiac arrest. Without intubation and positive pressure ventilation, soft-tissue obstruction may prevent adequate ventilation in some victims (Safar et al., 1961). An unprotected airway puts patients at greater risk for aspiration during CPR because of loss of the airway's protective reflexes and the increased likelihood of stomach distention with positive pressure ventilation. At the onset of cardiac arrest, lower esophageal sphincter competency falls from approximately 20 cm H2O to 5 cm H2O (Gabrielli et al., 2005). The laryngeal mask airway (LMA) compares favorably with mouth-to-mouth ventilation, mask ventilation, and other airway adjuncts during CPR, but there are limited data comparing it with tracheal intubation during CPR, and a non-intubation technique may be less protective against gastric distention or aspiration (Samarkandi et al., 1994; Rumball and MacDonald, 1997; Stone et al., 1998). Airway adjuncts are not recommended as a replacement for tracheal intubation during CPR in children, especially when an anesthesiologist is available (Grayling et al., 2002). For pediatric anesthesiologists, who maintain training in the procedure, tracheal intubation is the optimal way to assure ventilation during CPR.

The appropriate placement of the ETT during cardiac arrest can be verified in most instances by the presence of end-tidal CO2 (Etco2). The incidence of accidentally placing an ETT in the esophagus of a child is greater during cardiac arrest (19% to 26%) than during an intubation not associated with cardiac arrest (3%) (Bhende and Thomson, 1992; Bhende and Thomson, 1995). Demonstration of persistent Etco2 waveforms after intubation is extremely reliable for confirming correct placement of the ETT in children with spontaneous circulation (Bhende et al., 1992). The lack of a measurable Etco2 level in the ETT usually indicates esophageal intubation. During resuscitation from cardiac arrest, however, pulmonary blood flow is decreased during CPR and the Etco2 level may be falsely low or absent despite a correctly placed ETT; no detectable Etco2 was seen in 14% to 15% of correctly placed ETTs in children experiencing cardiac arrest (Bhende et al., 1992; Bhende and Thomson, 1995). Continually detectable Etco2 is proof of tracheal intubation even during cardiac arrest, whereas the absence of Etco2 on placement of the ETT indicates that the larynx should be visually inspected to rule out esophageal intubation. Loss of Etco2 during resuscitation efforts may indicate that the ETT is dislodged and should be reinspected or replaced, that the ETT is plugged or kinked and a suction catheter should be passed, or that pulmonary blood flow is diminished and resuscitation efforts need to be intensified. Tracheal intubation also offers the option of (limited) access to the circulation for drug administration.

Interruption of chest compressions for delivery of ventilation increases the percentage of time that there is an absence of perfusion to vital organs; this percentage of CPR without perfusion is referred to as the no-flow fraction (NFF). In addition to producing times with no perfusion, interruptions in the delivery of compressions result in pooling of blood in the vasculature, so that several compressions must be delivered before perfusion returns to the preinterruption level (Berg et al., 2001). Thus, there are both no-flow and low-flow problems caused by pausing compressions for ventilation or any other reason. During CPR performed by bystanders, compressions are held, ventilations are delivered, and then compressions are resumed. These pauses in chest compressions make it easier for mouth-to-mouth or bag-mask ventilation to reach the lungs, improving the patient's ventilation and reducing the probability of gastric inflation. Placement of an ETT eliminates the need to interpose ventilations and thereby interrupt compressions. A significant amount of research compares the effects of chest compression-to-ventilation ratios of 15:2, 30:2, and longer (continuous compressions), with varying results for fibrillatory and asphyxial cardiac arrest in prehospital settings. These ratios become irrelevant to the anesthesiologist once an ETT is placed: compressions can be performed without interruption for ventilation in a 10:1 ratio, generating 100 compressions and 10 ventilations per minute. The goal for the anesthesiologist is to maintain continuous delivery of compressions, interrupted only at the 2-minute intervals necessary for switching compression providers to prevent fatigue, for pulse checks to determine ROSC, and, when needed, for the delivery of shocks.
Intubation, central line placement, and placement of adhesive pads for defibrillation are other commonly reported interruptions to chest compressions; compressions should be continued when possible, and the duration of procedures that require interruption, including intubation attempts, should be kept to an absolute minimum.

It is important to understand the effect of positive-pressure ventilation on the perfusion produced by chest compressions. The previous section discussed the importance of minimizing the NFF by maintaining compressions and not interrupting them for ventilations. Other physiologic interactions also cause ventilation to influence the effectiveness of chest compressions: increased intrathoracic pressure affects the ability of chest compressions to move blood out of the thorax, increased intracranial pressure (ICP) reduces perfusion of the brain, and both myocardial perfusion pressure (MPP) and venous return to the thorax can be compromised.

A comparison of different methods of delivering ventilation during chest compressions revealed differences in oxygenation, ventilation, and hemodynamics (Wilder et al., 1963). Delivery of ventilations independent of compressions, interposed between compressions, or synchronized with compressions allows both adequate oxygenation and ventilation, but their effects on hemodynamic pressures vary. Positive pressure ventilation affects hemodynamic variables through changes in intrathoracic pressure. CPR with simultaneous compression and ventilation increases intrathoracic pressure at the time of compression and improves blood flow and survival in a canine model, but it has not shown the same benefit in humans. The simultaneous increase in intrathoracic pressure may lead to increased ejection of blood from the thorax, but elevation of intrathoracic pressure also increases intracardiac pressure and ICP. Increasing intracardiac pressure at the time of compression may result in no change in the MPP and no overall benefit to the heart. Increases in ICP occur with increases in intrathoracic pressure and may result in no change in the cerebral perfusion pressure (CPP) and no overall benefit to the brain (see the section that follows, Physiology of Cardiopulmonary Resuscitation: Reestablishment of Circulation, for mechanism). Increasing intrathoracic pressure during the relaxation phase of chest compressions has the potential to decrease venous return and may significantly impair the effectiveness of subsequent compressions, depending on the duration of ventilation pressure. Attention to the rate, duration, and pressure used during delivery of ventilations can prevent the excessive ventilation that is common during these high-stress events and its impact on venous return.
Use of the impedance threshold device, the intrathoracic pressure regulator (ITPR), and decompression during CPR are techniques used to increase venous return by lowering intrathoracic pressure and are discussed in later sections.

Overventilation or underventilation can be detrimental during CPR. As discussed previously, overventilation can have hemodynamic effects or result in hypocarbia; either could decrease perfusion of the brain. Underventilation could decrease perfusion either through reduced pulmonary blood flow during CPR, secondary to the increased vascular resistance that results from atelectasis, or through the systemic effects of hypercarbia added to metabolic acidosis. The choice of ventilation rate during CPR depends on the age of the child, whether the airway is secured, the number of rescuers, and the type and duration of the cardiac arrest. The young child has an increased baseline metabolic activity and a greater need for an increased number of ventilations during CPR. Recommendations for newborns are about 30 breaths per minute, whether there are 1 or 2 rescuers and whether or not the child is intubated. The infant, the child between 1 and 8 years old, the child older than 8 years, and the adult share recommendations of 8 to 10 breaths per minute with intubation (Table 38-4). The newborn has both the highest metabolic activity and baseline CO2 production and a greater chance of a cardiac arrest with a prolonged ischemic period, resulting in a greater need to eliminate CO2. There may be an ideal range for ventilation during CPR: overventilation may increase intrathoracic pressure (reducing venous return and increasing ICP) and lower arterial carbon dioxide tension (Paco2), causing cerebral vasoconstriction, whereas underventilation may allow lung collapse and atelectasis, reducing pulmonary blood flow, MPP, and CPP. The decrease in pulmonary blood flow during CPR for cardiac arrest produces higher levels of venous CO2 and lower levels of arterial CO2 and Etco2.
Determining the adequacy of ventilation efforts during CPR is difficult, because low pulmonary blood flow affects the CO2 levels measured by both Etco2 and blood-gas monitoring. These techniques regain their usefulness in monitoring ventilation efforts as pulmonary blood flow improves with resuscitation or ROSC.

The anesthesiologist must also decide whether to use mechanical or manual ventilation during CPR for intraoperative cardiac arrest. No data are available to recommend one technique over the other.

Physiology of Cardiopulmonary Resuscitation: Reestablishment of Circulation

Mechanisms of Blood Flow During Cardiopulmonary Resuscitation

Kouwenhoven et al. (1960) proposed that external chest compressions squeeze the heart between the sternum and the vertebral column, forcing blood to be ejected. This assumption about direct cardiac compression during external CPR became known as the cardiac-pump mechanism of blood flow. The cardiac pump mechanism proposes that the atrioventricular (AV) valves close during ventricular compression and that ventricular volume decreases during ejection of blood. During chest relaxation, ventricular pressures fall below atrial pressures, enabling the AV valves to open and the ventricles to fill. This sequence of events resembles the normal cardiac cycle and occurs with use of direct cardiac compression during open-chest CPR.

Several observations of hemodynamics during external CPR are inconsistent with the cardiac-pump mechanism of blood flow (Table 38-5). First, similar elevations in arterial and venous intrathoracic pressures during closed-chest CPR suggest a generalized increase in intrathoracic pressure (Weale and Rothwell-Jackson, 1962). Second, reconstructing thoracic integrity in patients with flail sternums improves blood pressure during CPR (unexpected, because a flail sternum should allow direct cardiac compression during closed-chest CPR) (Rudikoff et al., 1980). Third, patients who develop ventricular fibrillation (VF) can produce enough blood flow by repetitive coughing or deep breathing to maintain consciousness; these are examples in which no compression of the heart occurs, only an increase in intrathoracic pressure (MacKenzie et al., 1964; Criley et al., 1976; Niemann et al., 1980; Harada et al., 1991). These observations suggest that a generalized increase in intrathoracic pressure may contribute to the production of blood flow during CPR. The finding that changes in intrathoracic pressure without direct cardiac compression (i.e., a cough) produce blood flow epitomizes the thoracic-pump mechanism of blood flow during CPR. Familiarity with the thoracic-pump and cardiac-pump mechanisms of blood flow during CPR helps in understanding how alternative methods of CPR might be advantageous.

TABLE 38-5 Comparison of Mechanisms of Blood Flow During Closed-Chest Compressions

                            Cardiac Pump                       Thoracic Pump
Proposed mechanism          Sternum and spine compress heart   General increase in intrathoracic pressure

Findings During Compression
  Atrioventricular valves   Close                              Stay open
  Aortic diameter           Increases                          Decreases
  Blood movement            Left ventricle to aorta            Pulmonary veins to aorta
  Ventricular volume        Decreases                          Little change
  Compression rate          Dependent                          Little effect
  Duty cycle                Little effect                      Dependent
  Compression force         Increases role                     Decreases role

Patient Physiology
  Chest size                Small chest                        Large chest
  Chest compliance          High compliance                    Low compliance

Thoracic-Pump Mechanism

Chest compression during CPR generates almost equal pressures in the left ventricle, aorta, right atrium, pulmonary artery, airway, and esophagus. Because all intrathoracic vascular pressures are equal, the suprathoracic arterial pressures must be greater than the suprathoracic venous pressures for a cerebral perfusion gradient to exist. Venous valves, either functional or anatomic, prevent direct transmission of the rise in intrathoracic pressure to the suprathoracic veins (Niemann et al., 1981; Swenson et al., 1988; Paradis et al., 1989; Chandra et al., 1990; Goetting and Paradis, 1991; Goetting et al., 1991). This unequal transmission of intrathoracic pressure to the suprathoracic vasculature establishes the gradient necessary for cerebral blood flow during closed chest CPR.

During normal cardiac activity, the lowest pressure measurement occurs on the atrial side of the AV valves, providing a downstream effect that allows venous return to the pump. The extrathoracic shift of this low-pressure area to the cephalic side of jugular venous valves during the thoracic pump mechanism implies that the heart is merely serving as part of a conduit for blood flow. Angiographic studies show that during a single chest compression, blood passes from the vena cavae through the right heart to the pulmonary artery and from the pulmonary veins through the left heart to the aorta (Niemann et al., 1981; Cohen et al., 1982). Unlike during normal cardiac activity and open-chest CPR, echocardiographic studies during closed-chest CPR have shown that AV valves remain open during blood ejection (Rich et al., 1981; Werner et al., 1981; Clements et al., 1986). In addition, unlike during native cardiac activity and open-chest CPR, aortic diameter decreases instead of increasing during blood ejection (Niemann et al., 1981; Werner et al., 1981). These findings about closed-chest CPR support the thoracic-pump theory that the chest becomes the “bellows,” producing blood flow during CPR, and that the heart is a passive conduit.

Cardiac-Pump Mechanism

Despite evidence for the importance of the thoracic-pump mechanism of blood flow during external chest compressions, there are specific situations in which the cardiac pump mechanism predominates during closed-chest CPR. First, applying more force during chest compressions (as in high-impulse CPR, see related section) increases the likelihood of direct cardiac compression and closure of AV valves (Feneley et al., 1987; Hackl et al., 1990). Second, a small chest size allows for more direct cardiac compression, causing better hemodynamics during closed-chest CPR in a canine model (Babbs et al., 1982a). Third, the compliant infant chest should permit more direct cardiac compression, as shown in a closed-chest CPR model in piglets, in which excellent blood flows are produced as compared with most adult models (Schleien et al., 1986). Transesophageal echocardiography studies have demonstrated the closing of AV valves during the compression phase of CPR in humans (Higano et al., 1990; Kuhn et al., 1991). These findings support the occurrence of cardiac compression during conventional CPR, suggesting that both mechanisms of blood flow may occur during CPR. As will be seen in a later section, varying the method of CPR may alter the contribution of each mechanism.

Efficacy of Blood Flow During Cardiopulmonary Resuscitation

The level of blood flow to vital organs produced by conventional closed-chest CPR without pharmacologic support (basic life-support models) is disappointingly low. The range of cerebral blood flow in dogs during CPR is 3% to 14% of levels before cardiac arrest (Bircher and Safar, 1981; Koehler et al., 1983; Koehler and Michael, 1985; Luce et al., 1984; Jackson et al., 1984). CPPs are also low, at 4% to 24% of levels before cardiac arrest in animals and only 21 mm Hg in humans (Bircher et al., 1981; Koehler et al., 1983; Luce et al., 1984; Goetting et al., 1991). Myocardial blood flows in this basic CPR mode are also discouragingly low, at 1% to 15% of pre-cardiac arrest levels in dogs (Chandra et al., 1981a; Voorhees et al., 1983; Koehler et al., 1985; Halperin et al., 1986a; Shaffner et al., 1990). MPP correlates with myocardial blood flow in a roughly one-to-one relationship (myocardial blood flow in mL/min per 100 g to MPP in mm Hg) (Voorhees et al., 1983; Ralston et al., 1984). Several factors affect cerebral and myocardial blood flow during CPR, and these disappointing results in basic life-support models can be improved with the addition of pharmacologic support.

Physiologic thresholds for minimal vital organ blood flow during CPR have been described. The inability to maintain blood flow above these thresholds during CPR results in organ malfunction. A myocardial blood flow of 20 mL/min per 100 g or greater is necessary for successful defibrillation in dogs (Guerci et al., 1985; Sanders et al., 1985a). A cerebral blood flow of greater than 15 to 20 mL/min per 100 g is necessary to maintain normal electrical activity during CPR (Michael et al., 1984). Models of basic life support often do not achieve these thresholds; the addition of advanced life support measures, such as epinephrine administration, is associated with blood flow levels above these thresholds.

Maintenance of Circulation During Cardiopulmonary Resuscitation

The goal of CPR is to improve a no-flow or low-flow state by restoring and maintaining the best flow possible to the brain and heart until an adequate spontaneous circulation can be recovered. Factors related to the patient, the ventilation technique, and the compression technique contribute to restoration and maintenance of blood flow during CPR. The pediatric anesthesiologist should understand how these factors affect restoration and maintenance of blood flow during an intraoperative arrest.

Patient-Related Factors

Patient-related factors that influence the effectiveness of CPR to maintain circulation include the victim’s age, the duration of CPR, the duration of preresuscitation ischemia, ICP, and volume status.

Based on limited data, young age appears to be associated with higher cerebral blood flow during closed-chest CPR. A piglet model produced substantially higher cerebral blood flow (50% of pre-arrest levels) and slightly higher myocardial flows (17% of pre-arrest levels) than those reported for adult models (Schleien et al., 1986). Two studies in slightly older pigs yielded mixed results (Brown et al., 1987b; Sharff et al., 1984): cerebral blood flow in the first was markedly higher than in adult models during closed-chest CPR, but myocardial flows in neither study differed from adult models. No human data exist comparing blood flows at different ages during CPR.

Age-related physical factors that affect the blood flow produced during CPR include chest wall compliance and chest wall deformability. Chest wall compliance impacts both the ability to produce anteroposterior displacement and to directly compress the heart. Young children have increased chest wall compliance that facilitates the achievement of adequate compression depth and increases the chance of direct cardiac compression, either of which can result in better blood-flow production by chest compressions. These benefits of the more compliant infant chest may account for high flows that resemble those produced by open-chest cardiac massage in a piglet model (Schleien et al., 1986). Chest wall deformability is another factor that relates to the ability to maintain flows during prolonged periods of chest compressions. Chest deformation occurs as CPR becomes prolonged. The chest assumes a flatter shape as compressions continue, producing larger decreases in cross-sectional area at the same displacement. Progressive deformation may be beneficial if it leads to more direct cardiac compression. Unfortunately, too much deformation may result in loss of recoil of the chest wall during release of compression. Decreased chest recoil with progressive deformation limits displacement and produces less effective compression and less venous return during release of compression.

A model of conventional CPR in piglets shows a progressive decrease in the effectiveness of prolonged chest compressions to produce blood flow (Schleien et al., 1986; Dean et al., 1991). The permanent deformation of the chest in this model approaches 30% of the original anteroposterior diameter. An attempt to limit deformation by increasing intrathoracic pressure during compression with simultaneous-ventilation CPR produced no improvement in either the amount of deformation or the time to deterioration of flow (Berkowitz et al., 1989). Investigators tested a third mode of infant-animal CPR, using a vest to deliver compressions in an attempt to limit deformation. The vest distributes compression force diffusely around the thorax and greatly decreases permanent deformation (3% vs. 30%) (Schleien et al., 1986; Shaffner et al., 1990). Unfortunately, the deterioration of blood flow with time still occurs; it appears unrelated to the amount of deformation in this model and more likely related to the duration of prolonged CPR. There has been no direct comparison of adult and pediatric CPR in humans, but the increased compliance and deformability of the infant's chest make it likely that CPR is more effective in children than in adults (as seen in animal models).

An increased duration of CPR has a negative effect on cerebral blood flow and appears most detrimental in infant preparations (Schleien et al., 1986; Sharff et al., 1984). The length of the no-flow period before CPR begins also reduces the cerebral blood flow produced by CPR (Shaffner et al., 1999; Lee et al., 1984). Supratentorial brain blood flow during CPR is reduced more than brain-stem flow as the preceding ischemic interval increases (Shaffner et al., 1998, 1999). The cause of these detrimental effects on cerebral blood flow is unclear. Tissue hypoxia with loss of vascular tone that eventually becomes unresponsive to vasoconstrictors, pulmonary edema, capillary leak, and (with prolonged CPR) chest wall deformity are likely contributors. Clearly, a short ischemic period and quick resuscitation improve the eventual outcome.

ICP is another patient-related factor that affects the circulation produced during CPR. ICP can represent the downstream pressure for cerebral blood flow, and if elevated it can inhibit cerebral perfusion. Increases in intrathoracic pressure with closed-chest CPR cause ICP increases (Rogers et al., 1979). This relationship is linear: about one third of the increase in intrathoracic pressure generated by chest compression is transmitted to the ICP (Guerci et al., 1985). The carotid arteries and jugular veins do not appear to be involved in this transmission of intrathoracic pressure to the intracranial contents; the transmission can be partially blocked by occluding cerebrospinal fluid or vertebral vein flow (Guerci et al., 1985). The rise in ICP with chest compressions becomes more significant in the setting of baseline increased ICP (about two thirds of the intrathoracic pressure increase is transmitted to the ICP). The efficacy of CPR in perfusing the brain deteriorates markedly in the face of elevated ICP. When increased ICP is suspected (e.g., in a child with hydrocephalus or head trauma), the ICP should be lowered early in the resuscitation (e.g., by tapping a shunt or draining a hematoma) to increase the effectiveness of chest compressions in perfusing the brain.
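The linear transmission described above can be sketched numerically. This is an illustrative calculation only (the function name and example pressure values are assumptions, not clinical figures); it simply applies the reported transmission fractions of about one third at normal baseline ICP and about two thirds with baseline intracranial hypertension.

```python
def icp_rise_mmhg(delta_itp_mmhg: float, elevated_baseline_icp: bool = False) -> float:
    """Approximate ICP rise from a rise in intrathoracic pressure,
    using the transmission fractions reported by Guerci et al. (1985):
    roughly one third normally, roughly two thirds with baseline
    intracranial hypertension."""
    fraction = 2.0 / 3.0 if elevated_baseline_icp else 1.0 / 3.0
    return fraction * delta_itp_mmhg

# A hypothetical 30 mm Hg intrathoracic pressure rise during compression:
print(icp_rise_mmhg(30.0))        # -> 10.0 mm Hg transmitted to ICP
print(icp_rise_mmhg(30.0, True))  # -> 20.0 mm Hg with elevated baseline ICP
```

The doubling of transmitted pressure with elevated baseline ICP illustrates why lowering ICP early in the resuscitation improves the cerebral perfusion produced by compressions.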

Volume status, specifically hypovolemia, is another patient-related factor that can affect the effectiveness of chest compressions. Few data address the impact of volume status on blood flow during chest compression. Animal models include the administration of fluid (30 mL/kg, or to a right atrial pressure of 6 to 8 mm Hg) before inducing cardiac arrest in fasted animals to improve the effectiveness of CPR (Sanders et al., 1990; Eleff et al., 1995).

Compression Rate and Duty Cycle

Compression rate is the number of compression-relaxation cycles per minute. Duty cycle is the ratio of the duration of the compression phase to the entire compression-relaxation cycle, expressed as a percentage. For example, at the recommended rate of 100 compressions per minute, each compression-relaxation cycle lasts 0.6 seconds (60 seconds ÷ 100 compressions); a 0.36-second compression time then produces a 60% duty cycle (0.36 sec/0.6 sec = 60%). The impact of duty cycle differs between the two mechanisms of blood flow (Table 38-5). In 1986, the American Heart Association Guidelines for CPR and Emergency Cardiac Care recommended increasing the rate of chest compressions from 60 to 100 per minute. This change represented a compromise between advocates of the thoracic-pump mechanism and those of the cardiac-pump mechanism (Feneley et al., 1988). The mechanics of these two theories of blood flow differ, but a faster compression rate could augment both.
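The worked example above reduces to a one-line calculation; the following sketch (function name illustrative) makes the relationship between rate, compression time, and duty cycle explicit:

```python
def duty_cycle(rate_per_min: float, compression_time_s: float) -> float:
    """Fraction of each compression-relaxation cycle spent in compression."""
    cycle_time_s = 60.0 / rate_per_min   # at 100/min, each cycle is 0.6 s
    return compression_time_s / cycle_time_s

# A 0.36-second compression at 100 compressions/min gives a 60% duty cycle.
print(f"{duty_cycle(100, 0.36):.0%}")  # -> 60%
```

The same function shows why holding a 50% duty cycle is easier at 100/min (0.3-second compressions) than at 60/min (0.5-second compressions), the rationale behind the 1986 rate change.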

In the cardiac-pump mechanism of blood flow during CPR, direct cardiac compression generates blood flow, and the force of compression determines the stroke volume per compression. Prolonging the compression (increasing the duty cycle) beyond the time necessary for full ventricular ejection fails to produce any additional increase in stroke volume in this model. Also, increasing rate of compressions increases cardiac output, because a fixed ventricular blood volume ejects with each cardiac compression. Therefore, in the cardiac-pump mechanism, blood flow is rate sensitive and duty-cycle insensitive. In the thoracic-pump mechanism, the reservoir of blood to be ejected is the large capacitance of the intrathoracic vasculature. With the thoracic pump mechanism, increasing either force of compression or duty cycle enhances flow by emptying more of the large intrathoracic capacity. Changes in compression rate have less effect on flow over a wide range of rates (Halperin et al., 1986a). Blood flow in the thoracic pump mechanism is generally duty-cycle sensitive but rate insensitive. With an increase in duty cycle, the percentage of time in compression is prolonged, but time for relaxation becomes decreased and venous return may become inhibited. At slow compression rates, the ability to hold a compression to prolong the duty cycle becomes physically demanding. The increased ability of a rescuer to produce a 50% duty cycle at a rate of 100 (compared with 60) compressions per minute is the reason behind the compression rate change recommendation in the 1986 American Heart Association guidelines for CPR.

The no-flow fraction and the number of compressions actually delivered are important factors in the continued recommendation of a rate of 100 compressions per minute. The NFF is the percentage of time that compressions are interrupted. The interruption of compressions not only produces no-flow time but also reduces the effectiveness of the initial compressions once chest compressions resume. The NFF in CPR performed by bystanders during out-of-hospital cardiac arrest (OHCA) has been reported to be 48% (Wik et al., 2005). For IHCA, an NFF of 24% has been reported with a sensing monitor and defibrillator (Abella et al., 2005). Reducing the pauses for ventilations by moving from a compression/ventilation ratio of 15:2 to 30:2 in a bystander model of CPR on a manikin reduced the NFF from 33% to 22% (Betz et al., 2008). Tracheal intubation in OHCA reduced the NFF from 61% to 41% (p = 0.001) (Kramer-Johansen et al., 2006). Preshock pause also contributes to the NFF. Automated external defibrillators (AEDs) create a variable preshock pause of 5 to 28 seconds. A 5-second increase in preshock pause was associated with a decrease in shock success (p = 0.02), and shock success fell from 94% when the pause lasted fewer than 10 seconds to only 38% when it lasted longer than 30 seconds (Edelson et al., 2006). Tracheal intubation eliminates interruptions of compressions for delivery of ventilation, and use of a manual defibrillator eliminates the pause for AED analysis. The goal is to have only a 10-second interruption every 2 minutes (120 seconds) for compressor change and rhythm analysis, resulting in an 8% NFF.
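The closing arithmetic can be checked with a short sketch (function name illustrative): a single 10-second interruption in each 2-minute cycle yields the roughly 8% NFF cited above.

```python
def no_flow_fraction(pause_s: float, period_s: float) -> float:
    """NFF: fraction of CPR time during which compressions (and flow) are paused."""
    return pause_s / period_s

# One 10-second interruption per 2-minute (120-second) cycle:
print(f"{no_flow_fraction(10, 120):.0%}")  # -> 8%
```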

The number of compressions delivered per minute may differ from the compression rate. The compressor (see the later section on teamwork for roles assumed during intraoperative CPR) may be delivering compressions at a rate of 0.6 seconds per cycle (100 compressions per minute), but if each minute includes 10 seconds of held compressions, the number of compressions delivered falls to 83 per minute. In a 1-minute segment with 15 seconds of held compressions, the same compression rate (0.6 seconds per cycle) yields only 75 compressions delivered to the patient. In an OHCA study of adults, a compression rate of 121 per minute resulted in only 64 compressions actually delivered per minute (Wik et al., 2005). Delivery of 80 chest compressions per minute has been correlated with successful resuscitation in an animal model (Yu et al., 2002). Resuscitation team members in the roles of compressor and leader need to be aware of how many compressions per minute are actually occurring and minimize interruptions to keep the delivered rate above 80 per minute. In an intubated patient, compressing at a rate of 100 per minute and stopping only for 10 seconds every 2 minutes to change compressors and perform pulse checks and rhythm analysis results in 92 compressions delivered per minute.
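The figures in this paragraph all follow from one formula. A hedged sketch (function name assumed, not from the source) reproduces each of them:

```python
def compressions_delivered_per_min(rate_per_min: float, pause_s: float,
                                   window_s: float = 60.0) -> float:
    """Average compressions actually delivered per minute, when pause_s
    seconds of every window_s-second window are spent with compressions held."""
    cycle_s = 60.0 / rate_per_min               # 0.6 s per cycle at 100/min
    per_window = (window_s - pause_s) / cycle_s  # compressions fit in the window
    return per_window * 60.0 / window_s          # normalize to per-minute

print(round(compressions_delivered_per_min(100, 10)))       # -> 83
print(round(compressions_delivered_per_min(100, 15)))       # -> 75
print(round(compressions_delivered_per_min(100, 10, 120)))  # -> 92
```

The last case shows why one 10-second pause every 2 minutes, rather than every minute, keeps the delivered rate comfortably above the 80-per-minute threshold associated with successful resuscitation.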

Compression force is the pressure and acceleration applied to the chest. Accelerometers are available to monitor and provide feedback about the compression force applied with each compression, but they are not typically available for intraoperative resuscitation. Compression depth is the amount of anterior-posterior displacement produced by a compression and is related to the compression force applied and the compliance of the chest wall. The recommended compression depth is 38 to 50 mm (1.5 to 2 in) for adult patients and one third to one half of the anterior-posterior chest diameter for children and infants. The adult literature indicates that a depth of 38 mm is often not achieved during resuscitation: compression depths less than 38 mm occurred in 37% of compressions during IHCA in adults (mean depth 43 mm for all compressions) and in 62% of compressions during OHCA (mean depth 34 mm) (Abella et al., 2005; Wik et al., 2005). In a study of the importance of adequate compression depth to the success of shocks delivered during IHCA, a 5-mm increase in compression depth improved first-shock success (p = 0.028) (Edelson et al., 2006). Compression depth in a pediatric manikin model was 14 mm with the two-thumb technique vs. 9 mm with the two-finger technique (p < 0.001), indicating that the two-thumb technique achieves greater depth of compression (Udassi et al., 2009). In the operating room, the leader and recorder can assess the depth of compressions provided by the compressor and remind the compressor to achieve the suggested depth. Improvement in Etco2 should be noted when compression depth is increased, with the goal of maximizing Etco2 levels, which should correlate with blood flow through the lungs and vital organs.

Full recoil of the chest, with avoidance of any pressure during the release of compression (no leaning on the chest), is a key element of effective chest compressions. Native chest recoil increases negative intrathoracic pressure, which augments venous return and blood ejection with the subsequent compression. In an animal model, incomplete recoil during active compression-decompression CPR increased intrathoracic pressure and reduced systemic arterial pressure, MPP, and CPP. The decrease in cerebral perfusion was related to the decrease in systemic arterial pressure rather than to an increase in ICP (Yannopoulos et al., 2005b). In humans, the effect of incomplete recoil on intrathoracic pressure can be similar to that of excessive rates or durations of ventilation and is likely to result in less effective CPR because of poor venous return (Aufderheide and Lurie, 2004). In a study of pediatric IHCA in which a feedback device alerted the compressor to leaning, leaning was present in 97% of nonfeedback compressions and 89% of feedback compressions when defined as more than 0.5 kg of force applied to the chest, and in 83% of nonfeedback compressions and 71% of feedback compressions when defined as more than 2 mm of depth applied to the chest (Niles et al., 2009). It is notable that although the feedback device lowered the rate of leaning, the majority of compressions still exhibited this complication despite feedback. The importance of this concept, that venous return and blood flow depend on chest recoil, has led to the development of alternative methods of providing compressions or ventilations to improve chest recoil during release of compression (active decompression CPR and other techniques are discussed in a later section).
Prevention of leaning on the chest during release of compression is difficult and may require the use of both visual and audio feedback devices and alternative methods of CPR. A team approach can be tried during an intraoperative cardiac arrest with the recorder and leader assessing the compressor and advising if full recoil appears to be inhibited by leaning on the chest between compressions.

Distribution of Blood Flow During Cardiopulmonary Resuscitation

Overall blood flow to tissues is decreased during CPR as compared with the normal physiologic state. A redistribution of blood flow during CPR favors perfusion to the heart and brain. This redistribution toward vital organs should enhance outcome. Maintenance of myocardial blood flow during CPR is necessary for ROSC, and maintenance of cerebral blood flow determines quality of neurologic outcome.

Distribution of blood flow to both the heart and brain during CPR is influenced by the development of regional gradients. Distribution of blood flow to the brain depends on development of three regional gradients: the intrathoracic-suprathoracic gradient, the intracranial-extracranial gradient, and the caudal-rostral gradient. The intrathoracic-suprathoracic gradient provides flow of oxygenated blood from the chest to the upper extremities and head. Either venous collapse secondary to elevated intrathoracic pressure or closure of anatomic valves in the jugular system prevents the transmission of intrathoracic pressure to the suprathoracic venous system (Rudikoff et al., 1980; Niemann et al., 1981; Fisher et al., 1982). When CPR is effective, arterial collapse does not occur and elevated intrathoracic pressure results in a gradient that promotes suprathoracic blood flow. The intracranial-extracranial gradient directs blood to the brain away from extracranial suprathoracic vessels and toward intracranial vessels. α-Adrenergic agonists constrict extracranial vessels but have little effect on intracranial vessels, resulting in increased intracranial blood flow. Use of the vasoconstrictor epinephrine increases intracranial blood flow while decreasing flow in the extracranial structures of skin, muscle, and tongue (Schleien et al., 1986). The caudal-rostral gradient occurs within intracranial vessels. The relatively low-flow state of CPR seems to increase the distribution of flow to caudal areas of the brain. Ischemia preceding CPR significantly increases the distribution of flow to these areas (Michael et al., 1984; Shaffner et al., 1998, 1999). This pattern of caudal redistribution of flow also occurs in other models of global ischemia and provides preferential perfusion of the brain stem (Jackson et al., 1981).
Although brain-stem resuscitation is necessary for survival, this propensity for sparing of caudal circulation after either prolonged ischemia or prolonged CPR raises the concern for producing a victim who survives with only brain-stem function.

Myocardial blood flow does not have the advantage of a large extrathoracic pressure gradient of the kind that augments cerebral flow. The thoracic pump generates equal pressure increases in all intrathoracic structures, and this lack of a gradient can result in poor myocardial blood flow during external chest compressions. Several studies have shown much lower blood flow to the myocardium than to the cerebrum during closed-chest CPR (Ditchey et al., 1982; Michael et al., 1984; Schleien et al., 1986). The type of CPR influences the production of myocardial blood flow: methods that are more likely to cause direct cardiac compression, such as high-impulse CPR, result in increased myocardial blood flow (Ditchey et al., 1982; Maier et al., 1984). Depending on the method, myocardial blood flow may be present only during relaxation of chest compression (correlating with a diastolic pressure) or may occur during compression (correlating with a systolic pressure) (Cohen et al., 1982; Maier et al., 1984; Michael et al., 1984; Schleien et al., 1986). Regional flow within the heart also changes during CPR, with a shift in the ratio of subendocardial to subepicardial blood flow from the normal 1.5:1 to 0.8:1 (Schleien et al., 1986). This ratio reverts to normal with epinephrine administration.

Conventional Cardiopulmonary Resuscitation

Conventional CPR includes closed-chest compressions delivered manually with ventilations interposed after every fifth, fifteenth, or thirtieth compression (see Table 38-4 for basic life support procedures). This method of CPR can be delivered in any setting without additional equipment and with a minimum of training. No large randomized study exists to demonstrate the superiority of any alternative method of CPR over conventional CPR.

Rescuer fatigue is a major problem with manual CPR in the field. Individual variation among rescuers performing manual CPR is another problem both in the field and in the laboratory. Mechanical devices are available to deliver chest compressions to prevent fatigue and to standardize compression delivery. Mechanical devices are presently limited to adult CPR and are not recommended for children (AHA, 2006a). The overall low efficacy of conventional CPR has led to investigations of multiple CPR modalities. The methods usually reflect attempts to enhance the contribution of the thoracic pump or cardiac pump to blood flow during CPR (Table 38-5). For example, the use of both hands to encircle the chest of an infant while using the thumbs to apply sternal compression attempts to both raise intrathoracic pressure and compress the heart (Todres and Rogers, 1975; David, 1988). This two-thumb encircling technique of CPR generates higher blood pressures and is recommended over the two-finger technique for infants (Dorfsman et al., 2000).

Blood flow to other organs during CPR is usually reduced compared with flow to the brain and heart. The lack of valves in infrathoracic veins causes retrograde transmission of venous pressure and decreases the gradient for blood flow below the diaphragm in animals (Brown et al., 1987b). Regional blood flows to infrathoracic organs (e.g., small intestine, pancreas, liver, kidneys, and spleen) during CPR are usually less than 20% of prearrest rates and often close to zero (Koehler et al., 1983; Voorhees et al., 1983; Michael et al., 1984; Sharff et al., 1984). The addition of abdominal compressions does not alter infrathoracic organ blood flow (Koehler et al., 1983; Voorhees et al., 1983). Administration of epinephrine during closed-chest CPR almost eliminates flow to the subdiaphragmatic organs, with the exception of the adrenal glands (Ralston et al., 1984). Few data are available regarding blood flow to the lungs during CPR. Pulmonary blood flow occurs primarily at times of low intrathoracic pressure during closed-chest CPR (Cohen et al., 1982). High extrathoracic venous pressure builds during compression and results in pulmonary filling during relaxation as intrathoracic pressure falls. Resuscitation methods that lower intrathoracic pressure may augment pulmonary vascular filling, whereas leaning on the chest during relaxation of compression and maintenance of increased ventilation pressures may prevent the fall in intrathoracic pressure between chest compressions and decrease pulmonary venous return and blood flow.

Alternative Methods of Cardiopulmonary Resuscitation

Simultaneous Compression-Ventilation Cardiopulmonary Resuscitation

Simultaneous compression-ventilation CPR (SCV-CPR) is a technique designed to augment conventional CPR by increasing the contribution of the thoracic pump mechanism to blood flow. Delivering a ventilation simultaneously with every compression (instead of interposed after every fifth compression) adds to intrathoracic pressure and potentially augments the blood flow produced by conventional chest compressions. This method is thought to increase the perfusion gradient to the brain but has little effect on the myocardial perfusion gradient. Animal studies suggest that SCV-CPR increases carotid blood flow compared with conventional CPR and show an advantage for SCV-CPR in large canine models (Koehler et al., 1983; Luce et al., 1983). No advantage is seen over conventional CPR in infant pigs and small dogs, perhaps because the compliant chest of small animals already allows direct cardiac compression and high intravascular pressures with conventional CPR (Babbs et al., 1982a, 1982b; Sanders et al., 1982; Schleien et al., 1986; Dean et al., 1987, 1990; Berkowitz et al., 1989). Human studies comparing SCV-CPR with conventional CPR show minimal improvement in, or a detrimental effect on, the coronary perfusion pressure (Harris et al., 1967; Martin et al., 1986). Survival is worse in both animals and humans when SCV-CPR is compared with conventional CPR (Sanders et al., 1982; Krischer et al., 1989). No study has shown increased survival with this CPR technique despite the potential for increased brain perfusion.

Intraoperative arrest should be managed with endotracheal intubation, and compressions should be delivered at a 10:1 ratio with ventilations. Compressions do not have to be held for ventilations once the patient is intubated. The delivery of a ventilation breath that occurs simultaneously with a chest compression mimics SCV-CPR and may have some benefit on cerebral perfusion but is unlikely to help myocardial perfusion. The team members in the roles of airway, monitor, and leader should be careful that the rate and duration of ventilations do not significantly increase the percentage of time with increased intrathoracic pressure and inhibit venous return.

High-Impulse Cardiopulmonary Resuscitation

High-impulse CPR involves the application of greater-than-usual force during chest compression. This increase in force can take the form of greater mass, greater velocity, or both. It is hypothesized that the larger impulses result in greater chest deflection, causing more contact with the heart (Kernstine et al., 1982). Direct cardiac compression is more likely with this form of closed-chest CPR. High-impulse CPR can generate myocardial blood flows as high as 60% to 75% of values before cardiac arrest (Maier et al., 1984). In humans, high-impulse CPR generates increased aortic pressures (Swenson et al., 1988). An outcome study in dogs compared high-impulse CPR with conventional closed-chest CPR and found no significant improvement in resuscitation, survival, or neurologic outcome (Kern et al., 1986). The same rationale applies to intraoperative cardiac arrest: greater force increases the likelihood of direct cardiac compression and thus of higher myocardial and cerebral blood flow, but it also carries the potential for increased chest deformation and trauma.

Negative Intrathoracic Pressure Methods

Active compression-decompression CPR (ACD-CPR) requires a device that attaches to the chest and allows the rescuer to pull up on the sternum and decompress the thorax between compressions. The theoretical advantages of decompressing the chest between compressions include restoring chest wall shape and creating a negative intrathoracic pressure that pulls gas into the lungs and pulls blood into intrathoracic vessels. These characteristics allow more effect from the subsequent compression, because more intrathoracic pressure can be generated and more blood is available to be ejected. Preliminary studies in humans showed that after advanced cardiac life support failed, ACD-CPR was more effective than standard CPR at improving hemodynamic variables (Cohen et al., 1992). After IHCA, more patients had ROSC, survival at 24 hours, and a better Glasgow coma score when they received ACD-CPR than when standard CPR was given (Cohen et al., 1993). A larger study of IHCA victims failed to show any difference in resuscitation or outcomes between patients receiving ACD-CPR or standard CPR (Stiell et al., 1996). Several large studies of patients who suffered an OHCA did not find a difference in effectiveness of ACD-CPR or standard CPR for improving ROSC incidence, hospital admission, hospital discharge, or short-term neurologic outcome (Lurie et al., 1994; Schwab et al., 1995; Mauer et al., 1996; Stiell et al., 1996; Nolan et al., 1998).

Complication rates were not different after ACD-CPR or standard CPR in most studies (Lurie et al., 1994; Schwab et al., 1995; Mauer et al., 1996). It is interesting that the same study that showed that ACD-CPR had more complications than standard CPR (hemoptysis and sternal dislodgment) was also one of the few large studies that found ACD-CPR more effective than standard CPR for OHCA (Plaisance et al., 1997). ACD-CPR has been combined with an airway device to increase negative intrathoracic pressure—the impedance threshold device (ITD; see the next paragraph)—and has been mechanized to allow continuous application without the need to change rescuers and ease use during transport (see the section on mechanical methods). ACD-CPR is considered an optional technique for adults, and there are no data on which to base a recommendation for children.

The ITD is a device on the ETT or face mask that impedes inflow of inspiratory gas during chest reexpansion between CPR compressions when rescuers are not actively ventilating the patient. Impedance of gas inflow promotes negative intrathoracic pressure development during chest reexpansion. This increase in the negative intrathoracic pressure facilitates, by chest recoil, blood return to the thorax before the next chest compression (Lurie et al., 2002). The use of an ITD has been shown to improve coronary perfusion pressure and vital organ blood flow with both standard and ACD-CPR in adult and pediatric animal models (Langhelle et al., 2002; Voelckel et al., 2002). Improved levels of Etco2, diastolic pressure, and coronary perfusion pressure occurred in a prospective, randomized controlled trial in adults undergoing ACD-CPR with ITD compared with ACD-CPR without ITD. A decrease in time to achieve ROSC was also seen with ACD-CPR with ITD (Plaisance et al., 2000). A prospective controlled trial comparing standard CPR without an ITD and ACD-CPR with an ITD found significantly improved short-term survival (24 hours) in adult patients in the group that had ACD-CPR and an ITD (Wolcke et al., 2003). The use of an ITD with standard CPR in an OHCA trial of adults failed to show significant improvements in outcome for ITD vs. a sham ITD except in a subgroup with pulseless electrical activity (PEA) (Aufderheide et al., 2005). A separate study showed an improvement in short-term survival for standard CPR with an ITD vs. historical controls (Thayne et al., 2005). A no-ventilation study showed hypoxemia developing in animals that received either standard CPR with ITD or ACD-CPR with ITD but not in the animals that received standard CPR alone (Herff et al., 2007).
Standard CPR with and without ITD in a ventricular-fibrillation cardiac arrest (VFCA) model in pigs showed no effect on MPP and no effect on survival in one study and worse survival with ITD in another study (Menegazzi et al., 2007; Mader et al., 2008). Further studies are needed to determine the effectiveness of the use of an ITD for pediatric resuscitation.

The intrathoracic pressure regulator (ITPR) combines an ITD with a vacuum source to maintain a negative intratracheal pressure (-10 cm H2O) during CPR while still allowing positive pressure ventilation. An ITD relies on the elastic properties of the chest (outward recoil of the thorax) and proper CPR technique (no leaning during relaxation) to allow full recoil and generation of a negative intrathoracic pressure, whereas the ITPR overcomes these limitations. In a porcine model of VFCA, an ITPR maintained negative intrathoracic pressure with ACD-CPR; the result was improved hemodynamic measurements and survival with no adverse effect on ventilation (Yannopoulos et al., 2005a, 2006). The ITPR has not been evaluated in asphyxial cardiac arrest or in a pediatric model.

Abdominal Methods

Abdominal binding and military antishock trousers (MASTs) have been used to augment closed-chest CPR. Both methods apply continuous circumferential compression below the diaphragm. Abdominal binding theoretically augments CPR in three ways: by decreasing diaphragmatic compliance, which increases intrathoracic pressure; by forcing blood out of subthoracic structures to increase the circulating blood volume (an autotransfusion effect); and by increasing the resistance of the subdiaphragmatic vasculature, which increases suprathoracic blood flow. The increases in intrathoracic pressure and blood volume lead to increases in aortic pressure and carotid blood flow in both animals and humans (Chandra et al., 1981b; Lilja et al., 1981; Lee et al., 1981; Koehler et al., 1983; Niemann et al., 1984). Unfortunately, as the aortic pressure increases, the right atrial diastolic pressure increases to a greater extent, resulting in a decrease in the coronary perfusion pressure (Sanders et al., 1982; Niemann et al., 1984). This deterioration of coronary perfusion pressure coincides with decreased myocardial blood flow (Niemann et al., 1984). The technique also decreases CPP because transmission of the intrathoracic pressure to the intracranial vault raises the ICP (Guerci et al., 1985). Use of abdominal binders or MASTs to augment CPR does not increase survival in clinical studies (Sanders et al., 1982; Mahoney and Mirick, 1983; Niemann et al., 1990). Liver laceration from CPR performed with an abdominal binder has been reported but is no more common than with conventional CPR (Harris et al., 1967; Redding, 1971; Rudikoff et al., 1980; Mahoney and Mirick, 1983; Niemann et al., 1984). A recent study using a contoured abdominal cuff in a VFCA model in pigs found that at pressures over 200 mm Hg, abdominal binding increased MPP over standard CPR, and its authors urge reconsideration of this technique (Lottes et al., 2007).
The potential benefits over vasoconstrictor medications are that there is no need for access to the circulation, and a brisk withdrawal is possible when spontaneous circulation returns, avoiding the postresuscitation issues typical of vasoconstrictor administration. There is a lack of data in children to support the use of these techniques clinically during CPR, and potential for complications would discourage their application.
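The coronary perfusion pressure argument above (aortic "relaxation" pressure minus right atrial pressure) can be illustrated with a toy calculation; the pressure values below are hypothetical, chosen only to show the direction of the effect described:

```python
def coronary_perfusion_pressure(aortic_diastolic, right_atrial_diastolic):
    """Coronary perfusion pressure (mm Hg) during CPR, estimated as
    aortic relaxation (diastolic) pressure minus right atrial
    diastolic pressure."""
    return aortic_diastolic - right_atrial_diastolic

# Hypothetical values: abdominal binding raises aortic diastolic pressure,
# but right atrial diastolic pressure rises even more, so CPP falls.
baseline = coronary_perfusion_pressure(25, 10)      # 15 mm Hg
with_binding = coronary_perfusion_pressure(32, 22)  # 10 mm Hg
assert with_binding < baseline
```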

Only-abdominal-compression CPR (OAC-CPR) is a newer method that uses rhythmic compressions of the abdomen alone during resuscitation to avoid the rib fractures that occur with standard chest compressions. In a VFCA model in pigs, OAC-CPR produced 60% greater myocardial perfusion than standard CPR (Geddes et al., 2007). Further evaluation is needed of the potential benefits of this technique in situations in which chest compressions need to be held, such as in the postsurgical cardiac patient when the chest is reopened during resuscitative efforts. This technique could potentially provide some brain and heart perfusion during the no-flow state while initial or subsequent sternotomy takes place (Adam et al., 2009).

Combined Abdominal and Chest Compression Methods

Interposed abdominal compression CPR (IAC-CPR) is the delivery of an abdominal compression during the relaxation phase of chest compression. IAC-CPR may augment conventional CPR by increasing venous return to the chest during the abdominal compression-chest relaxation phase and “priming the pump”; increasing intrathoracic pressure during abdominal compression, adding to the duty cycle of the chest compression; and sending blood retrograde to the carotids or coronaries because of abdominal compression on the aorta (Ralston et al., 1982; Voorhees et al., 1983; Einagle et al., 1988). Several studies have shown hemodynamic improvements secondary to IAC-CPR. In animals, cardiac output and cerebral and coronary blood flow improved when IAC-CPR was compared with conventional CPR in adult models but not in an infant swine model (Ralston et al., 1982; Voorhees et al., 1983; Walker et al., 1984; Einagle et al., 1988; Eberle et al., 1990). Studies in humans have also shown an increase in aortic pressure and coronary perfusion pressure during IAC-CPR compared with conventional CPR (Berryman and Phillips, 1984; Howard et al., 1984, 1987; Ward et al., 1989; Barranco et al., 1990; Chandra et al., 1990). Although one study reports a 10% aspiration rate, most report no aspiration or liver lacerations (Voorhees et al., 1983; Berryman and Phillips, 1984; Walker et al., 1984; Mateer et al., 1985; Einagle et al., 1988; Ward et al., 1989; Barranco et al., 1990; Sack et al., 1992). Clinically, IAC-CPR requires extra manpower or equipment and remains experimental. Outcome studies have mixed results, showing no increase in survival with OHCAs but increased survival with IHCAs (Mateer et al., 1985; Sack et al., 1992). Whereas IAC-CPR may serve as an alternative technique for in-hospital CPR in adults, a lack of data prevents a recommendation for the use of IAC-CPR in children.

Phased chest abdominal compression-decompression CPR (PCACD-CPR) is another manual method that combines chest and abdominal compressions (Tang et al., 1997). PCACD-CPR resembles a combination of ACD-CPR and IAC-CPR. It requires a device (Lifestick) that attaches to both the abdomen and chest and alternately compresses and reexpands both structures. It offers the theoretic advantages of both methods, because the chest shape is restored and blood and gas are pulled into the thorax during active chest decompression and blood flow is augmented because of compression and active decompression of the abdomen. MPP, ROSC, short-term survival, and neurologic outcome were improved in a porcine model of VFCA with resuscitation using PCACD-CPR (Tang et al., 1997). The use of the Lifestick proved safe and feasible in adults with cardiac arrest in the emergency room (Havel et al., 2008). Further information is required before these methods can be recommended for pediatric patients.

Mechanical Methods

Mechanical methods of producing chest compressions continue to attract interest, with a focus on minimizing interruption of compressions (to change compressors who fatigue) and reducing the NFF. Mechanical compression devices would not require pauses every 2 minutes to replace the compressor, would provide consistent compression quality, and would improve the quality of compressions during patient transport. Compressions during radiation exposure for interventional procedures would also be less hazardous. Several mechanical devices are currently in use for CPR.

Vest CPR uses an inflatable bladder that is wrapped circumferentially around the chest and is cyclically inflated. This method of delivering chest compressions by diffuse application of pressure has two unique characteristics. First, the increase in intrathoracic pressure occurs with only minimal change in chest dimensions, making direct cardiac compression unlikely (an almost pure thoracic-pump technique). Second, the diffuse distribution of pressure decreases the likelihood of trauma. Vest CPR in dogs improves cerebral and myocardial blood flows as well as survival when compared with conventional CPR (Luce et al., 1983; Criley et al., 1986; Halperin et al., 1986a, 1986b). In a pediatric model of vest CPR, only 3% permanent chest deformation occurred after 50 minutes of vest CPR compared with almost 30% deformation produced by an equivalent period of conventional CPR (Schleien et al., 1986; Shaffner et al., 1990). In humans, vest CPR increases aortic systolic pressure but does not significantly increase diastolic pressure compared with conventional CPR (Swenson et al., 1988). In a preliminary study of vest CPR in victims of OHCA, increased aortic and coronary perfusion pressure were demonstrated, and there was a trend toward a greater ROSC compared with standard CPR (Halperin et al., 1993). Clinically, use of vest CPR depends on sophisticated equipment, and the technique remains experimental at this time.

Load-distributing band CPR (LDB-CPR) is a modification of vest CPR that uses an automated device to provide compressions with a self-adjusting band across the anterior chest. Less equipment is required than with the vest, making this technique better suited for OHCA. Both the vest and LDB-CPR provide compressions over a broader area of the chest than standard CPR, reducing the potential for trauma during compressions. An initial study of LDB-CPR for adult OHCA showed an increase in ROSC vs. historical controls receiving standard CPR (Casner et al., 2005). LDB-CPR also showed improved survival to discharge vs. standard CPR with historical controls for adult OHCA (Ong et al., 2006). However, a randomized study of LDB-CPR vs. standard CPR was halted because of worse neurologic outcome and a trend toward worse survival in the LDB-CPR group (Hallstrom et al., 2006).

The Lund University Cardiopulmonary Assist System (LUCAS) is a mechanical device developed in Sweden to provide active compression-decompression CPR. In adult OHCA with LUCAS for CPR, the 30-day survival was 25% when it was applied within 15 minutes for witnessed cardiac arrests and 0% when applied after 15 minutes of cardiac arrest (Steen et al., 2005). The incidence and patterns of injury with the LUCAS are similar to those with manual CPR (Smekal et al., 2009). A report of five patients with IHCA showed that LUCAS ensured effective uninterrupted compressions during transport and during procedures in the cardiac catheterization laboratory (Bonnemeier et al., 2009).

Periodic acceleration CPR (pGz-CPR) is a method that moves the supine body rapidly in a headward-footward pattern, producing both circulation and ventilation with a decreased risk of rib fractures compared with standard CPR. In a VFCA model in pigs, pGz-CPR produced superior neurologic outcome compared with standard CPR (Adams et al., 2003). In an asphyxial cardiac arrest model in pigs, pGz-CPR produced outcomes equivalent to standard CPR with no broken ribs, whereas 25% of animals receiving standard CPR sustained rib fractures (Adams et al., 2008).

In summary, multiple mechanical CPR devices are available and have the potential to provide continuous, high-quality compressions in situations that make manual compressions difficult (transport) or risky (fluoroscopy). None has been studied sufficiently to warrant a recommendation for intraoperative use in children.

Monitoring the Effectiveness of Resuscitative Efforts

The brain and heart are the organs most likely to suffer irreversible damage if resuscitation efforts do not provide adequate blood flow and oxygen delivery. Table 38-6 lists several methods that can be used during resuscitation to determine whether efforts are restoring adequate perfusion to these vital organs, perfusion sufficient to prevent neurologic injury and to allow ROSC. If resuscitation efforts are determined to be inadequate, interventions can be made to improve effectiveness and eventual patient outcome. These interventions include improving the performance of compressions (i.e., increasing depth or replacing the fatigued compressor), administering fluid to improve intravascular volume, or administering vasoconstrictors to improve vascular tone. When resuscitation efforts are determined to be ineffective and unable to be improved (as in prolonged cardiac arrest before CPR), this information aids in the decision that continued resuscitation is futile and should be stopped.

TABLE 38-6 Techniques for Monitoring Cardiopulmonary Resuscitation Effectiveness

Technique | Monitoring Device | Goal
Level of consciousness | Examination | Consciousness
Return of spontaneous circulation | Examination, arterial catheter, Etco2 | Spontaneous circulation
Pulse during compressions | Examination | Arterial pulsation
Etco2 | Quantitative Etco2 | >10 mm Hg
Arterial diastolic (relaxation) pressure | Arterial catheter | >15 mm Hg
Mixed venous saturation | Central venous catheter | >30%
Venous-arterial CO2 difference | Arterial and central catheters | Decreased difference
Amplitude of VF | Electrocardiogram | Increased amplitude
Frequency of VF | Electrocardiogram | Decreased frequency
AMSA | ECG and software for analysis | >13 mV Hz
Transthoracic impedance | AED and software for analysis | Decreased by compression

Etco2, End-tidal carbon dioxide; CO2, carbon dioxide; VF, ventricular fibrillation; AMSA, amplitude spectrum area; ECG, electrocardiogram; AED, automated external defibrillator; mm Hg, millimeters of mercury; mV, millivolts; Hz, hertz.

The level of consciousness can improve if resuscitation efforts are effective. Occasionally, a patient with a nonperfusing rhythm regains consciousness during chest compressions only to become unresponsive when compressions are held and perfusion falls; this may recur repeatedly until a perfusing rhythm is restored. Return of consciousness is evidence of highly effective resuscitative efforts, but the level of perfusion required for it is rarely achieved. Additionally, in the operating room, evaluation of the level of consciousness may be masked by anesthetic agents, so this technique is unlikely to be helpful in most intraoperative arrests.

The restoration of spontaneous circulation is another indicator of adequate resuscitative efforts. This is usually a sign that perfusion to the heart muscle is adequate to allow effective contractions. The temptation is often to hold resuscitation efforts when spontaneous ejection occurs, but spontaneous circulation may not be adequate or sustained, and continued resuscitation efforts may be required. Compressions must be continued if spontaneous circulation does not adequately perfuse vital organs.

The palpation of a pulse during chest or cardiac compressions may be a sign that significant arterial pressure is being generated. Unfortunately, the palpable pulse may represent only peak arterial pressure during compression and be accompanied by a lack of significant relaxation (diastolic equivalent) pressure necessary for coronary perfusion. Because significant coronary perfusion occurs during relaxation, a palpable peak pulse may not represent effective CPR. Additional concerns about the reliance on palpation to determine the effectiveness of resuscitation are that the palpated artery is usually next to a large vein and that retrograde venous pulsations may occur in the absence of significant arterial blood flow. There are no data on when, or if, palpation of pulsations during chest compressions correlates with ROSC or outcome.

One of the most useful ways to measure the effectiveness of chest or cardiac compressions to generate blood flow is the use of quantitative Etco2 monitoring. These monitoring devices are readily available in areas where anesthesia is administered. The detection of Etco2 during compressions demonstrates that venous blood is being moved through the lungs in sufficient quantity that CO2 is available for measurement with ventilation. The level of Etco2 increases as compressions are more effective in increasing the pulmonary blood flow and the delivery of carbon dioxide in the venous blood to the lungs.

Low levels of Etco2 generated during compressions correlate with decreased levels of blood flow and decreased likelihood of ROSC. Etco2 levels measured during CPR that are less than 10 mm Hg predict an inability to restore spontaneous circulation in adults (Callaham and Barton, 1990; Wayne et al., 1995; Levine et al., 1997). Levels of Etco2 during CPR that are greater than 15 mm Hg predict ROSC in adults and children (Sanders et al., 1989; Bhende and Thomson, 1995; Barton and Callaham, 1991). Etco2 levels lower than 10 to 15 mm Hg during CPR indicate a decreased likelihood of success and should prompt institution of methods to improve resuscitation (i.e., better compressions and fluid or vasoconstrictor administration).
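The cutoffs cited above can be summarized in a small sketch; the function name, structure, and messages are ours, intended only as an illustration of the reported thresholds and not as a clinical decision tool:

```python
def interpret_etco2_during_cpr(etco2_mm_hg: float) -> str:
    """Map an intra-CPR quantitative Etco2 value (mm Hg) to the prognostic
    thresholds reported above; illustrative only, not a clinical tool."""
    if etco2_mm_hg < 10:
        # Values <10 mm Hg predicted failure to achieve ROSC in adult studies
        return "low flow: improve compressions, consider fluid or vasoconstrictor"
    if etco2_mm_hg > 15:
        # Values >15 mm Hg predicted ROSC in adults and children
        return "effective compressions: pulmonary blood flow is being generated"
    # 10 to 15 mm Hg falls between the two reported cutoffs
    return "borderline: optimize compressions and reassess"
```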

The measurement of Etco2 during CPR has also been used to detect low levels of cardiac output during PEA, ROSC during compressions, and the presence of spontaneous circulation during CPB (Garnett et al., 1987; Barton and Callaham, 1991; Gazmuri et al., 1991).

The technique of using Etco2 measurement during CPR does not require an ETT. Etco2 levels measured during CPR with bag-mask or laryngeal mask ventilation also correlate with the likelihood of achieving ROSC (Nakatani et al., 1999). Another important consideration when using this technique is that the administration of bicarbonate to the victim causes a transient elevation in Etco2 without an elevation in blood flow, which may be misinterpreted as improved CPR. Conversely, epinephrine administration has been associated with a transient drop in Etco2 despite an increase in MPP, which may be misinterpreted as worsening CPR (Martin et al., 1990b). The cause of cardiac arrest may also influence the initial Etco2 levels during resuscitation; higher levels of Etco2 are found with asphyxial arrest than with fibrillatory arrest (Grmec et al., 2003).

Invasive monitoring may be in use at the time of a cardiac arrest and may help determine the effectiveness of resuscitative efforts. An arterial catheter is necessary to determine aortic diastolic pressure during the relaxation phase of compressions. If an arterial catheter is present, arterial diastolic pressure (relaxation pressure) represents the MPP, and levels greater than 15 mm Hg are necessary for, but do not guarantee, ROSC in adult patients (Paradis et al., 1990). A central venous catheter, if present, may be used to determine the central venous oxygen saturation during resuscitative efforts. Venous oxygen saturation correlates with the blood flow produced by resuscitation and with the likelihood of ROSC (Snyder et al., 1991; Rivers et al., 1992); patients with a mixed-venous oxygen saturation of less than 30% were unlikely to have ROSC (Rivers et al., 1992). The presence of both arterial and venous catheters allows simultaneous blood gas sampling. The venous-arterial CO2 difference is approximately 5 mm Hg during native circulation, and this difference increases significantly as perfusion falls: during hypoperfusion, tissue CO2 increases, venous CO2 increases, pulmonary blood flow falls, and ventilation removes a greater percentage of the CO2 delivered to the lungs, resulting in lower arterial CO2. As CPR is made more effective, this venous-arterial gradient narrows again.

The amplitude and frequency of VF can be determined from the electrocardiogram (ECG) or from software built into AEDs and used to assess the effectiveness of resuscitation. Typically, the VF waveform is initially coarse (high amplitude, low frequency) and deteriorates over time during ineffective CPR or prolonged cardiac arrest to fine (low amplitude, high frequency) VF. As blood flow perfusing the heart during CPR improves, VF reverts to a coarse pattern, indicating that the heart is more readily converted by shock to allow ROSC. An index combining amplitude and frequency, the amplitude spectrum area (AMSA), has been correlated with MPP, Etco2, and the likelihood of ROSC in a porcine model of VF arrest (Li et al., 2008). This index is calculated by software built into the AED, and a value of at least 13 mV Hz has been regarded as a critical threshold for defibrillation success.
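As a rough illustration of how such an index can be computed from a digitized ECG, the sketch below sums single-sided spectral amplitude weighted by frequency over a fixed analysis band. The band limits and normalization here are our assumptions for illustration; the actual algorithms embedded in commercial AEDs are proprietary.

```python
import numpy as np

def amsa(ecg_mv: np.ndarray, fs_hz: float,
         f_lo: float = 2.0, f_hi: float = 48.0) -> float:
    """Amplitude spectrum area: spectral amplitude weighted by frequency,
    summed over an analysis band. Band limits are illustrative assumptions;
    units are mV*Hz when the ECG samples are supplied in mV."""
    n = len(ecg_mv)
    amplitude = np.abs(np.fft.rfft(ecg_mv)) / n    # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs_hz)      # bin frequencies in Hz
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(amplitude[band] * freqs[band]))
```

Consistent with the waveform behavior described above, coarse (higher-amplitude) VF yields a larger AMSA than fine VF of the same frequency content.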

The transthoracic impedance (TTI) is measured between the gel-coated pads applied to the chest during CPR by the use of an AED. The TTI can be continuously measured by these devices, and impedance decreases during compression in relation to the amount of blood movement through the chest. The potential exists to use this technique to monitor the amount of blood flow generated by compressions and determine whether there is continued effectiveness or if deterioration of compressions occurs with rescuer fatigue.

Analysis of the TTI waveform using an investigational monitor or defibrillator has been used to detect ROSC in adults. Like a sudden spike in Etco2 during CPR, changes in the TTI waveforms can signify ROSC. These methods can lead to earlier detection of ROSC, potentially eliminate the need to pause compressions to check pulse to determine ROSC, and eliminate the simultaneous no-flow state if ROSC has not yet occurred (Losert et al., 2007). Similar impedance measurements have been shown to detect the difference between pulsatile and pulseless rhythms during pulse checks that occur with in-hospital resuscitation (Risdal et al., 2008). The rapid determination of a pulseless rhythm would prompt resumption of compressions and reduce the no-flow interval during pulse checks. A brisk fall in Etco2 during pulse check would be another indication that a no-flow state is occurring and compressions should be restarted.

An additional monitoring benefit of TTI during CPR is the detection of ventilation or the loss of ventilation. Entry of gas into the lungs with ventilation causes an increase in TTI, so the sudden loss of the cyclic increases in TTI with ventilation may indicate displacement of the ETT during CPR (Pytte et al., 2007). Although loss of Etco2 is also used to indicate displacement of the ETT, Etco2 levels may be markedly diminished by low pulmonary blood flow during CPR; the disappearance of impedance changes from failure of air entry may therefore be more reliable when Etco2 is very low. The use of TTI to indicate loss of ventilation or ETT displacement might also eliminate the need to stop compressions for auscultation when ETT displacement is suspected. TTI can also be used as a training tool to review the number of compressions and ventilations delivered over the course of resuscitation at postresuscitation feedback sessions.

The anesthesiologist may have many methods to choose from to determine the effectiveness of intraoperative resuscitation efforts. The equipment for quantitative determination of Etco2 is the most likely to be available in the majority of perioperative situations. In fibrillatory arrest, the restoration of a coarse VF pattern demonstrates effective CPR and indicates when defibrillation attempts are most likely to be successful (Hayes et al., 2003). In the near future, TTI measurements by the defibrillator or AED may be available to monitor the amount of blood flow produced by resuscitative efforts, recognize ETT displacement, recognize ROSC or pulseless rhythms, and serve as training tools for feedback during debriefing sessions.

Vascular Access for Drug and Fluid Administration

Peripheral and Central Vascular Access

Vascular access is crucial to the effective administration of drugs and fluids for resuscitation, but it may be difficult to achieve in pediatric patients. During cardiac arrest, attempts to obtain peripheral venous access in infants and children should be limited; if they are unsuccessful, an intraosseous (IO) needle should be placed or drug administration may be begun via the ETT. The American Heart Association (AHA) and the International Liaison Committee on Resuscitation (ILCOR) recommendations prioritize IO drug administration over endotracheal administration because of variable blood concentrations when a drug is given endotracheally (AHA, 2006b; ILCOR, 2006). Central venous access may be attempted during cardiac arrest by skilled providers, but attempts should not delay administration of life-saving medications via the peripheral IV or IO route.

The ideal placement of an intravascular catheter during CPR provides ready access to the anesthesiologist and minimizes interruption of resuscitation efforts. Peripheral venous access, IO access, and femoral venous access can usually be accomplished without interruption of airway management or chest compressions. The use of a saline flush for medications administered in peripheral IV access, IO access, and central lines with the catheter tip below the diaphragm improves medication delivery to the heart in the low-flow state of CPR. A flush with 5 to 20 mL of normal saline should drive the medication into the central circulation (0.25 mL/kg was effective in an animal model) (Orlowski et al., 1990). For most instances of CPR, peripheral IV access should be adequate for administration of resuscitation medications (Table 38-7).

TABLE 38-7 Vascular Access During CPR

Route Characteristics
Peripheral venous access (IV) Route of first choice if vascular access not present
  Rapidly and easily placed
  Any drug or fluid may be administered
  Flush each drug with 0.25 mL/kg normal saline (20 mL in adults)
Intraosseous access (IO) Easier to obtain in children <6 years old; can be used at any age
  Any drug or fluid may be administered
  Flush with 0.25 mL/kg normal saline (20 mL in adult)
Endotracheal route (ETT) Use only if no IV or IO access
  Only administer naloxone, atropine, vasopressin, epinephrine, and lidocaine (NAVEL) drugs by ETT
  Note: ETT drug delivery requires 2-10 times IV dose
  Use 5 mL of normal saline in ETT to increase distribution into distal bronchial tree (10 mL in adults)
Central venous catheter Central access is first choice if already in place
  Place if no IV or IO is obtained
  Requires flush if catheter tip is below diaphragm
Cut-down saphenous Use when other options have failed
  Requires special skill, high complication rate

CPR, Cardiopulmonary resuscitation; IV, intravenous; IO, intraosseous; ETT, endotracheal tube.

Intraosseous Access

IO cannulation provides a rapid and safe route to vascular access via the bone marrow; this space is a noncompressible venous plexus and therefore reliably available when peripheral venous access is limited as a result of dehydration or peripheral vasoconstriction. Trained providers can obtain IO access within 30 to 60 seconds with a first-attempt success rate of approximately 80% (Brunette and Fischer, 1988; Guy et al., 1993; Fiorito et al., 2005). All drugs, crystalloids, colloids, and blood can be administered via this route. The onset and duration of action of emergency medications are the same when given by IO, central, or peripheral access during native circulation in dogs (Orlowski et al., 1990).

The preferred site for an IO needle in a child is the anterior tibia. Alternative sites include the distal femur, medial malleolus, and iliac crest. In older children and adults, the distal radius, distal ulna, proximal humerus, and the sternum (risk of cardiac laceration) are also considered appropriate sites (Fig. 38-2) (Glaeser et al., 1993; Guy et al., 1993; Waisman and Waisman, 1997; Calkins et al., 2000). Specially designed IO needles should be readily available to the pediatric anesthesiologist for such emergencies. Rapid deployment devices for IO needles have been developed and may increase ease of IO placement (Horton et al., 2008; Schwartz et al., 2008). The most common complication from IO access is displacement of the needle and extravasation of fluid and medication (12%) (Fiorito et al., 2005). Other rare complications include bone fracture, compartment syndrome, osteomyelitis, and fat embolism (Orlowski et al., 1989).

Intratracheal Medication Administration

The intratracheal route may be used for administration of lipid-soluble resuscitation medications. Because most anesthetized children have this route available, it should be considered early, particularly if vascular access is a problem or access to extremities is limited. Concerns related to variable delivery of medication and duration of effect make IO administration preferable to endotracheal administration in situations where IV access is not available.

Medications that can be administered via the ETT include the “NAVEL” drugs (naloxone, atropine, vasopressin, epinephrine, and lidocaine) (Wenzel et al., 1997; Efrati et al., 2003a). Studies suggest that similar doses given via the trachea achieve lower serum concentrations than when given by the IV route (McDonald, 1985; Quinton et al., 1987; Jørgensen and Ostergaard, 1997; Kleinman et al., 1999). Lower serum concentrations of epinephrine may produce predominantly β2-adrenergic effects, causing vasodilation and decreased coronary perfusion pressure (Vaknin et al., 2001; Efrati et al., 2003b). Because of this concern, the recommended intratracheal dose of epinephrine is 10 times the intravascular dose, with a maximum dose of 2 to 2.5 mg (Manisterski et al., 2002). Recommended intratracheal doses of atropine and lidocaine are two times the intravascular dose; there is no optimal dose recommendation for naloxone or vasopressin. Drugs administered via the endotracheal route may have a prolonged effect because of the reservoir of drug in the pulmonary tree (Hornchen et al., 1989). Prolonged effects of resuscitative medications can be detrimental after cardiac arrest because of sustained afterload and myocardial oxygen demand.

The technique for tracheal administration is to flush the medication with 2 to 5 mL (2 mL in children, 5 mL in adolescents) of normal saline into the ETT and provide five manual ventilation breaths to deliver medication into distal airways and alveoli. This technique is favored over delivery via catheter or feeding tube because of ease and practicality (Jasani et al., 1994).
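The dose scaling described above can be sketched as follows; the lookup table and function are our own illustration of the quoted recommendations, not a formulary:

```python
from typing import Optional

# Hypothetical helper illustrating the endotracheal dose scaling described
# above: epinephrine 10x the IV dose (max 2 to 2.5 mg), atropine and
# lidocaine 2x, and no recommended ETT dose for naloxone or vasopressin.
ETT_DOSE_SCALE = {"epinephrine": 10, "atropine": 2, "lidocaine": 2}

def ett_dose_mg(drug: str, iv_dose_mg: float) -> Optional[float]:
    """Return the intratracheal dose for a given IV dose, or None where no
    optimal ETT dose is recommended; illustrative only."""
    scale = ETT_DOSE_SCALE.get(drug.lower())
    if scale is None:
        return None
    dose = iv_dose_mg * scale
    if drug.lower() == "epinephrine":
        dose = min(dose, 2.5)  # recommended maximum of 2 to 2.5 mg
    return dose
```

For example, an 8-kg infant with an IV epinephrine dose of 0.08 mg would receive 0.8 mg via the ETT under this scaling.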

Drugs for resuscitation

Vasoactive Drugs

Adrenergic Agonists

Epinephrine has been the drug of choice during CPR since the 1960s. Redding and Pearson (1963) first described the use of adrenergic agonists during CPR and demonstrated that early administration of epinephrine during cardiac arrest improved the resuscitation success rate. The increase in diastolic pressure from increased systemic vascular resistance was shown to be responsible for the success of resuscitation when using adrenergic agents (Pearson and Redding, 1965).

To investigate the relative importance of α-adrenergic and β-adrenergic agonist actions during resuscitation, Yakaitis et al. (1979) used a canine model of cardiac arrest and found that they could resuscitate only one of four animals that received both the pure β-adrenergic agonist isoproterenol and an α-adrenergic antagonist. In contrast, all of the dogs treated with both an α-adrenergic agonist and a β-adrenergic antagonist were successfully resuscitated (Yakaitis et al., 1979). These data suggest that the α-adrenergic agonist action of epinephrine is responsible for successful resuscitation after cardiac arrest. Support for this theory was reported by Michael et al. (1984), who demonstrated that the effects of epinephrine during CPR are mediated by selective vasoconstriction of peripheral vessels, excluding those supplying the brain and heart. Epinephrine infusions maintain a higher aortic pressure and result in a higher perfusion pressure to both the heart and brain (Michael et al., 1984). Even with increases in both mean and diastolic aortic pressure, flow to other, nonvital organs, such as the kidneys and small intestine, becomes compromised by intense vasoconstriction of their blood supply (Schleien et al., 1986; Michael et al., 1984; Koehler et al., 1985).

Effects on Coronary Blood Flow

The increase and maintenance of aortic diastolic pressure associated with administration of α-adrenergic agonists during CPR are critical for coronary blood flow and ultimately successful resuscitation. In the beating heart, the contractile state of the myocardium is increased by β-adrenergic receptor agonist action. During CPR, β-adrenergic drugs may stimulate spontaneous myocardial contractions and increase the intensity of VF, but this inotropic effect can result in increased intramyocardial wall pressure, decreased coronary perfusion pressure, and diminished myocardial blood flow (Livesay et al., 1978). In addition, β-adrenergic stimulation increases myocardial oxygen demand by increasing cellular metabolism and oxygen consumption. The superimposition of an increased oxygen demand on the low myocardial blood flow available during CPR probably contributes to ischemia.

Drugs that are pure α-adrenergic agonists (such as methoxamine and phenylephrine) have been used successfully during CPR. The absence of direct β-adrenergic stimulation avoids an increase in myocardial oxygen uptake, resulting in a more favorable oxygen demand-to-supply ratio in the ischemic heart. These nonepinephrine α-adrenergic agonists have been used in successful resuscitation and maintain myocardial blood flow during CPR as effectively as epinephrine (Redding and Pearson, 1963; Pearson and Redding, 1965; Yakaitis et al., 1979; Schleien et al., 1989). Schleien et al. (1989) found that high aortic pressures can be sustained in a canine model of CPR with phenylephrine, a pure α-adrenergic agonist. The long-standing debate over the merits of pure α-adrenergic agonists for resuscitation continues because it remains unclear whether the β-adrenergic effects of epinephrine are beneficial or detrimental (Holmes et al., 1980; Brown et al., 1987a, 1987c).

Effects on Cerebral Blood Flow

During CPR, the generation of cerebral blood flow, like coronary blood flow, depends on the vasoconstriction of peripheral vessels, and this vasoconstriction is enhanced by administration of α-adrenergic agonists. Epinephrine and other α-agonist drugs produce selective vasoconstriction of noncerebral peripheral vessels supplying areas of the head and scalp (i.e., tongue, facial muscle, and skin) without causing cerebral vasoconstriction in models of CPR in adults and infants (Koehler et al., 1983; Schleien et al., 1986; Beattie et al., 1991). Infusion of either epinephrine or phenylephrine maintained cerebral blood flow and oxygen uptake at prearrest levels for 20 minutes in a canine model of CPR, and there were no differences in neurologic outcome 24 hours after resuscitation when either drug was administered 9 minutes after VF (Brillman et al., 1985). Other investigators found epinephrine to be a more beneficial medication for generating vital organ blood flow (Brown et al., 1986b, 1987a, 1987c); this may have been because the drug dosages used were not equipotent in generating vascular pressure and subsequent blood flow. In addition, epinephrine may have either a vasoconstrictive or vasodilatory effect on cerebral vessels, depending on the balance between α- and β-adrenergic actions (Winquist et al., 1982).

Cerebral oxygen uptake may be increased by a central β-adrenergic receptor effect if sufficient amounts of epinephrine cross the blood-brain barrier (BBB) during or after resuscitation (Carlsson et al., 1977; MacKenzie et al., 1976). When cerebral ischemia is brief and the BBB remains intact, epinephrine and phenylephrine have similar effects on cerebral blood flow and metabolism (Schleien et al., 1989). Catecholamines may cross the BBB when mechanical disruption occurs or when enzymatic barriers to vasopressors (i.e., monoamine oxidase inhibitors) are overwhelmed during tissue hypoxia (Edvinsson et al., 1978; Lasbennes et al., 1983). During CPR, the BBB may be disrupted by the generation of large fluctuations in cerebral venous and arterial pressures during chest compressions. In addition, permeability of the BBB may increase because of arterial pressure surge that occurs in a maximally dilated vascular bed after resuscitation (Arai et al., 1981). An increase in cerebral oxygen demand when cerebral blood flow is limited could affect cerebral recovery adversely. In an infant model of 8 minutes of cardiac arrest with CPR, disruption of the BBB was present 4 hours after defibrillation (Schleien et al., 1991). In similar protocols involving 8 minutes of cardiac arrest, endothelial vacuolization has been shown, with extravasation of protein through the BBB (Schleien et al., 1992a). These theoretic effects of catecholamines on cerebral circulation need to be further clarified and do not represent a contraindication to administration of epinephrine during cardiac arrest.

Dosage

High-dose epinephrine (0.1 mg/kg) is not recommended for resuscitation because of lack of evidence for benefit over standard dosing (0.01 mg/kg) and concern for harm (AHA, 2006b). Although early animal models of cardiac arrest and clinical studies indicated that high-dose epinephrine may be beneficial through increased cerebral and coronary blood flow (Brillman et al., 1985; Berkowitz et al., 1991; Brown et al., 1986a), other studies in animal models suggested that high-dose epinephrine is associated with a disproportionate rise in myocardial oxygen consumption (Maier et al., 1984; Jackson et al., 1984; Brown et al., 1988a, 1988b; Ditchey and Lindenfeld, 1988). Initial case series in adults reported increased diastolic blood pressure and successful ROSC when high-dose epinephrine was administered (Gonzalez et al., 1988, 1989; Paradis et al., 1990; Martin et al., 1990a; Cipolotti et al., 1991). In a nonrandomized, unblinded study, Goetting and Paradis (1989) reported on seven pediatric patients treated successfully with 0.2 mg/kg of epinephrine; three survived. Several large randomized controlled studies of high-dose and standard-dose epinephrine showed no benefit of high dose on survival or neurologic outcome (Brown et al., 1992; Callaham et al., 1992; Stiell et al., 1992). A prospective, randomized, double-blinded trial in children comparing high-dose epinephrine (0.1 mg/kg) with standard-dose epinephrine (0.01 mg/kg) for inpatient cardiac arrest after failure of initial standard epinephrine dose found that whereas the high-dose and standard-dose arms had equal ROSC (21 out of 34 vs. 20 out of 34), there was significantly better survival at 24 hours (7 out of 34 vs. 1 out of 34, p = 0.05) and discharge (4 out of 34 vs. 0 out of 34) in the standard-dose patients (Perondi et al., 2004).
Finally, a meta-analysis of these and other randomized, double-blinded studies found that whereas high-dose epinephrine may have benefit for the endpoint of ROSC, there was no improvement in survival to discharge; in fact, there was a trend toward negative impact on this endpoint (Vandycke and Martens, 2000). High-dose epinephrine may account for some of the adverse effects that occur after resuscitation by worsening myocardial ischemia that results in arrhythmias, hypertensive crisis, pulmonary edema, digitalis toxicity, hypoxemia, and cardiac arrest (Brown et al., 1992; Schleien et al., 1992b).

The 2005 AHA guidelines recommend epinephrine at 0.01 mg/kg IV or IO as the first and subsequent doses for pulseless cardiac arrest: asystole, PEA, ventricular tachycardia (VT), and VF. A dosing interval of 3 to 5 minutes is usually recommended; more frequent dosing may result in increased side effects similar to those seen with high doses. Dosing every 4 minutes is within the recommendations and can be timed with every other 2-minute break for change of compressor and rhythm analysis. When IV or IO access is unavailable, epinephrine may be administered via the ETT at 0.1 mg/kg, although the 2005 guidelines emphasize IV and IO dosing over ETT dosing because of more reliable drug absorption and effect. High-dose epinephrine (0.1 mg/kg) may be considered in clinical situations refractory to standard dosing, such as β-blocker or calcium channel blocker overdose, severe anaphylaxis, or septic shock (AHA, 2006b) (Table 38-8).

TABLE 38-8 Epinephrine Administration During CPR

Actions Decreases perfusion to nonvital organs (α-adrenergic effect)
Improves coronary perfusion (aortic diastolic pressure) (α-adrenergic effect)
Increases intensity of ventricular fibrillation (β-adrenergic effect)
Stimulates cardiac contractions (β-adrenergic effect)
Intensifies cardiac contractions (β-adrenergic effect)
Indications Bradyarrhythmia with hemodynamic compromise
Asystole or pulseless arrest
Dosage Bradycardia: 0.01 mg/kg intravenous or intraosseous or 0.1 mg/kg ETT
Repeat every 3 to 5 min at the same dosage
Pulseless
First dose: 0.01 mg/kg intravenous or intraosseous or 0.1 mg/kg ETT
Repeat every 3-5 min

CPR, Cardiopulmonary resuscitation; ETT, endotracheal tube.

Data from AHA: 2005 American Heart Association (AHA) guidelines for cardiopulmonary resuscitation (CPR) and emergency cardiovascular care (ECC) of pediatric and neonatal patients: pediatric advanced life support. American Heart Association, Pediatrics 117:e1005, 2006b.
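As a worked example of the weight-based dosing in Table 38-8, a minimal sketch (the function name and structure are ours, illustrative only):

```python
def epinephrine_dose_mg(weight_kg: float, route: str = "iv") -> float:
    """Standard pediatric epinephrine dose for bradycardia or pulseless
    arrest as quoted above: 0.01 mg/kg IV or IO, 0.1 mg/kg via ETT.
    Illustrative arithmetic only, not a dosing tool."""
    mg_per_kg = {"iv": 0.01, "io": 0.01, "ett": 0.1}[route.lower()]
    return mg_per_kg * weight_kg
```

For a 10-kg infant this gives 0.1 mg IV or IO (1 mg via ETT), repeated every 3 to 5 minutes.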

Phosphodiesterase Inhibitors

Milrinone is commonly used as an inotrope to support myocardial function during the perioperative period in children undergoing congenital heart surgery and may be useful in the postresuscitation period. The benefits of this agent are increased inotropy (force of left ventricular contraction), increased dromotropy (speed and efficiency of myocardial conduction), and increased lusitropy (left ventricular diastolic relaxation). There is no effect on chronotropy (rate of contraction); thus, the impact on myocardial oxygen consumption is minimal and the risk of arrhythmias is low. The side effects of milrinone are predominantly thrombocytopenia and a decrease in systemic vascular resistance. Milrinone is usually loaded at a dose of 50 mcg/kg over 30 minutes, followed by an infusion of 0.5 to 1 mcg/kg per minute. In an animal model of CPR during ventricular fibrillation cardiac arrest (VFCA), a loading dose and maintenance infusion of milrinone improved stroke volume and sustained rhythm after arrest (Niemann et al., 2003).
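The regimen above amounts to simple weight-based arithmetic, sketched below; the default infusion rate of 0.75 mcg/kg per minute is our arbitrary midpoint of the quoted 0.5 to 1 range, and the function is illustrative only:

```python
def milrinone_plan(weight_kg: float, infusion_mcg_kg_min: float = 0.75):
    """Loading dose (mcg, given over 30 min) and hourly infusion amount
    (mcg/h) for the milrinone regimen described above; illustrative only."""
    if not 0.5 <= infusion_mcg_kg_min <= 1.0:
        raise ValueError("infusion outside the quoted 0.5-1 mcg/kg/min range")
    load_mcg = 50 * weight_kg           # 50 mcg/kg loading dose
    hourly_mcg = infusion_mcg_kg_min * weight_kg * 60
    return load_mcg, hourly_mcg
```

A 10-kg child would thus receive a 500-mcg load over 30 minutes, then 450 mcg/h at the midpoint rate.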

Vasopressin

Vasopressin is a pituitary hormone that binds to specific receptors located throughout the vasculature (V1 receptors) that are responsible for vasoconstriction and in renal tubules (V2 receptors) that facilitate water reabsorption. L-arginine vasopressin is the exogenously administered compound traditionally used to treat diabetes insipidus and gastric hemorrhage. More recent indications for vasopressin include vasoplegic shock and cardiac arrest. Both endogenous and administered vasopressin are cleared and inactivated from plasma during passage through the liver and kidneys. This results in an elimination half-life of about 10 to 20 minutes.

In cardiac arrest, vasopressin has a theoretic advantage compared with epinephrine, because it causes vasoconstriction without adrenergic activity; it does not increase myocardial oxygen demand at a time when oxygen delivery is limited. In addition, vasopressin may result in less ventricular ectopy and tachycardia in the postresuscitation period. These advantages may be offset by intense vasoconstriction after ROSC, potentially worsening myocardial ischemia (Prengel et al., 1996, 1998; Wenzel and Lindner, 2002).

A meta-analysis of animal studies of vasopressin in cardiac arrest found that vasopressin increases ROSC compared with placebo (93% vs. 19%, p < 0.001) or adrenaline (84% vs. 52%, p < 0.001) (Biondi-Zoccai et al., 2003). However, data in humans are not as strong. Wenzel et al. (2004) performed a large randomized trial of vasopressin and epinephrine for the treatment of OHCA and found no difference between the two drugs in hospital admission for patients with VF or PEA, but did show improved hospital admission rate and survival to discharge in patients with asystole treated with vasopressin. Additionally, there were improved hospital admission (25.7% vs. 16.4%, p = 0.002) and survival-to-discharge rates (6.2% vs. 1.7%, p = 0.002) among patients treated with vasopressin and then epinephrine vs. epinephrine alone (Wenzel et al., 2004). However, other large randomized controlled trials have failed to demonstrate a beneficial effect of vasopressin, alone or in combination with epinephrine, vs. epinephrine alone on survival to discharge (Lindner et al., 1997; Stiell et al., 2001; Callaway et al., 2006; Gueugniaud et al., 2008).

The pediatric literature concerning the use of vasopressin during CPR is limited. A pediatric animal model of asphyxial cardiac arrest found that ROSC was significantly more likely in animals treated with epinephrine than with vasopressin (Voelckel et al., 2000). In a retrospective review, Mann showed that four of six children experiencing cardiac arrest had ROSC after administration of vasopressin (0.4 units/kg); two patients survived to 24 hours, and one patient survived to discharge (Mann et al., 2002). An additional case series reported the use of terlipressin, a vasopressin analogue, in seven children with asystole; ROSC was achieved in five children, and four children survived to discharge (Matok et al., 2007). A retrospective review of the National Registry of CPR (NRCPR) database of vasopressin use during pediatric IHCA from 1999 to 2004 showed that only 5% of children received vasopressin during the management of cardiac arrest. Children who received vasopressin had a longer duration of cardiac arrest than those who did not (median 37 vs. 24 minutes, p = 0.004) and were more commonly in an intensive care setting (77%). After multivariate analysis, vasopressin was associated with worse ROSC and no difference in 24-hour or discharge survival (Duncan et al., 2009). Thus, the role of vasopressin in pediatric cardiac arrest is indeterminate and requires further study.

Antiarrhythmic Drugs

Atropine

Atropine is a parasympatholytic agent that reduces vagal tone to the heart, resulting in an increased discharge rate of the sinus node, enhanced atrioventricular conduction, and activated latent ectopic pacemakers (Gillette and Garson, 1981). Atropine has minimal effects on systemic vascular resistance, myocardial perfusion, and myocardial contractility (Gilman et al., 1990).

Indications

Atropine is indicated for treatment of bradycardia associated with hypotension, second- and third-degree heart block, and slow idioventricular rhythms (Goldberg, 1974; Scheinman et al., 1975). Atropine is a useful drug for clinical states associated with excessive parasympathetic tone. Pediatric patients who experience cardiac arrest commonly have bradycardia or asystole as the initial rhythm, making atropine a first-line drug for such patients. During the perioperative period, laryngoscopy or manipulation of viscera may result in severe bradycardia or even asystole secondary to enhanced parasympathetic tone, particularly in infants. Bradycardia as a result of the oculocardiac reflex during ophthalmologic surgery can occur in a child of any age. Although the first line of treatment is cessation of the precipitating stimulus, atropine has been shown to be helpful when given intravenously or intraglossally (Arnold et al., 2002).

Dosage

The pediatric dose for atropine is 0.02 mg/kg, with a minimal dose of 0.1 mg and a maximal total dose of 1 mg. The minimal dose is recommended because of the potential for paradoxical bradycardia caused by a primarily central stimulating effect on the medullary vagal nuclei at low doses (Kottmeier et al., 1968). Atropine may be given via many routes: IV, endotracheal, IO, intraglossal, intramuscular, or subcutaneous. However, intramuscular and subcutaneous sites may not be adequately perfused during cardiac arrest or CPR, making absorption unreliable. Onset of action occurs within 30 seconds, and peak effect occurs 1 to 2 minutes after an IV dose. The adult dose of atropine is 0.5 mg IV given every 5 minutes until the desired heart rate is obtained or a maximal total dose of 2 mg is reached. Full vagal blockade occurs in adults who receive a dose of 2 mg. Larger-than-recommended doses may be required in special circumstances, such as organophosphate poisoning or nerve gas exposure.
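The clamping arithmetic described above (0.02 mg/kg, raised to the 0.1-mg floor and capped at the 1-mg total) can be sketched as follows. This is an illustrative calculation only, not a clinical dosing tool, and the function name is our own:

```python
def atropine_dose_mg(weight_kg, per_kg=0.02, min_dose=0.1, max_dose=1.0):
    """Weight-based pediatric atropine dose, clamped to the recommended
    0.1-mg minimum (avoids paradoxical bradycardia) and 1-mg maximum."""
    return min(max(per_kg * weight_kg, min_dose), max_dose)

print(atropine_dose_mg(3))   # 3-kg neonate: 0.06 mg is raised to the 0.1-mg floor
print(atropine_dose_mg(20))  # 20-kg child: 0.4 mg, within range
print(atropine_dose_mg(70))  # 70-kg adolescent: 1.4 mg is capped at the 1-mg ceiling
```

Note that the separate per-dose caps listed in Table 38-10 (0.5 mg in a child, 1.0 mg in an adolescent) are not modeled in this sketch.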

Adverse Effects

Atropine should not be used in patients in whom tachycardia is undesirable. After myocardial infarction or ischemia with persistent bradycardia, atropine should be administered in the lowest dose possible that increases heart rate. Tachycardia, which increases myocardial oxygen consumption and can lead to VF, can occur after large doses of atropine in patients with myocardial ischemia. Caution should also be used when administering atropine to patients with pulmonary or systemic outflow tract obstruction or idiopathic hypertrophic subaortic stenosis, because tachycardia can decrease ventricular filling and lower cardiac output (Table 38-10). Electrical pacing may be a safer means of maintaining a desired heart rate in these patients.

TABLE 38-10 First Line Antiarrhythmic Administration During CPR

Atropine  
Indications Symptomatic bradycardia with AV node block
Vagal bradycardia during intubation attempts
After epinephrine for bradycardia with poor perfusion
Dosage 0.02 mg/kg IV or intraosseous after ensuring oxygenation (2.5 times dose if given ETT)
Repeat every 3-5 min at the same dose
Maximum single dose 0.5 mg in a child and 1.0 mg in an adolescent
Maximum total dose 1.0 mg in a child and 2.0 mg in an adolescent
Adenosine  
Indications First-line drug after vagal maneuvers fail for supraventricular tachycardia
Dosage First dose, 0.1 mg/kg rapid IV bolus; second dose, increase to 0.2 mg/kg rapid IV bolus (maximum single dose: 12 mg)
Note: must be followed with 0.5-1 mL/kg normal saline flush over 1-2 seconds to have effect.
Amiodarone  
Indications Supraventricular and ventricular tachyarrhythmias
Dosage 5 mg/kg IV over 30 minutes (push if pulseless).

CPR, Cardiopulmonary resuscitation; AV, atrioventricular; ETT, endotracheal tube; IV, intravenous.

Data from American Heart Association.

Adenosine

Adenosine is a purine nucleoside that is a first-line treatment for supraventricular tachycardia (SVT) in children and adults. Adenosine acts by binding directly to adenosine receptors in the myocardium and peripheral vasculature. Receptor binding initiates intracellular signaling via G proteins and results in a prolonged AV-node refractory period and slowed conduction. This action breaks the reentrant circuit responsible for most SVT (Crosson et al., 1994).

Indications

Treatment of narrow-complex tachyarrhythmia (QRS duration less than 0.08 seconds) with adenosine results in conversion to sinus rhythm in 72% to 77% of patients with few side effects (Till et al., 1989; Losek et al., 1999). Adenosine can be used diagnostically to differentiate VT from SVT, because the temporary AV block allows observation of isolated atrial electrical activity. The half-life is less than 10 seconds because of rapid uptake by red blood cells and endothelial cells and metabolism by adenosine deaminase on the red-cell surface. Adenosine is completely cleared from the plasma in less than 30 seconds, giving it a rapid onset and short duration of action (Losek et al., 1999).

Amiodarone

Amiodarone hydrochloride is a diiodinated benzofuran derivative containing a diethylated tertiary amine chain. It is strongly lipophilic and has extensive tissue distribution. The drug is metabolized by the liver with mainly biliary elimination; there is little renal elimination. Amiodarone has a long elimination half-life that ranges from 20 to 47 days (Chow, 1996). Amiodarone has pharmacologic effects of all four antiarrhythmic classes (Singh et al., 1989). It blocks potassium channels, blocks inward sodium current, is a noncompetitive β-blocker, and has calcium-channel-blocking properties. Interestingly, its major electrophysiologic effect depends on the route (and duration) of administration (Bauman, 1997). With long-term oral treatment, amiodarone’s predominant activity is to increase the duration of the action potential in most cardiac tissue, a class III effect. When used intravenously, amiodarone increases AV-node refractoriness and intranodal conduction time, a class II antiadrenergic effect combined with a calcium-channel-blocking effect (Nattel, 1993). Additionally, amiodarone causes both coronary and systemic vasodilation (Coté et al., 1979). It also inhibits phosphodiesterase and is a selective inhibitor of thyroid hormone metabolism (Singh et al., 1989; Harris et al., 1993).

Indications

Amiodarone has been studied both as a prophylactic long-term medication for patients with high arrhythmogenic potential caused by organic heart disease and for use in acute life-threatening arrhythmias. Amiodarone was shown to be most effective for VT or VF when compared with lidocaine and bretylium in over 15 adult studies (Bauman et al., 1987; Helmy et al., 1988; Roberts et al., 1994; Podrid, 1995; Chow, 1996). When IV amiodarone was compared with placebo in a randomized trial (the ARREST trial), significantly more patients survived to the emergency department after OHCA (Gonzalez et al., 1998). Amiodarone was shown to improve survival to admission when given to adults with OHCA and shock-resistant VF (Kudenchuk et al., 1999). A study comparing the efficacy of lidocaine with amiodarone for shock-resistant VF in OHCA demonstrated survival to admission of 15% vs. 27%, respectively (Dorian et al., 2002). These adult studies support the superior performance of amiodarone for ventricular arrhythmias.

Amiodarone has been studied in children with generally favorable outcomes. Perry et al. (1993) showed arrhythmia resolution in six of 10 children (mean age of 6.8 years) who had not responded to multiple other antiarrhythmic drugs. Figa et al. (1994) studied 30 infants and children with life-threatening arrhythmias, including SVT and VT, and showed that amiodarone eliminated arrhythmias in 71% of patients; an additional 23% experienced a significant improvement in clinical status and rhythm. Burri et al. (2003) treated 23 infants who had hemodynamically unstable tachycardias with amiodarone; dosages ranged from 5 to 26 mcg/kg per minute (mean dosage of 15 mcg/kg per minute). They found only one infant to be unresponsive, and adverse effects occurred in four infants. A review of amiodarone use entered in the NRCPR database from 2000 to 2005 revealed that approximately 20% of children with in-hospital VF or pulseless VT received amiodarone (October et al., 2008). The 2005 AHA guidelines recommend amiodarone for treatment of VF or VT without a pulse. It may also be considered in the treatment of stable SVT and VT.

Dosage

There are limited data on amiodarone pharmacokinetics in children. IV administration for active arrhythmias is common practice, and it is often followed by a continuous infusion or a transition to oral medication if ongoing treatment is indicated. An initial IV dose of 5 mg/kg may be followed by additional doses or a continuous infusion of 5 mcg/kg per minute. The infusion can be increased up to a maximum of 10 mcg/kg per minute or 20 mg/kg per 24 hours (Perry et al., 1996). Caution should be taken with the rate of administration, because cardiovascular collapse can occur with rapid administration, particularly in the patient with arrhythmias who may already have hemodynamic instability. For patients in cardiac arrest, amiodarone is administered as a bolus; for patients not in cardiac arrest, it should be administered over 30 to 60 minutes to avoid further hemodynamic instability. Pretreatment with calcium may help prevent hypotension during administration, especially if the patient is hypocalcemic.
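As a quick check of the infusion ceiling described above, the cumulative 24-hour exposure at a given infusion rate can be computed; this is an illustrative sketch only, with a function name of our own choosing:

```python
def amiodarone_daily_mg_per_kg(rate_mcg_per_kg_min):
    """Cumulative amiodarone infusion exposure over 24 hours, in mg/kg,
    from a rate expressed in mcg/kg per minute."""
    return rate_mcg_per_kg_min * 60 * 24 / 1000  # mcg/kg -> mg/kg

print(amiodarone_daily_mg_per_kg(5))   # starting rate: 7.2 mg/kg per 24 h
print(amiodarone_daily_mg_per_kg(10))  # maximum rate: 14.4 mg/kg per 24 h
```

Even the 10-mcg/kg-per-minute maximum rate delivers 14.4 mg/kg per day, which leaves room under the 20-mg/kg per-24-hour ceiling for intermittent 5-mg/kg bolus doses.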

Adverse Effects

All of the adverse effects of amiodarone appear to be less common at lower dosages (Singh, 1996). Cardiovascular effects are the most common and include hypotension caused by acute vasodilation and negative inotropic effects. Bradyarrhythmias, congestive heart failure, cardiac arrest, and VT have all been reported. Proarrhythmias, although possible, are seen less often than with other class III antiarrhythmics; the incidence is thought to be approximately 2%, and torsades de pointes occurs in one third of these cases (Perry et al., 1993). The most common noncardiovascular toxicities are pulmonary. Interstitial pneumonitis is the most frequent, usually associated with long-term oral treatment. A hypersensitivity pneumonitis can occur early in the course of treatment; symptoms include cough, low-grade fever, dyspnea, weight loss, pleuritic chest pain, and bilateral interstitial infiltrates, and they are usually reversible on cessation of the drug (Jessurun et al., 1998). Hepatotoxicity can occur and is more common with oral use. Thyroid dysfunction may occur in as many as 10% of patients, resulting in either hypothyroidism or hyperthyroidism. Optic neuritis or neuropathy resulting in decreased acuity or blurred vision can progress to permanent blindness. Neurologic symptoms include ataxia, tremor, peripheral neuropathy, malaise or fatigue, sleep disturbance, dizziness, and headache. Dermatologic reactions include allergic rash, photosensitivity, and blue-gray skin discoloration (Hilleman et al., 1998) (Table 38-10).

Lidocaine

Lidocaine, a class IB antiarrhythmic, depresses the fast inward sodium channel, which results in an increased refractory period and shortening of the total action potential. The drug is metabolized primarily in the liver by the microsomal enzyme system (Collingsworth et al., 1974). Up to 10% of lidocaine is excreted unchanged in the urine. The amount excreted unchanged increases in acidic urine. There is no biliary excretion or intestinal absorption in humans.

During CPR, lidocaine clearance is decreased because of the inherent decrease in cardiac output and hepatic blood flow. During conventional CPR in dogs with a blood pressure of 20% of control values, an IV lidocaine bolus of 2 mg/kg resulted in elevated blood and tissue concentrations. Lidocaine distribution, which is usually complete in 20 minutes, was still not complete after 1 hour. Lidocaine clearance and distribution may also be altered as a result of changes in protein binding and metabolism during CPR (Chow et al., 1983). In humans, high peak blood and tissue concentrations of lidocaine occur during CPR, with a delay in time to peak concentration. Comparison of peripheral, central, and IO routes of administration of lidocaine during open-chest CPR in dogs showed no difference in time to peak serum concentration (Chow et al., 1981).

Hemodynamic Effects

In animal models, rapid IV delivery of lidocaine causes a decrease in stroke work, blood pressure, systemic vascular resistance, left ventricular contractility, and a slight increase in heart rate (Austen and Moran, 1965; Constantino et al., 1967). In healthy adults, the drug does not appear to cause any change in heart rate or blood pressure, but patients with cardiac disease have a slight decrease in ventricular function (Jewitt et al., 1968; Schumacher et al., 1969). In most patients, even in those who have sustained a recent myocardial infarction, a 1- to 2-mg/kg bolus of lidocaine does not alter cardiac output, heart rate, or blood pressure (Jewitt et al., 1968). Excessive doses of lidocaine given by rapid infusion may decrease cardiac function in patients with cardiac disease, especially in those suffering an acute myocardial infarction. Therefore, slow IV administration, no faster than 50 to 100 mg/min in adults, is recommended (Collingsworth et al., 1974).

Dosage

To achieve and maintain therapeutic levels of lidocaine, a bolus dose should be given at the initiation of a constant infusion. In patients with normal cardiac and hepatic function, an initial IV bolus of 1 mg/kg lidocaine is given, followed by a constant infusion at a rate of 20 to 50 mcg/kg per minute. If the arrhythmia recurs, a second bolus of the same dose can be given (Greenblatt et al., 1976). When a bolus is given without an infusion, ventricular arrhythmias often return within 15 to 20 minutes because the drug rapidly redistributes out of the plasma (Bartlett et al., 1984). If an infusion is begun without an initial bolus, approximately five half-lives are required to approach a plateau serum concentration (half-life of 108 minutes) (Collingsworth et al., 1974).
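The five-half-life figure follows from first-order kinetics: after n half-lives of constant infusion, the serum concentration reaches 1 − 0.5^n of its plateau. A small sketch of this arithmetic (illustrative only, with a function name of our own):

```python
def fraction_of_plateau(t_min, half_life_min=108.0):
    """Fraction of steady-state concentration reached t minutes after
    starting a constant infusion with no loading bolus (first-order kinetics)."""
    return 1 - 0.5 ** (t_min / half_life_min)

print(fraction_of_plateau(108))      # one half-life: 0.5 of plateau
print(fraction_of_plateau(5 * 108))  # five half-lives (9 h): about 0.97 of plateau
```

Nine hours to approach plateau is why the loading bolus is essential when an arrhythmia must be suppressed immediately.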

Patients with severe diminution of cardiac output should receive a bolus no greater than 0.75 mg/kg followed by an infusion at a rate of 10 to 20 mcg/kg per minute. In patients with hepatic disease, dosages should be decreased to 50% of normal. Patients with chronic renal disease who are receiving treatment through hemodialysis have normal lidocaine pharmacokinetics.

Drug interactions with lidocaine are common. Phenobarbital increases lidocaine metabolism, requiring increased doses. Isoniazid and chloramphenicol decrease lidocaine metabolism, so a decreased dosage should be used. Any drug that decreases cardiac output increases the serum concentration of lidocaine, and drugs that increase cardiac output and hepatic blood flow cause the serum concentration to be lower than predicted.

Adverse Effects

Lidocaine toxicity with a serum concentration of greater than 7 to 8 mcg/mL occurs most commonly in patients with severe hepatic disease or severe congestive heart failure. Decreased cardiac output results in decreased hepatic blood flow, which leads to decreased lidocaine clearance.

The toxic effects of lidocaine generally involve the central nervous system and include seizures, psychosis, drowsiness, paresthesias, disorientation, muscle twitching, agitation, and respiratory arrest. Treatment for seizures and psychosis includes benzodiazepines or barbiturates. True allergic reactions to lidocaine are rare. Cardiovascular side effects (discussed previously) are usually observed in patients whose myocardial function is already decreased. Conversion of second-degree to complete heart block has been described (Lichstein et al., 1973). Further slowing of sinus bradycardia has also been observed. These effects are uncommon and occur with large-dose administration. These potential side effects do not prohibit the use of lidocaine in these patients (Table 38-11).

TABLE 38-11 Second-Line Antiarrhythmic Administration During CPR

Lidocaine
Indications Ventricular arrhythmias (not ventricular escape rhythm)
Suppress ventricular ectopy
Raise threshold for fibrillation
Dose 1 mg/kg intravenous or intraosseous bolus (2.5 times dose if ETT)
30-50 mcg/kg/min intravenous or intraosseous infusion
Reduce infusion rate if low cardiac output or liver failure
Magnesium
Indications Torsades de pointes
  Hypomagnesemia
Dose 25-50 mg/kg intravenous or intraosseous (maximum: 2 g/dose)

CPR, Cardiopulmonary resuscitation, ETT, endotracheal tube.

Data from American Heart Association: 2005 American Heart Association (AHA) guidelines for cardiopulmonary resuscitation (CPR) and emergency cardiovascular care (ECC) of pediatric and neonatal patients: pediatric advanced life support, Pediatrics 117:e1005, 2006.

Magnesium

Only two clinical scenarios are indications for emergent magnesium therapy in children: hypomagnesemia and polymorphic (torsades de pointes) VT. Magnesium is an intracellular cation with less than 1% of the body’s store available in the serum. The ionized fraction is physiologically active, much like calcium, and serves as a cofactor in enzymatic reactions. Low serum magnesium levels often develop in critically ill patients and in patients who have undergone cardiopulmonary bypass (CPB) surgery.

Magnesium has been shown to be effective in children with torsades de pointes VT associated with an acquired or congenital long QT interval (Tzivoni et al., 1988; Hoshino et al., 2006). Other situations, such as myocardial ischemia, premature ventricular contractions, and atrial arrhythmias, have been studied; however, the benefits of magnesium administration in these settings remain controversial. Treatment with magnesium (whether by bolus or infusion) prevented falls in magnesium levels and resulted in a lower incidence of hemodynamically unstable arrhythmias than placebo in children after surgery for congenital heart disease (Dorman et al., 2000; Dittrich et al., 2003). However, amiodarone may be more effective for these same postoperative arrhythmias, and other studies have not found an association between hypomagnesemia and postoperative arrhythmias (Hoffman et al., 2002; Batra et al., 2006a). The exact mechanism of magnesium’s effect on the conduction pathways of the heart is not known. Studies have demonstrated antagonism of calcium channels, which blocks the rise of intracellular calcium during periods of hypoxia.

Other Drugs

Sodium Bicarbonate

Sodium bicarbonate causes an acid-base reaction in which bicarbonate combines with hydrogen ion to form water and carbon dioxide, resulting in an elevated blood pH:

HCO3− + H+ → H2CO3 → H2O + CO2

Because sodium bicarbonate generates CO2, adequate alveolar ventilation must be present before its administration. As respiratory failure is the leading cause of cardiac arrest in children, caution should be taken before administering sodium bicarbonate in the face of preexisting respiratory acidosis. Sodium bicarbonate use during CPR is one of the most controversial issues in the cardiac arrest literature, stemming from a lack of evidence of benefit during CPR in animals and humans, as well as from the potential adverse effects associated with its administration.

Literature on sodium bicarbonate use in CPR dates back to the 1960s, but there are few data demonstrating a beneficial impact on human survival (Levy, 1998). In animal models of resuscitation from cardiac arrest, sodium bicarbonate has been associated with increased survival in a few studies and with no difference in survival in many others (Andersen et al., 1967; Redding and Pearson, 1968; Kirimli et al., 1969; Lathers et al., 1989; Bleske et al., 1992; Neumar et al., 1995; Vukmir et al., 1995). Administration of sodium bicarbonate to humans experiencing cardiopulmonary arrest has been associated with increased mortality in retrospective reviews and nonblinded prospective studies (Suljaga-Pechtel et al., 1984; Skovron et al., 1985; Delooz and Lewi, 1989).

Several studies in both humans and animals document deleterious effects on physiologic endpoints such as myocardial performance, arterial blood pressure, and partial pressure of venous CO2 (Pco2) after sodium bicarbonate administration during CPR (Wang and Katz, 1965; Bishop and Weisfeldt, 1976; Weil et al., 1985; Adrogue et al., 1989; Kette et al., 1991; Eleff et al., 1995). This literature is difficult to interpret because of large variations in dosage, timing, blood sampling, and testing conditions. Animal data and a retrospective review of OHCA in adults indicate improved outcomes when sodium bicarbonate is administered early in cardiac arrest (Vukmir et al., 1995; Bar-Joseph et al., 1998, 2005; Leong et al., 2001). Dybvik et al. (1995) conducted the only randomized controlled trial of sodium bicarbonate administration in humans during CPR. They found no survival benefit, but the study may have been underpowered, because survival from OHCA is low and a large cohort would be required to detect a difference. Overall, there are insufficient data to assess the impact of sodium bicarbonate administration on survival during CPR.

Dosage

When the partial pressure of arterial CO2 (Paco2) and pH are known, the dose of sodium bicarbonate needed to correct the pH to 7.40 can be calculated from the base deficit:

NaHCO3 dose (mEq) = base deficit (mEq/L) × 0.6 × body weight (kg)

Because of the possible side effects of sodium bicarbonate and the large arterial-to-venous carbon dioxide gradient that develops during CPR, giving half the dose based on a volume of distribution of 0.6 is recommended. If blood gases are not available, an initial dose of 1 mEq/kg, followed by 0.5 mEq/kg every 10 minutes of ongoing arrest has been proposed (Martinez et al., 1979). Adequate alveolar ventilation is important to eliminate the CO2 produced during bicarbonate administration (Table 38-12).
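The half-dose recommendation above reduces to simple base-deficit arithmetic; a sketch for illustration only, assuming the base deficit has been measured from a blood gas (the function name is ours):

```python
def bicarb_dose_meq(weight_kg, base_deficit_meq_l, vd=0.6, fraction=0.5):
    """Half-correction of a measured base deficit, using a bicarbonate
    volume of distribution of 0.6 L/kg and giving half the full dose."""
    return fraction * vd * weight_kg * base_deficit_meq_l

# A 10-kg child with a base deficit of 10 mEq/L:
print(bicarb_dose_meq(10, 10))  # 30.0 mEq
```

When no blood gas is available, the empiric schedule in the text (1 mEq/kg, then 0.5 mEq/kg every 10 minutes) applies instead.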

TABLE 38-12 Sodium Bicarbonate Administration During CPR

Indications Hyperkalemia
Preexisting metabolic acidosis
Long CPR time without blood-gas availability
Pulmonary hypertensive crisis
Dosage 1 mEq/kg intravenous or intraosseous empirically, or calculated from base deficit
Ensure adequate ventilation when administering bicarbonate
Complications Metabolic alkalosis
Impairs O2 delivery by leftward shift of the oxyhemoglobin dissociation curve
Decreases cardiac contractility
Increases possibility for fibrillation
Decreases plasma K+ and Ca2+ by intracellular shift
  Hypernatremia
  Hyperosmolarity
  Hypercapnia
  Paradoxical intracellular acidosis

CPR, Cardiopulmonary resuscitation; O2, oxygen; K+, potassium; Ca2+, calcium.

Adverse Effects

Multiple adverse effects occur with administration of sodium bicarbonate, including metabolic alkalosis, hypernatremia, hypercapnia, and hyperosmolarity; all are associated with an increased mortality rate (Mattar et al., 1974; Worthley, 1976). Metabolic alkalosis causes a leftward shift of the oxyhemoglobin dissociation curve that impairs release of oxygen from hemoglobin to tissues at a time of low cardiac output and low oxygen delivery (Bishop and Weisfeldt, 1976). Hypernatremia and hyperosmolarity may decrease organ perfusion by increasing interstitial edema in microvascular beds.

Various theoretic adverse effects are also created by administration of sodium bicarbonate. The marked hypercapnic acidosis in both systemic venous and coronary sinus blood that develops during cardiac arrest may be worsened by administration of sodium bicarbonate (Grundler et al., 1986; Weil et al., 1986). Hypercapnic acidosis in the coronary sinus may cause decreased myocardial contractility (Pannier and Leusen, 1968; Cingolani et al., 1970; Deshmukh et al., 1986). Falk et al. (1988) measured the mean venoarterial difference of Paco2 as 23.8 ± 15.1 mm Hg in five patients during CPR. In one patient, the difference increased from 16 mm Hg to 69 mm Hg after administration of sodium bicarbonate.

An additional theoretic concern is intracellular acidosis from CO2 diffusion across cell membranes despite increased serum pH with sodium bicarbonate administration. In the central nervous system, intracellular acidosis probably does not occur unless overcorrection of the pH occurs. After administration of two doses of bicarbonate of 5 mEq/kg to neonatal rabbits recovering from hypoxic acidosis, the arterial pH increased to 7.41 and the intracellular brain pH increased to prehypoxic levels (Sessler et al., 1987). A paradoxical intracellular acidosis did not develop. In a study in rats the intracellular brain adenosine triphosphate (ATP) concentration did not change during 70 minutes of extreme hypercarbia, despite a decrease in the intracellular brain pH to 6.5 (Cohen et al., 1990b). After hypercarbia, these animals could not be distinguished from normal controls, and their brains were not morphologically different from those of control animals. Eleff et al. (1995) used magnetic resonance spectroscopy to measure cerebral pH during cardiac arrest and CPR in a dog model where CPP was maintained and Pco2 was normalized with controlled ventilation. In that model, cerebral pH paralleled blood pH; administration of sodium bicarbonate did not cause a paradoxical intracerebral acidosis and instead prevented cerebral acidosis that occurred in the control group.

Calcium

The calcium ion is essential in myocardial excitation-contraction coupling and myocardial contractility, and it enhances ventricular automaticity during asystole (Greenblatt et al., 1976). Ionized hypocalcemia leads to decreased ventricular performance, peripheral vasodilation, and blunting of the hemodynamic response to catecholamines (Bristow et al., 1977; Scheidegger et al., 1977; Drop and Scheidegger, 1980; Marquez et al., 1986; Urban et al., 1986).

Based on its role in myocardial function, calcium should be of benefit in cardiac arrest, particularly PEA and asystole. However, limited retrospective and prospective adult studies have failed to demonstrate a benefit of calcium in these conditions (Harrison and Amey, 1983; Stueven et al., 1983, 1985a, 1985b). As a result of these data, the AHA’s 2000 guidelines limited recommendations for calcium administration to cardiac arrest associated with electrolyte abnormalities and toxic ingestions (Guidelines 2000 for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care, 2000).

Calcium administration during CPR has been associated with poor survival and neurologic outcomes in pediatric patients. In a single-center study, de Mos et al. (2006) reviewed 91 cardiac arrests in critically ill children over 5.5 years; on multivariate analysis, patients who received one or more calcium boluses during cardiac arrest were 5.4 times more likely to die in the hospital. In a large review of pediatric IHCA in the NRCPR database, Srinivasan et al. (2008) found that calcium was used in 45% of CPR events, despite guidelines limiting calcium use in CPR. After controlling for confounding variables, calcium administration was independently associated with decreased survival to discharge and poor neurologic outcome. However, for children with electrolyte- or toxin-associated cardiac arrest, calcium administration was not associated with worse event survival or survival to discharge.

Neonates have low intracellular calcium stores and are more dependent on serum calcium levels. Calcium administration to this population, particularly after cardiac surgery, theoretically seems indicated and has been postulated to be beneficial (Peddy et al., 2007). Srinivasan et al. (2008) analyzed calcium administration during cardiac arrest in infants after cardiac surgery as part of a large review of NRCPR (spanning 2000 to 2004); after adjusting for confounding variables, calcium administration was associated with worse event-survival but not reduced rates of survival to discharge or unfavorable neurologic outcome. However, adjusted odds ratios were low (0.4 to 0.6), and this was a subgroup analysis only.

Calcium’s association with poor outcomes may be related to its role in cellular apoptosis. In the setting of ischemia-reperfusion injury, calcium administration may worsen postischemic hypoperfusion and hasten development of intracellular events that lead to cell death. Intracellular calcium overload occurs in many pathologic conditions, including ischemia, and may be a part of the common pathway of cell death (Katz and Reuter, 1979; White et al., 1983).

Indications

The few firm indications for calcium use during CPR include cardiac arrest secondary to total or ionized hypocalcemia, hyperkalemia, hypermagnesemia, or an overdose of a calcium channel blocker (AHA, 2006b). Hypocalcemia occurs with a number of conditions that predispose to low total-body calcium stores, including the long-term use of loop diuretics. Ionized hypocalcemia may coexist with a normal total plasma calcium concentration. This occurs in the presence of severe alkalosis, which may be seen in the operating room secondary to iatrogenic hyperventilation. Ionized hypocalcemia also follows massive or rapid transfusion of citrated blood products into patients during surgery. The degree of hypocalcemia caused by citrated products depends on the rate of administration, the total dose, and the hepatic and renal function of the patient. Administration of 2 mL/kg per minute of citrated whole blood causes a significant but transient decrease in the ionized calcium in patients who have been anesthetized (Denlinger et al., 1976).

Intraoperative cardiac arrests are more likely than pediatric cardiac arrests in other settings to be caused by electrolyte abnormalities. Electrolyte imbalance, particularly hyperkalemia, caused 5% of pediatric perioperative cardiac arrests in the 2007 review of the POCA registry (Bhananker et al., 2007). Despite the limited recommendations for calcium during CPR, intraoperative cardiac arrest is more likely to have a cause for which calcium administration is beneficial.

Dosage

The dosage of calcium chloride solution is 20 mg/kg. Calcium gluconate is as effective as calcium chloride in raising the ionized calcium concentration during CPR (Heining et al., 1984). However, calcium chloride is more effective than calcium gluconate in supporting blood pressure in the hypotensive child (Broner et al., 1990). Calcium gluconate can be given at a dose of 30 to 100 mg/kg, with a maximum dose of 2 g in pediatric patients (Table 38-13). Equally rapid increases in ionized calcium after administration of calcium chloride and calcium gluconate in anhepatic patients suggest that hepatic function is not necessary for either drug to be effective (Martin et al., 1990a).

TABLE 38-13 Calcium Chloride Administration During CPR

Indications Hyperkalemia
  Hypocalcemia
  Hypermagnesemia
  Calcium channel blocker overdose
Dosage 20 mg/kg intravenously or intraosseously

CPR, Cardiopulmonary resuscitation.
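The dose ranges above reduce to simple per-kilogram arithmetic, sketched below for illustration only; the 60-mg/kg gluconate default is our own mid-range assumption from the 30-to-100-mg/kg interval:

```python
def calcium_chloride_dose_mg(weight_kg, per_kg=20):
    """Calcium chloride: 20 mg/kg."""
    return per_kg * weight_kg

def calcium_gluconate_dose_mg(weight_kg, per_kg=60, max_mg=2000):
    """Calcium gluconate: 30-100 mg/kg (60 assumed here), capped at 2 g."""
    return min(per_kg * weight_kg, max_mg)

print(calcium_chloride_dose_mg(10))   # 10-kg child: 200 mg
print(calcium_gluconate_dose_mg(10))  # 10-kg child: 600 mg
print(calcium_gluconate_dose_mg(50))  # 50-kg child: capped at 2000 mg
```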

Glucose

Glucose administration during and after CPR should be restricted to documented hypoglycemia because of the detrimental effects of hyperglycemia during brain ischemia. Myers (1979) first hypothesized that hyperglycemia worsens the neurologic outcome after cardiac arrest. Siemkowicz and Hansen (1978) confirmed this finding when they found that after 10 minutes of global brain ischemia, neurologic recovery of hyperglycemic rats was worse than in normoglycemic control animals. Hyperglycemia exaggerates ischemic neurologic injury by increasing lactic acid production in the brain by anaerobic metabolism. During ischemia under normoglycemic conditions, brain lactate concentration plateaus. However, when hyperglycemia is present, lactate concentration in the brain continues to rise for the duration of the ischemic period (Siesjo, 1984). The severity of intracellular acidosis during brain ischemia is directly proportional to the preischemic plasma glucose concentration.

Clinical studies have shown a direct correlation between the initial glucose concentration after cardiac arrest and a poor neurologic outcome (Pulsinelli et al., 1983; Longstreth and Inui, 1984; Woo et al., 1988; Ashwal et al., 1990). Longstreth et al. (1986) suggested that a higher plasma glucose concentration at admission may be an endogenous response to severe stress and not the cause of more severe brain injury. Losert et al. (2008) retrospectively examined blood glucose levels in adults 12 hours after ROSC and found that after controlling for confounding variables, normoglycemia and even mild hyperglycemia were associated with survival 6 months after arrest and good neurologic outcome. They concluded that glucose control goals after cardiac arrest need not be strict normoglycemia. Despite a lack of evidence in children, given the likelihood of additional ischemic events during the postresuscitation period, it seems warranted to maintain serum glucose in the normal range.

Voll and Auer (1988) showed that administration of insulin to hyperglycemic rats after global brain ischemia improved the neurologic outcome. Similarly, Katz et al. (1998) found that insulin and glucose administration after asphyxial cardiac arrest in rats improved neurologic outcome and histologic findings; the combination of insulin and glucose had superior outcomes compared with either drug individually or saline placebo. The effect of insulin may be independent of its glucose-lowering properties, because an additional study by Voll and Auer (1991) found that insulin-treated normoglycemic rats had a better outcome than placebo-treated controls. Infants, patients with hepatic disease, and debilitated patients with low endogenous glycogen stores are prone to hypoglycemia when energy requirements rise. In these patients, bedside monitoring of the serum glucose level is critical during the perioperative period and cardiac arrest episodes.