
12 Intraoperative Transesophageal Echocardiography

Key points

Few areas in cardiac anesthesia have developed as rapidly as the field of intraoperative echocardiography. In the early 1980s, when transesophageal echocardiography (TEE) was first used in the operating room, its main application was the assessment of global and regional left ventricular (LV) function. Since that time, there have been numerous technical advances: biplane and multiplane probes; multifrequency probes; enhanced scanning resolution; color-flow Doppler (CFD), pulsed-wave (PW) Doppler, and continuous-wave (CW) Doppler; automatic edge detection; Doppler tissue imaging (DTI); three-dimensional (3D) reconstruction; and digital image processing. With these advances, the number of clinical applications of TEE has increased markedly. The common applications of TEE include: (1) assessment of valvular anatomy and function, (2) evaluation of the thoracic aorta, (3) detection of intracardiac defects, (4) detection of intracardiac masses, (5) evaluation of pericardial effusions, (6) detection of intracardiac air and clots, (7) assessment of biventricular systolic and diastolic function, and (8) evaluation of myocardial ischemia and regional wall motion abnormalities (RWMAs). In many of these evaluations, TEE is able to provide unique and critical information that previously was not available in the operating room (Box 12-1).

Basic concepts


Properties of Ultrasound

In echocardiography, the heart and great vessels are insonated with ultrasound, which is sound above the human audible range. The ultrasound is sent into the thoracic cavity and is partially reflected by the cardiac structures. From these reflections, distance, velocity, and density of objects within the chest are derived.

An ultrasound beam is a continuous or intermittent train of sound waves emitted by a transducer or wave generator. It is composed of density or pressure waves and can exist in any medium with the exception of a vacuum (Figure 12-1). Ultrasound waves are characterized by their wavelength, frequency, and velocity.1 Wavelength is the distance between the two nearest points of equal pressure or density in an ultrasound beam, and velocity is the speed at which the waves propagate through a medium. As the waves travel past any fixed point in an ultrasound beam, the pressure cycles regularly and continuously between a high and a low value. The number of cycles per second (Hertz) is called the frequency of the wave. Ultrasound is sound with frequencies above 20,000 Hz, which is the upper limit of the human audible range. The relationship among the frequency (f), wavelength (λ), and velocity (v) of a sound wave is defined by the following formula:

v = f × λ    (Eq. 1)

Figure 12-1 A sound wave is a series of compressions and rarefactions.

The combination of one compression and one rarefaction represents one cycle. The distance between the onset (peak compression) of one cycle and the onset of the next is the wavelength.

(From Thys DM, Hillel Z: How it works: Basic concepts in echocardiography. In Bruijn NP, Clements F [eds]: Intraoperative use of echocardiography. Philadelphia: JB Lippincott, 1991.)


The velocity of sound varies with the properties of the medium through which it travels. In low-density gases, molecules must traverse long distances before encountering adjacent molecules, so ultrasound velocity is relatively slow. In contrast, in solids, where molecules are tightly constrained, ultrasound velocity is relatively high. For soft tissues, this velocity approximates 1540 m/sec but varies from 1475 to 1620 m/sec. In comparison, the velocity of ultrasound is 330 m/sec in air and 3360 m/sec in bone. Because the frequency of an ultrasound beam is determined by the properties of the emitting transducer, and the velocity through soft tissue is approximately constant, wavelength is inversely proportional to the ultrasound frequency.
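Because wavelength sets the lower bound on the size of structures that can be resolved, it is useful to see the wavelengths that correspond to common imaging frequencies. A minimal Python sketch, assuming the nominal soft-tissue velocity of 1540 m/sec quoted above and a few illustrative TEE frequencies:

    # Wavelength in soft tissue: wavelength = v / f  (rearranged from Eq. 1)
    c_soft_tissue = 1540.0  # m/sec, nominal velocity in soft tissue
    for f_mhz in (2.5, 5.0, 7.5):  # illustrative imaging frequencies
        wavelength_mm = c_soft_tissue / (f_mhz * 1e6) * 1e3
        print(f"{f_mhz:.1f} MHz -> wavelength of about {wavelength_mm:.2f} mm")
    # 2.5 MHz -> ~0.62 mm, 5.0 MHz -> ~0.31 mm, 7.5 MHz -> ~0.21 mm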

Ultrasound waves transport energy through a given medium; the rate of energy transport is known as "power," which is usually expressed in joules per second, or watts.1 Because medical ultrasound usually is concentrated in a small area, the strength of the beam is expressed as power per unit area, or "intensity." Intensity, in turn, is usually expressed relative to a standard intensity; for example, the intensity of the original ultrasound signal may be compared with that of the reflected signal. Because ultrasound amplitudes may vary by a factor of 10⁵ or greater, they usually are expressed on a logarithmic scale. The usual unit for intensity comparisons is the decibel, which is defined as:

dB = 10 × log10 (I1/I0)    (Eq. 2)

where I1 is the intensity of the wave to be compared and I0 is the intensity of the reference wave.

Notably, positive values imply a wave of greater intensity than the reference wave, and negative values indicate a lower intensity. Increasing the wave’s intensity by a factor of 10 adds 10 dB to the decibel measurement and doubling the intensity adds 3 dB.
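Both rules of thumb follow directly from the definition in Equation 2, as the short Python sketch below shows (the intensity values are arbitrary examples):

    import math

    def intensity_db(i1, i0):
        """Relative intensity in decibels: dB = 10 * log10(I1 / I0)  (Eq. 2)."""
        return 10 * math.log10(i1 / i0)

    print(intensity_db(10, 1))   # 10.0  -> a 10-fold increase adds 10 dB
    print(intensity_db(2, 1))    # ~3.01 -> doubling adds about 3 dB
    print(intensity_db(0.5, 1))  # ~-3.01 -> weaker than the reference gives a negative value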


Ultrasound Beam

Piezoelectric crystals convert between ultrasound and electrical signals. Most piezoelectric crystals used in clinical applications are man-made ceramic ferroelectrics, the most common of which are barium titanate, lead metaniobate, and lead zirconate titanate. When driven with a high-frequency electrical signal, these crystals produce ultrasound energy; conversely, when they receive an ultrasonic vibration, they produce an alternating-current electrical signal. Commonly, a short ultrasound pulse, typically 1 to 2 microseconds in duration, is emitted from the piezoelectric crystal and directed toward the area to be imaged. After emitting the pulse, the crystal "listens" for the returning echoes for a given period and then pauses before repeating the cycle. The rate at which this cycle repeats is known as the "pulse repetition frequency" (PRF). Each cycle must be long enough for a signal to travel to and return from the object of interest. Typically, the PRF varies from 1 to 10 kHz, which results in 0.1 to 1 millisecond between pulses. When reflected ultrasound waves return to the piezoelectric crystal, they are converted into electrical signals, which can be processed and displayed. Electronic circuits measure the time delay between the emitted pulse and the received echo. Because the speed of ultrasound through tissue is essentially constant, this time delay can be converted into the precise distance between the transducer and the tissue. The amplitude, or strength, of the returning ultrasound signal provides information about the characteristics of the insonated tissue.
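The range calculation described above can be made concrete with a brief sketch. Assuming the nominal 1540 m/sec tissue velocity, the round-trip delay fixes the reflector depth, and the deepest structure of interest in turn caps the usable PRF:

    C_TISSUE = 1540.0  # m/sec, assumed speed of sound in soft tissue

    def reflector_depth_cm(round_trip_delay_s, c=C_TISSUE):
        """Echo ranging: depth = c * delay / 2 (the pulse travels out and back)."""
        return c * round_trip_delay_s / 2 * 100

    def max_prf_hz(max_depth_m, c=C_TISSUE):
        """The next pulse cannot be sent before the deepest echo has returned."""
        return c / (2 * max_depth_m)

    print(f"{reflector_depth_cm(130e-6):.1f} cm")            # a 130-microsecond delay -> ~10 cm
    print(f"{max_prf_hz(0.10) / 1e3:.1f} kHz maximum PRF")   # ~7.7 kHz for a 10-cm imaging depth

The result sits within the 1 to 10 kHz PRF range quoted above.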

The 3D shape of the ultrasound beam is dependent on both physical aspects of the ultrasound signal and the design of the transducer. An unfocused ultrasound beam may be thought of as an inverted funnel, where the initial straight columnar area is known as the “near field” (also known as Fresnel zone) followed by a conical divergent area known as the “far field” (also known as Fraunhofer zone). The length of the “near field” is directly proportional to the square of the transducer diameter and inversely proportional to the wavelength; specifically,

Fn = D² / 4λ    (Eq. 3)

where Fn is the near-field length, D is the diameter of the transducer, and λ is the ultrasound wavelength. Increasing the frequency of the ultrasound increases the length of the near field. In this near field, most energy is confined to a beam width no greater than the transducer diameter. Long Fresnel zones are preferred in medical ultrasonography and may be achieved with large-diameter transducers and high-frequency ultrasound. The angle of far-field divergence (θ) is directly proportional to the wavelength and inversely proportional to the diameter of the transducer, and is expressed by the equation:

sin θ = 1.22 λ/D    (Eq. 4)

The beam geometry can be shaped further with acoustic lenses or by shaping the piezoelectric crystal itself. Ideally, imaging should be performed within the near-field, or focused, portion of the ultrasound beam, where the beam is most parallel and most intense and where tissue interfaces are most nearly perpendicular to it.
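A brief numerical sketch of Equations 3 and 4, assuming a 10-mm circular aperture and the 1540 m/sec soft-tissue velocity (both illustrative values), shows how raising the frequency lengthens the near field and narrows the far-field divergence:

    import math

    C_TISSUE = 1540.0  # m/sec, assumed

    def near_field_cm(aperture_m, f_hz, c=C_TISSUE):
        lam = c / f_hz
        return aperture_m ** 2 / (4 * lam) * 100          # Fn = D^2 / (4 * lambda), Eq. 3

    def divergence_deg(aperture_m, f_hz, c=C_TISSUE):
        lam = c / f_hz
        return math.degrees(math.asin(1.22 * lam / aperture_m))  # sin(theta) = 1.22*lambda/D, Eq. 4

    D = 0.010  # 10-mm transducer aperture (illustrative)
    for f in (2.5e6, 5.0e6, 7.5e6):
        print(f"{f / 1e6:.1f} MHz: near field ~{near_field_cm(D, f):.1f} cm, "
              f"far-field divergence ~{divergence_deg(D, f):.1f} degrees")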


Attenuation, Reflection, and Scatter

Waves interact with the medium in which they travel and with one another. Interaction among waves is called interference. The manner in which waves interact with a medium is determined by the medium's density and homogeneity. When a wave propagates through an inhomogeneous medium (and all living tissue is essentially inhomogeneous), it is partly reflected, partly absorbed, and partly scattered.

Ultrasound waves are reflected when the width of the reflecting object is larger than one fourth of the ultrasound wavelength. Because the velocity of sound in soft tissue is approximately constant, shorter wavelengths are obtained by increasing the frequency of the ultrasound beam (see Eq. 1). Large objects may be visualized using low frequencies (i.e., long wavelengths), whereas smaller objects require higher frequencies (i.e., short wavelengths) for visualization. In addition, the object's ultrasonic impedance (Z) must differ significantly from that of the medium in front of it. The ultrasound impedance of a given medium is equal to the medium's density multiplied by the ultrasound propagation velocity. Air has a low density and propagation velocity, so it has a low ultrasound impedance. Bone has a high density and propagation velocity, so it has a high ultrasound impedance. For normal incidence, the fraction of the incident pulse that is reflected is:

Ir = [(Z2 − Z1) / (Z2 + Z1)]²    (Eq. 5)

where Ir is the intensity reflection coefficient, and Z1 and Z2 are the acoustic impedances of the two media.

The greater the difference in ultrasound impedance between two media at a given interface, the greater the ultrasound reflection. Because the ultrasound impedances of air and bone differ markedly from that of blood, ultrasound is strongly reflected at these interfaces, limiting the energy available to image deeper structures; echo studies across the lung, other gas-containing tissues, or bone are therefore not feasible. Reflected echoes, also called "specular echoes," usually are much stronger than scattered echoes. A grossly inhomogeneous medium, such as a stone in a water bucket or a cardiac valve in a blood-filled heart chamber, produces strong specular reflections at the water-stone or blood-valve interface because of the large differences in ultrasound impedance. Furthermore, if the interface between the two media is not perpendicular to the ultrasound beam, the reflected signal may be deflected at an angle and may not return to the transducer for imaging.
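Plugging ballpark impedance values into Equation 5 shows why soft-tissue interfaces image well while air and bone block the beam. This is only a rough sketch; the impedances (in MRayl) are approximate reference values, not taken from this chapter:

    def reflected_fraction(z1, z2):
        """Intensity reflection coefficient at normal incidence (Eq. 5)."""
        return ((z2 - z1) / (z2 + z1)) ** 2

    # Approximate acoustic impedances in MRayl (illustrative values)
    Z = {"blood": 1.66, "muscle": 1.70, "air": 0.0004, "bone": 7.8}

    print(f"blood-muscle interface: {reflected_fraction(Z['blood'], Z['muscle']):.4%} reflected")
    print(f"blood-bone interface:   {reflected_fraction(Z['blood'], Z['bone']):.0%} reflected")
    print(f"blood-air interface:    {reflected_fraction(Z['blood'], Z['air']):.1%} reflected")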

In contrast, if the objects are small compared with the wavelength, the ultrasound wave will be scattered. Media that are inhomogeneous at the microscopic level, such as muscle, produce more scatter than specular reflection because the differences in adjacent ultrasound impedances are low and the objects are small. These small objects will produce echoes that reflect throughout a large range of angles with only a small percentage of the original signal reaching the ultrasound transducer. Scattered ultrasound waves will combine in constructive and destructive fashions with other scattered waves, producing an interference pattern known as “speckle.” Compared with specular echoes, the returning ultrasound signal amplitude will be lower and displayed as a darker signal. Although smaller objects can be visualized with higher frequencies, these higher frequencies result in greater signal attenuation, limiting the depth of ultrasound penetration.

Attenuation refers to the loss of ultrasound power as the beam traverses tissue. Tissue attenuation depends on ultrasound reflection, scattering, and absorption. The greater the reflection and scattering, the less ultrasound energy remains for penetration and resolution of deeper structures; this effect is especially important during scanning with higher frequencies. In normal circumstances, however, absorption is the most significant factor in ultrasound attenuation.2 Absorption occurs as a result of the oscillation of tissue caused by the transit of the ultrasound wave. These tissue oscillations result in friction, with the conversion of ultrasound energy into heat. More specifically, the transit of an ultrasound wave through a medium causes molecular displacement. This displacement requires the conversion of kinetic energy into potential energy as the molecules are compressed; at the time of maximal compression, the potential energy is maximal and the kinetic energy minimal. The movement of molecules from their compressed location back to their original location requires conversion of this potential energy back into kinetic energy. In most cases, this energy conversion (kinetic into potential energy or vice versa) is not 100% efficient and results in energy loss as heat.1

Absorption depends on both the material through which the ultrasound passes and the ultrasound frequency. The attenuation produced by a given thickness of material, x, may be described by:

Attenuation (dB) = a × freq × x    (Eq. 6)

where a is the attenuation coefficient in decibels (dB) per centimeter at 1 MHz, freq is the ultrasound frequency in megahertz (MHz), and x is the path length in centimeters.

Examples of attenuation coefficient values are given in Table 12-1. Whereas water, blood, and muscle have low ultrasound attenuation, air and bone have very high attenuation, limiting the ability of ultrasound to traverse these structures. Table 12-2 gives the distance in various tissues at which the intensity of a 2-MHz ultrasound wave is halved (the half-power distance).

TABLE 12-1 Attenuation Coefficients

Material Coefficient (dB/cm/MHz)
Water 0.002
Fat 0.66
Soft tissue 0.9
Muscle 2
Air 12
Bone 20
Lung 40

TABLE 12-2 Half-Power Distances at 2 MHz

Material Half-Power Distance (cm)
Water 380
Blood 15
Soft tissue (except muscle) 1–5
Muscle 0.6–1
Bone 0.2–0.7
Air 0.08
Lung 0.05
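The practical consequence of Equation 6 and Table 12-1 can be sketched numerically. The example below estimates the round-trip loss through soft tissue for a target 10 cm deep and the one-way half-power (3-dB) depth for muscle at 2 MHz; the two tables above were compiled from different measurements, so the comparison is only approximate:

    def attenuation_db(a_db_per_cm_mhz, freq_mhz, path_cm):
        """Eq. 6: attenuation (dB) = a * frequency (MHz) * path length (cm)."""
        return a_db_per_cm_mhz * freq_mhz * path_cm

    # Round-trip loss for a reflector 10 cm deep in soft tissue (a ~0.9, Table 12-1)
    for f in (2.5, 5.0, 7.5):
        print(f"{f:.1f} MHz: ~{attenuation_db(0.9, f, 20):.0f} dB round-trip loss")
        # higher frequencies are attenuated far more, limiting penetration

    # One-way depth at which half the power (3 dB) is lost in muscle at 2 MHz
    print(f"muscle half-power depth: ~{3 / (2 * 2):.2f} cm (compare Table 12-2)")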

Imaging techniques


Harmonic Imaging

Harmonic frequencies are integer multiples of the transmitted (fundamental) frequency. For example, if the fundamental frequency is 4 MHz, the second harmonic is 8 MHz, the third harmonic is 12 MHz, and so on. Harmonic imaging refers to a B-mode technique in which the ultrasound signal is transmitted at a given frequency but the transducer "listens" at one of its harmonic frequencies.3,4 As ultrasound is transmitted through a tissue, the tissue undergoes slight compressions and expansions that correspond to the ultrasound wave and temporarily change the local tissue density. Because the velocity of ultrasound transit increases with density, the peaks of the wave travel slightly faster than the troughs. This differential velocity of the peaks relative to the troughs distorts the propagated sine wave into a more peaked waveform, which contains the fundamental frequency as well as its harmonic frequencies (Figure 12-2). Although little distortion occurs in the near field, the amount of energy contained within these harmonics increases with the distance traversed as the ultrasound wave becomes more peaked. Eventually, the effects of attenuation become more pronounced on these harmonic waves, with a subsequent decrease in harmonic amplitude. Because the effects of attenuation are greatest with high-frequency ultrasound, the second harmonic usually is used.

The use of tissue harmonic imaging is associated with improved B-mode imaging. Near-field scatter is common with fundamental imaging. Because the ultrasound wave has not yet been distorted, little harmonic energy is generated in the near field, minimizing near-field scatter when harmonic imaging is used. Because higher frequencies are used, greater resolution may be obtained. Finally, with tissue harmonic imaging, side-lobe artifacts are substantially reduced and lateral resolution is increased.
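The generation of harmonics from a distorted (peaked) wave can be illustrated with a toy numerical sketch. The phase-modulated sine below is only a crude stand-in for nonlinear propagation, not a physical model, but its spectrum shows energy appearing at integer multiples of the assumed 4-MHz fundamental:

    import numpy as np

    f0 = 4e6                        # fundamental frequency (Hz)
    fs = 200e6                      # simulation sampling rate (Hz)
    t = np.arange(0, 4e-6, 1 / fs)  # 4 microseconds of signal

    # Crude stand-in for progressive waveform steepening during propagation
    distorted = np.sin(2 * np.pi * f0 * t + 0.5 * np.sin(2 * np.pi * f0 * t))

    spectrum = np.abs(np.fft.rfft(distorted))
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    for k in (1, 2, 3):
        idx = int(np.argmin(np.abs(freqs - k * f0)))
        print(f"{k * f0 / 1e6:.0f} MHz: relative amplitude {spectrum[idx] / spectrum.max():.2f}")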


Doppler Techniques

Most modern echo scanners combine Doppler capabilities with 2D imaging. After the desired view of the heart has been obtained by 2DE, the Doppler beam, represented by a cursor, is superimposed on the 2D image. The operator positions the cursor as parallel as possible to the assumed direction of blood flow and then empirically adjusts the direction of the beam to optimize the audio and visual representations of the reflected Doppler signal. Currently, Doppler technology can be used in at least four different ways to measure blood velocities: pulsed wave, high pulse repetition frequency, continuous wave, and color flow. Although each of these methods has specific applications, they are seldom available concurrently.

The Doppler Effect

Information on blood flow dynamics can be obtained by applying Doppler frequency shift analysis to echoes reflected by the moving red blood cells.5,6 Blood flow velocity, direction, and acceleration can be instantaneously determined. This information is different from that obtained in 2D imaging, and hence complements it.

The Doppler principle as applied in echocardiography states that the frequency of ultrasound reflected by a moving target (red blood cells) will differ from the frequency of the emitted ultrasound. The magnitude and direction of the frequency shift are related to the velocity and direction of the moving target. The velocity of the target is calculated with the Doppler equation:

v = (c × fd) / (2 × f0 × cos θ)    (Eq. 7)

where v = the target velocity (blood flow velocity); c = the speed of sound in tissue; fd = the frequency shift; f0 = the frequency of the emitted ultrasound; and θ = the angle between the ultrasound beam and the direction of the target velocity (blood flow). Rearranging the terms,

fd = (2 × f0 × v × cos θ) / c    (Eq. 8)

As is evident in Equation 8, the greater the velocity of the object of interest, the greater the Doppler frequency shift. In addition, the magnitude of the frequency shift is directly proportional to the initial emitted frequency (Figure 12-3). Low emitted frequencies produce low Doppler frequency shifts, whereas higher emitted frequencies produce high Doppler frequency shifts. This phenomenon becomes important with aliasing, as is discussed later in this chapter. Furthermore, the only ambiguity in Equation 7 is that theoretically the direction of the ultrasonic signal could refer to either the transmitted or the received beam; however, by convention, Doppler displays are made with reference to the received beam; thus, if the blood flow and the reflected beam travel in the same direction, the angle of incidence is zero degrees and the cosine is +1. As a result, the frequency of the reflected signal will be higher than the frequency of the emitted signal.

Equipment currently used in clinical practice displays Doppler blood-flow velocities as waveforms. The waveforms consist of a spectral analysis of velocities on the ordinate and time on the abscissa. By convention, blood flow toward the transducer is represented above the baseline. If the blood flows away from the transducer, the angle of incidence will be 180 degrees, the cosine will equal -1, and the waveform will be displayed below the baseline. When the blood flow is perpendicular to the ultrasonic beam, the angle of incidence will be 90 or 270 degrees, the cosine of either angle will be zero, and no blood flow will be detected. Because the cosine of the angle of incidence is a variable in the Doppler equation, blood-flow velocity is measured most accurately when the ultrasound beam is parallel or antiparallel to the direction of blood flow. In clinical practice, a deviation from parallel of up to 20 degrees can be tolerated because this results in an error of only 6% or less.
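A short sketch of Equation 7 confirms the 20-degree rule of thumb; the 5-MHz carrier and 3.9-kHz shift below are arbitrary illustrative numbers:

    import math

    C_TISSUE = 1540.0  # m/sec

    def doppler_velocity(fd_hz, f0_hz, theta_deg=0.0, c=C_TISSUE):
        """Eq. 7: v = c * fd / (2 * f0 * cos(theta))."""
        return c * fd_hz / (2 * f0_hz * math.cos(math.radians(theta_deg)))

    fd, f0 = 3.9e3, 5.0e6
    assumed = doppler_velocity(fd, f0, theta_deg=0)    # operator assumes a parallel beam
    actual = doppler_velocity(fd, f0, theta_deg=20)    # beam is really 20 degrees off axis
    print(f"assumed {assumed:.2f} m/s, actual {actual:.2f} m/s, "
          f"underestimate {(actual - assumed) / actual:.1%}")   # about 6%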

Pulsed-Wave Doppler

In PW Doppler, blood-flow parameters can be determined at precise locations within the heart by emitting repetitive short bursts of ultrasound at a specific repetition rate (the PRF) and analyzing the frequency shift of the reflected echoes at an identical sampling frequency (fs). A time delay between the emission of the ultrasound burst and the sampling of the reflected signal determines the depth at which the velocities are sampled; the delay is proportional to the distance between the transducer and the location of the velocity measurement. To sample at a given depth (D), sufficient time must be allowed for the signal to travel a distance of 2 × D (from the transducer to the sample volume and back). The time delay, Td, between the emission of the signal and the reception of the reflected signal is related to D and to the speed of sound in tissue (c) by the following formula:

Td = 2D / c    (Eq. 9)

The operator varies the depth of sampling by varying the time delay between the emission of the ultrasonic signal and the sampling of the reflected wave. In practice, the sampling location or sample volume is represented by a small marker, which can be positioned at any point along the Doppler beam by moving it up or down the Doppler cursor. On some devices, it is also possible to vary the width and height of the sample volume.

The trade-off for the ability to measure flow at precise locations is that ambiguous information is obtained when flow velocity is very high. Information theory dictates that an unknown periodic signal must be sampled at least twice per cycle to determine even rudimentary information such as its fundamental frequency; therefore, the PRF of PW Doppler must be at least twice the Doppler-shift frequency produced by the flow.7 If it is not, the frequency shift is said to be "undersampled": the shift is sampled so infrequently that the frequency reported by the instrument is erroneously low.1

A simple reference to Western movies illustrates this point. When a stagecoach gets under way, its wheel spokes are seen rotating in the correct direction. As soon as a certain speed is attained, the wheels appear to rotate in the reverse direction because the camera frame rate is too slow to correctly capture the motion of the spokes. In PW Doppler, the ambiguity exists because the measured Doppler frequency shift (fd) and the sampling frequency (fs) are in the same frequency (kHz) range. Ambiguity is avoided only if fd is less than half the sampling frequency:

fd < fs / 2    (Eq. 10)

The expression fs/2 is also known as the Nyquist limit. Doppler shifts above the Nyquist limit will create artifacts described as “aliasing” or “wraparound,” and blood-flow velocities will appear in a direction opposite to the conventional one (Figure 12-4). Blood flowing with high velocity toward the transducer will result in a display of velocities above and below the baseline. The maximum velocity that can be detected without aliasing is dictated by:

Vm = c² / (8 × R × f0)    (Eq. 11)

where Vm = the maximal velocity that can be unambiguously measured; c = the speed of sound in tissue; R = the range or distance from the transducer at which the measurement is to be made; and f0 = the frequency of emitted ultrasound.

Based on Equation 11, this aliasing artifact can be avoided by minimizing either R or f0. Decreasing the depth of the sample volume in essence increases fs; this higher sampling frequency allows more accurate determination of higher Doppler shift frequencies (i.e., higher velocities). Furthermore, because f0 is directly related to fd (see Eq. 7), a lower emitted ultrasound frequency produces a lower Doppler frequency shift for a given velocity (see Figure 12-3). This lower Doppler frequency shift allows a higher velocity to be measured before aliasing occurs.
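Equation 11 can be tabulated directly. The sketch below, assuming the 1540 m/sec tissue velocity and a few illustrative depths and carrier frequencies, shows how shallower sample volumes and lower frequencies raise the aliasing limit:

    C_TISSUE = 1540.0  # m/sec

    def max_unaliased_velocity(range_m, f0_hz, c=C_TISSUE):
        """Eq. 11: Vm = c**2 / (8 * R * f0)."""
        return c ** 2 / (8 * range_m * f0_hz)

    for range_cm in (5, 10, 15):                   # sample-volume depths
        for f0 in (2.5e6, 5.0e6):                  # emitted frequencies
            vm = max_unaliased_velocity(range_cm / 100, f0)
            print(f"R = {range_cm:>2d} cm, f0 = {f0 / 1e6:.1f} MHz: Vm ~ {vm:.2f} m/s")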

Continuous-Wave Doppler

The CW Doppler technique uses continuous rather than discrete pulses of ultrasound. Ultrasound waves are continuously transmitted and received by separate transducers. As a result, the region in which flow dynamics are measured cannot be precisely localized. Because a large range of depths is simultaneously insonated, a large range of frequencies, corresponding to a large range of blood-flow velocities, is returned to the transducer. This large velocity range is known as "spectral broadening" and contrasts with the homogeneous envelope obtained with PW Doppler (Figure 12-5). Blood-flow velocity is, however, measured with great accuracy even at high flows because the sampling frequency is very high. CW Doppler is particularly useful for the evaluation of patients with valvular lesions or congenital heart disease (CHD), in whom high-pressure, high-velocity signals are anticipated. It also is the preferred technique when attempting to derive hemodynamic information from Doppler signals (Box 12-2).

Color-Flow Doppler

Advances in electronics and computer technology have allowed the development of CFD ultrasound scanners capable of displaying real-time blood flow within the heart as colors while also showing 2D images in black and white. In addition to showing the location, direction, and velocity of cardiac blood flow, the images produced by these devices allow estimation of flow acceleration and differentiation of laminar and turbulent blood flow. CFD echocardiography is based on the principle of multigated PW Doppler, in which blood-flow velocities are sampled at many locations along many lines covering the entire imaging sector.8 At the same time, the sector also is scanned to generate a 2D image.

A location in the heart where the scanner has detected flow toward the transducer (the top of the image sector) is assigned the color red; flow away from the transducer is assigned the color blue. This color assignment is arbitrary and is determined by the equipment manufacturer and the user's color map. In the most common color-flow coding scheme, the faster the velocity (up to a limit), the more intense the color. Flow velocities that change by more than a preset value within a brief time interval (flow variance) have an additional hue added to the red or the blue. Both rapidly accelerating laminar flow (change in flow speed) and turbulent flow (change in flow direction) satisfy the criteria for rapid changes in velocity. In summary, the brightness of the red or blue color at any location and time is usually proportional to the corresponding flow velocity, whereas the hue reflects the temporal rate of change of the velocity.
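The color-coding logic described above can be summarized in a toy mapping function. This is only an illustrative sketch of one possible scheme (actual maps are manufacturer specific): direction sets red versus blue, speed sets brightness up to the aliasing limit, and a variance term adds a green tint to flag turbulent or rapidly accelerating flow.

    def velocity_to_rgb(velocity, nyquist_velocity, variance=0.0):
        """Toy color-flow map: positive velocity (toward transducer) -> red, negative -> blue."""
        brightness = min(abs(velocity) / nyquist_velocity, 1.0)
        red, blue = (brightness, 0.0) if velocity >= 0 else (0.0, brightness)
        green = min(variance, 1.0)          # variance tint for turbulent/accelerating flow
        return (red, green, blue)

    print(velocity_to_rgb(0.40, nyquist_velocity=0.6))                 # laminar flow toward probe
    print(velocity_to_rgb(-0.40, nyquist_velocity=0.6, variance=0.8))  # turbulent flow away: mosaic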


Three-Dimensional Reconstruction

Echocardiography has become a vital tool in contemporary cardiac anesthesiology. As with any technology, considerable evolution has occurred since it was first introduced into the operating room in the early 1980s. Among the most important advances has been the progression from one-dimensional (1D; e.g., M-mode) imaging to 2D imaging, as well as spectral Doppler and real-time color-flow mapping superimposed on the 2D image. The heart, however, remains a 3D organ. Although multiplane 2D images can be acquired easily with modern TEE probes by simply rotating the image plane electronically from 0 to 180 degrees, the echocardiographer must still stitch the different 2D planes together mentally to create a 3D image. Transmitting this "mental" image to other members of the surgical team can be quite challenging. By directly displaying a 3D image on the monitor, cardiac anatomy and function can be assessed more rapidly, and communication between the echocardiographer and the cardiac surgeon is facilitated before, during, and immediately after surgery.9

Historic Overview

Early concepts of 3D echocardiography (3DE) have their roots in the 1970s.10 Because of the hardware and software limitations of that era, the acquisition times required to create a 3D image prevented widespread clinical acceptance and limited the technique to research use. Technologic advances in the 1990s enabled 3D reconstruction from multiple 2D images obtained from different imaging planes. By capturing an image every 2 to 3 degrees as the probe rotated 180 degrees around a specific region of interest (ROI), high-powered computers could produce a 3D image, which could be refined further with postprocessing software. These image planes must be acquired with electrocardiographic and respiratory gating to overcome motion artifact. The limitations of this technology are the time required to process and optimize the 3D image and the inability to obtain instantaneous, real-time imaging of the heart.
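The acquisition-time penalty of this rotational approach is easy to estimate. A rough sketch, assuming one ECG-gated cardiac cycle per imaging plane at 70 beats/min (both values are illustrative, not from the text):

    step_deg = 3                       # rotation increment per plane (2-3 degrees as described above)
    heart_rate_bpm = 70                # assumed heart rate
    planes = 180 // step_deg           # planes needed to cover the 180-degree rotation
    acquisition_s = planes * 60 / heart_rate_bpm
    print(f"{planes} planes -> ~{acquisition_s:.0f} s of gated acquisition, "
          "before respiratory gating, dropped beats, or postprocessing")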

In 2007, a real-time 3D TEE probe with a matrix array of piezoelectric crystals within the transducer head was released on the market. Unlike a conventional 2D imaging transducer, whose elements are arranged along a single 1D line, the 3D matrix array has both rows and columns of elements. That is, instead of a single row of 128 elements, the matrix array comprises more than 50 rows and 50 columns of elements (Figure 12-6). Although this "matrix" technology was already available for transthoracic (precordial) scanning, a breakthrough in engineering design was required before it could fit within the limited space of the head of a TEE probe.

Display of Three-Dimensional Images
