
CHAPTER 23 Physics and Instrumentation of Cardiac Positron Emission Tomography/Computed Tomography

This chapter concerns the physics of cardiac PET/CT and the differences between scanning the heart and scanning other parts of the body. Because the goal is to image a beating organ, and because stress can cause the diaphragm and heart to move within the thoracic cavity, cardiac imaging has pitfalls that are less important for whole-body imaging. These include obtaining a proper attenuation correction, imaging at the correct time relative to injection, and determining whether the patient moved during the study. Before these issues can be addressed, some background on data acquisition is necessary.

SCANNER PHYSICS

Positron emission tomography (PET) takes advantage of the unique characteristics of positron decay. A proton-rich nucleus, such as 82Rb or 18F, can eliminate its excess charge by emitting a positron, which is the antiparticle of the electron. The positron will scatter around in the body (within a millimeter or so of where the decay took place) until it meets an electron, and then they are annihilated. The annihilation converts the mass of the positron and electron into energy, in this case two 511-keV photons that travel in nearly opposite directions (Fig. 23-1). If both of these photons can be detected, then it is known that there is activity somewhere along the line between the two responding detectors. After enough of these events have been recorded, the information can be combined to form an image.

At the heart of a PET camera are scintillation detectors. A scintillation detector is a crystal that gives off many low-energy photons when a high-energy photon interacts with its molecules. The low-energy photons are collected by photomultiplier tubes, which convert them into an electronic signal. The precise time of arrival of the event and the energy of the event are recorded. This is diagrammed in Figure 23-2. The timing is critical because it is used to decide if two photons came from the same annihilation. If two photons are detected within a very short time (called the time window, typically 3 to 12 ns, depending on the type of detector used in the scanner), it is assumed that they were created from a single positron-electron annihilation that occurred somewhere on the line that connects the two recording detectors. This is called a coincident event, and the line is termed a line of response. If the time between detecting two photons is greater than the time window, the two detected events must have originated from two separate annihilations because light travels at approximately 0.3 m/ns and the scanner is only about 1 m in diameter. The primary data set from the scanner is the number of coincident events that are recorded for each line of response.

Recording the energy of the event is important to determine first if the photon came from a positron annihilation and then if it scattered off tissue in the body on its way to the detector. If the photon arrives with 511 keV of energy, it is overwhelmingly likely that it originated in a positron-electron annihilation. This is useful for preventing background events of different energy from entering the data stream. The detectors are not perfect, and there are some physical effects that cause the energy to vary, so scanner electronics are generally set to accept events with a range of energies, typically 430 to 650 keV. One of the physical effects is scatter. When a photon scatters, it loses some energy to the scatterer. Hence, with better energy resolution, the number of contaminating background and scattered events accepted into the primary data set can be reduced.
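
The acceptance logic described in the last two paragraphs can be summarized in a short sketch. This is a minimal illustration rather than scanner firmware: the event-tuple layout, the 6-ns window, and the energy thresholds are assumptions chosen to match the typical values quoted above, and a real coincidence processor compares all pairs inside the window rather than only time-adjacent ones.

```python
TIME_WINDOW_NS = 6.0          # coincidence time window (typically 3 to 12 ns)
E_LOW, E_HIGH = 430.0, 650.0  # energy acceptance window in keV

def find_coincidences(singles):
    """Pair single events (detector_id, time_ns, energy_keV) into coincidences:
    both photons must pass the energy window, and their arrival times must
    differ by less than the time window."""
    accepted = [s for s in singles if E_LOW <= s[2] <= E_HIGH]
    accepted.sort(key=lambda s: s[1])          # order by arrival time
    lines_of_response = []
    for a, b in zip(accepted, accepted[1:]):   # compare time-adjacent events
        if b[1] - a[1] < TIME_WINDOW_NS:
            lines_of_response.append((a[0], b[0]))
    return lines_of_response

singles = [(12, 0.0, 511.0), (87, 2.1, 505.0), (40, 50.0, 300.0)]
print(find_coincidences(singles))  # [(12, 87)]; the third event fails the energy cut
```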

A PET camera is made by arranging detectors in a cylindrical geometry, as depicted in Figure 23-3, with all detectors continually monitoring for photons. The main advantage of PET imaging over SPECT is the vastly increased count rate capability: all detectors in the PET ring monitor for events continuously, whereas a gamma camera uses a lead collimator and therefore detects only events along certain projections at any given time.

There are several types of events that can be recorded in a PET scanner. The example shown in Figure 23-1 is called a true event: a positron-electron annihilation generates two photons that travel in opposite directions and are both recorded. These are the raw data needed to reconstruct an accurate image. Unfortunately, collecting data as described also results in the recording of other types of events. It is possible for photons from two different positron annihilations that by random chance happen to decay within a few nanoseconds of each other to be detected in separate detectors within the time window. This situation is depicted in Figure 23-3. When this happens, there is the potential for incorrectly assuming that radioactivity is present between the two responding detectors. This type of event is called a random event because the two detectors involved and the time between detection of the two photons are both completely random. Random events add a uniform background to the primary data set.

The randomness in time between the two detections can be exploited to estimate the number of random events that are confounding the primary data set. The number of random events is estimated by one of two methods. The first method is the “delayed window” method. In this technique, a second data set is simultaneously acquired that includes only random events.1 The second method probabilistically calculates the number of expected random events based on the count rates in each of the detectors. In either case, the estimate is subtracted from the primary data set to produce a random corrected data set.
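
As a minimal sketch of the second (rate-based) method, the expected random rate on a given line of response follows from the singles rates of its two detectors and the width of the time window; the 2·τ·S1·S2 form is the standard approximation, and the rates below are illustrative.

```python
def randoms_rate(s_i, s_j, tau_s=6e-9):
    """Expected random coincidence rate (events/s) on one line of response:
    approximately 2 * tau * s_i * s_j for singles rates s_i, s_j (counts/s)
    and coincidence window tau (s)."""
    return 2.0 * tau_s * s_i * s_j

# Example: two detectors each recording 50 kcps with a 6-ns window.
print(randoms_rate(50e3, 50e3))  # ~30 random events/s on this line of response
```

Note that the estimate scales linearly with the window width, which anticipates a point made later: halving the time window halves the number of random events.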

A multiple event is a combination of a true event and a random event, as shown in Figure 23-4. By random chance, a third photon falls within the time window of a true event. When this occurs, it is unknown which pair of detectors represents the true event and which pair represents the random event. In all cases, one of the potential lines of response can be eliminated because it does not pass through the imaged field of view. This leaves two lines of response, one true and one random. Both events are recorded, and on average the random component is removed by the randoms correction described earlier. Recording both events and later correcting for the random yields additional information that can be used to reconstruct the image, as opposed to the archaic practice of discarding multiple events entirely because it cannot be known which two detectors recorded the true event.

FIGURE 23-4 Two other types of events that can occur during a PET acquisition. The straight line represents a true event as depicted in Figure 23-1. The line with the x indicates an attenuation event. One of the photons heading toward the detectors is attenuated by the body, so a coincidence event could not be recorded. This leads to an underestimate of the amount of activity in the body. This is fixed by the attenuation correction derived from the collected CT scan. If these two events happen to occur at the same time, the event is termed a multiple. In this case, it is not clear from the collected data if there is activity along the dotted or solid line. The data set is corrected for multiple events during the correction for random events.

Photon attenuation by the patient’s body leads to potential underestimation of the amount of activity in the patient. This is also depicted in Figure 23-4. Keep in mind that unless both photons are detected, no event can be recorded. It is the total amount of tissue that both photons must traverse that determines the probability of attenuation. Note that the probability of an event’s being attenuated is greater if the line of response traverses the center of the patient. On the other hand, some of the events originating at the edge of the body can reach the detectors after traversing only a small amount of tissue. Because of this, more events that originate at the center of the body are attenuated compared with the edge of the body. If this is not taken into account when images are reconstructed, the center parts of the image will be depressed (Fig. 23-5).

A scatter event is when one of the photons scatters in the patient so that the line of response between the two responding detectors does not include the location of the event (Fig. 23-6). Essentially all scattering of importance to PET is photons scattering off free electrons, called Compton scattering. When a photon scatters, it transfers some of its energy to the electron. The greater the scattering angle, the greater the energy transfer. A 511-keV photon that scatters by 30 degrees is reduced in energy to 450 keV, approximately the lower level threshold used in setting up PET detectors. Hence, if one or both of the annihilation photons scatter by less than 30 degrees, they can be recorded by the PET system. With this much scattering, a recorded event with a line of response that passes near the center of the scanner could be off by up to 10 cm. Scattering by small angles is more probable than by larger angles, so scatter affects PET images by reducing resolution and contrast.
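
The energy lost in a Compton scatter follows directly from the scattering angle, so the 30-degree figure quoted above is easy to check. A minimal sketch (the function name is ours):

```python
import numpy as np

def scattered_energy_keV(theta_deg, e0_keV=511.0):
    """Compton formula: E' = E0 / (1 + (E0 / 511 keV) * (1 - cos(theta))).
    For 511-keV annihilation photons this reduces to 511 / (2 - cos(theta))."""
    theta = np.radians(theta_deg)
    return e0_keV / (1.0 + (e0_keV / 511.0) * (1.0 - np.cos(theta)))

print(round(scattered_energy_keV(30.0)))  # ~451 keV, just above a 430-keV threshold
```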

The final type of event that needs to be considered for cardiac imaging is called a prompt gamma and is shown in Figure 23-7. This is a property of 82Rb decay that is not present with 18F. When a rubidium nucleus decays, it converts to a krypton nucleus by giving off its charge in the form of a positron. A significant fraction of the time, the krypton is left in an excited state and almost immediately gives off an additional gamma ray. This situation looks very much like the multiple event depicted in Figure 23-4, but there is a significant difference: the prompt gamma event is not random. The annihilation photons and the prompt gamma are all generated from a single decay process, so the true and apparently random events are correlated in time; that is, they happen at the same time. (Actually, the prompt gamma can be delayed by a few picoseconds, but this extremely small time is insignificant compared with the duration of the time window.) In a multiple event, by contrast, the random coincidence is equally likely to occur at any time.

FIGURE 23-7 Prompt gamma event. This type of event can occur in scanning with rubidium (Rb). Rubidium decays to an excited state of krypton (Kr), which gives off its excess energy in the form of a photon. If all three of these photons are detected, the event has the appearance of a multiple event (see Fig. 23-4). The difference is that one decay precipitated all three photons, as opposed to the two decays that just happened to occur at the same time in Figure 23-4. Hence, there is no randomness in the timing between these photons, so a conventional correction for random events will not remove them from the data set. A separate prompt gamma contamination estimate must be performed to account for these events before an image is reconstructed.

PET Detectors

All of the different types of events need to be accounted for in image reconstruction. The first defense against the confounding types of events is the detector itself. There are several different types of detector materials (Table 23-1). The three main characteristics for the detector material in PET are the stopping power, energy resolution, and time resolution. The greater the stopping power, the less detector material is needed to stop one of the annihilation photons. This is important for economic and image quality reasons. The detector material is the dominating cost of a scanner, so it pays to have higher stopping power material. It can also lead to better images. If a photon is detected in a smaller detector, the line of response is better defined, which leads to better images. Finally, for detectors of equal size, the one with the higher stopping power is more likely to record the event, leading to a higher sensitivity scanner and, again, better images.

Energy resolution is important for determining whether an event is scattered or not as discussed earlier. Because no detector is perfect, some number of scattered photons will always be recorded. However, with better energy resolution, the lower level energy threshold can be increased, which reduces the maximal angle through which a photon can be scattered and still be recorded. There is a subtle point worth mentioning here. As more scattered photons are rejected, the number of recorded events decreases. Because almost all photons are scattered to some degree, a perfect scatter rejection would result in very few events being recorded, which would result in very poor images. The scattered events do carry information as to the location of the source of activity. Hence, it is beneficial always to accept some level of scatter. As computer power and the scatter estimation and image reconstruction routines improve, more scatter events should be included in the data set.

Finally, the time resolution of the detector is important because it affects the number of random events that are recorded. The better the time resolution, the smaller the time window, and the fewer random events will enter the data set. Random events truly are random and therefore are equally likely to occur at any time. Therefore, a detector that permits a time window half as large will result in a data set that has half the number of random events.

The choice of detector materials involves a tradeoff. Generally speaking, more detected events lead to better images. The number of detected events is greatly increased if the scanner is operated in three-dimensional mode. Unfortunately, the number of confounding events (randoms and scatter) increases along with the number of good (true) events. At some point, the detriment of dealing with the bad events outweighs the benefit of collecting more good events. This tradeoff will be discussed later, after a short discussion of three-dimensional imaging, which greatly increases the counts in PET imaging.

In many of the preceding diagrams, it appears as if the PET scanner is a ring of detectors within a single plane. Historically, this was the case; a volumetric PET scanner was built by stacking a set of independent detector rings. At some point, it was realized that if the detected events are limited to those occurring only within a plane, the number of detected events will be many times less than it could be. On the other hand, it takes much more computer memory and processing power to reconstruct images when nonplanar events are also included. The computing threshold was passed around 1999, and in modern scanners, the acceptance of events is opened up so that any pair of detectors can record an event. This is depicted in Figure 23-3.

Opening up the scanner has advantages and disadvantages. The advantage is collecting many more counts. When you collect more counts, you have better statistics, and the images look much better. On the other hand, many more random, scatter, and prompt gamma events are also recorded. Consider a scatter event. It might stay within a plane of detectors or it might scatter outside the plane. Because the plane is very thin, the overwhelming probability is that it will scatter out of the plane. If you have only a two-dimensional scanner, these scatter events will not become part of the data. However, if the scanner is operating in three-dimensional mode, scatter into neighboring planes will be recorded. Because of this, the fraction of recorded events that are scattered in a three-dimensional scanner can approach 50%. Hence, a robust and accurate scatter correction must be performed. The same consideration can be applied to the detection of random events. If there is a possibility of detecting events between any pair of detectors anywhere within the scanner, it is much more likely that a random event will be recorded. In many imaging situations, when the time window can be reduced to approximately 6 ns or less, the advantage of collecting more events outweighs the disadvantages. For this reason, scanners that are made with the faster, better time resolution detectors are those that principally operate in three dimensions. The quantitative technique for calculating how these unwanted events affect image quality is called noise equivalent counting.
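
A minimal sketch of the noise-equivalent-count formula (the factor k is a convention: k = 2 is often used when randoms are subtracted with the delayed-window method because the subtraction itself adds noise, and k = 1 when a low-noise randoms estimate is used):

```python
def noise_equivalent_counts(trues, scatter, randoms, k=2):
    """NEC = T^2 / (T + S + k*R): the number of ideal (true-only) counts that
    would give the same signal-to-noise ratio as the contaminated data set."""
    return trues ** 2 / (trues + scatter + k * randoms)

# Example: a 3D acquisition in which scatter approaches the level of the trues.
print(noise_equivalent_counts(trues=1.0e6, scatter=8.0e5, randoms=4.0e5))
```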

Conversion of CT Images to PET Attenuation Maps

Photon transmission through a dense body can be expressed in terms of the linear attenuation coefficient µ (cm⁻¹); µ is a function of the photon energy and the electron density of the material traversed. Photon attenuation at CT diagnostic energies is dominated by the photoelectric effect and Compton scattering. Measured linear attenuation coefficients in CT imaging are determined with a continuous photon spectrum composed of bremsstrahlung and characteristic x-rays ranging from approximately 10 keV to the peak x-ray tube potential. Reconstructed CT attenuation values are expressed relative to water and termed Hounsfield units: HU = 1000 × (µ − µ_water)/µ_water. PET imaging occurs at 511 keV, where photon attenuation is dominated by Compton scattering. Therefore, CT data cannot be used directly to correct for attenuation of PET emission data. Instead, CT data are converted to 511-keV linear attenuation coefficients by segmentation or direct scaling.

Segmentation takes advantage of there being only a few primary tissue types in the field of view, such as bone, tissue, and lung. CT numbers (Hounsfield units) that fall within one of these groups are replaced with the appropriate linear attenuation coefficient at 511 keV. The advantage of this technique is that it reduces variation and noise in the image. The disadvantage is that it does not permit interindividual or intraindividual variation in the coefficients. It also forces all tissues into one of the segmented types. This method is now rarely used.

Direct scaling assumes a linear relationship between CT and PET attenuation. This is a good assumption in low-density tissues such as lung and soft tissue. Bone is an exception because its CT attenuation is dominated by photoelectric contributions, so the linear relationship depends on the effective energy of the CT scan. An appropriate calibration curve therefore needs to combine two or more linear segments that cover the range of attenuation commonly found in the body. A bilinear relationship is a common conversion technique used in PET/CT scanners (Fig. 23-8).
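
A minimal sketch of such a bilinear conversion. The break point and bone slope below are illustrative placeholders, not vendor calibration values; in practice the curve depends on the CT tube potential (kVp):

```python
import numpy as np

MU_WATER_511 = 0.096  # cm^-1, linear attenuation coefficient of water at 511 keV

def hu_to_mu511(hu, bone_slope=5.0e-5, break_hu=0.0):
    """Bilinear conversion of CT numbers to 511-keV attenuation coefficients.
    Below the break (air through soft tissue), mu scales directly with HU;
    above it (bone), a shallower slope is used because the photoelectric
    contribution that inflates bone HU at CT energies is absent at 511 keV."""
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)
    bone = MU_WATER_511 + bone_slope * hu
    return np.where(hu <= break_hu, soft, bone).clip(min=0.0)

print(hu_to_mu511([-1000.0, 0.0, 1000.0]))  # air ~0, water 0.096, bone 0.146
```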

CT data are collected at higher resolution (typically 1- × 1- × 1-mm voxels) than are PET data (typically 6- × 6- × 6-mm voxels), requiring the converted CT image to be down-sampled to the PET image matrix size and smoothed with an appropriate kernel to match the PET resolution (Fig. 23-9). The attenuation map data are then used to correct the emission data.
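
A minimal sketch of this resampling step using scipy; the matrix sizes, zoom factor, and smoothing width are illustrative, and the kernel would in practice be chosen to match the measured PET resolution:

```python
import numpy as np
from scipy import ndimage

mu_ct = np.random.rand(512, 512)               # stand-in for a converted 511-keV map, 1-mm grid
mu_pet_grid = ndimage.zoom(mu_ct, 1.0 / 6.0)   # down-sample 1-mm -> 6-mm voxels
mu_smoothed = ndimage.gaussian_filter(mu_pet_grid, sigma=1.0)  # illustrative kernel
print(mu_smoothed.shape)                       # (85, 85), the coarser PET matrix
```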

FIGURE 23-9 CT image collected at 120 kVp (A), converted to 511-keV linear attenuation coefficients using the bilinear curve in Figure 23-8 (B), and down-sampled to the PET matrix size and smoothed with a 6-mm gaussian filter (C). The image in C is used to correct PET data for attenuation.

Attenuation Mismatch

Artifacts can arise from improper registration between the transmission and emission scans. The causes of such artifacts fall into three primary groups: (1) motion of the patient, such as large rigid-body movements, which may occur during or between scans; (2) breathing motion, resulting from the mismatch in temporal resolution between the CT attenuation correction (acquired in less than one breath cycle) and the emission scan (acquired over many breath cycles); and (3) drift of the thoracic cavity contents, such as that induced by the administration of pharmacologic stressing agents and other factors leading to a shift in the heart’s position within the thoracic cavity. Each of these sources is discussed below.

Drift of Thoracic Contents

Drift of the thoracic contents, such as slow continuous movement of the heart, occurs in response to changes in the patient’s state, such as changes in lung volume after the introduction of a pharmacologic stressing agent.2 Misregistration caused by this mechanism is commonly observed as cardiac uptake overlying the CT lung field (Fig. 23-13). Furthermore, even in the presence of good respiratory-averaged transmission data, drift of the thoracic contents remains prevalent and is, along with motion of the patient, a main factor leading to registration errors in approximately one quarter of clinical perfusion studies. Given the nature of its mechanism, this drift often cannot be accounted for by altering the transmission protocol and therefore requires that a post-reconstruction image registration method be available.

Attenuation Correction Protocols

Correction schemes to address the motion problem in the thorax have concentrated on gating techniques in the PET/CT acquisition and on blurring or averaging of the transmission data.4 The first method uses either prospective (sinogram mode) or retrospective (list mode) gating of the respiratory and cardiac cycles. The respiratory cycle is normally monitored by use of a bellows, chest band, or infrared tracking system, and its quasi-sinusoidal phase is divided into a predetermined number of bins, commonly 10. Monitoring of the cardiac cycle is performed with an electrocardiograph; the phase between successive R waves is similarly divided into a preset number of bins, commonly eight. The data can then be binned into a two-dimensional histogram (as sketched below) and reconstructed into separate image volumes of any cardiac-respiratory phase combination that matches the CT phase collected. The disadvantage of this technique is that the collected prompt events are distributed into many separate images (~80), and reconstruction of a single image results in poor quality because of the low number of counts. Therefore, multiple gates are often added together to improve image quality at the sacrifice of motion-free image information. These multiple gating techniques are achievable on PET/CT systems, but there are limited software resources capable of efficiently processing these events.
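
The dual binning itself is simple bookkeeping. A minimal sketch, assuming each list-mode event has already been tagged with its respiratory and cardiac phases in [0, 1) (the event layout is hypothetical):

```python
import numpy as np

N_RESP, N_CARD = 10, 8  # common bin counts noted above

def bin_events(events):
    """Bin phase-tagged list-mode events into a 2D (respiratory, cardiac)
    histogram; each of the ~80 cells becomes a separately reconstructable gate."""
    counts = np.zeros((N_RESP, N_CARD), dtype=np.int64)
    for resp_phase, card_phase in events:
        r = min(int(resp_phase * N_RESP), N_RESP - 1)
        c = min(int(card_phase * N_CARD), N_CARD - 1)
        counts[r, c] += 1
    return counts

events = [(0.05, 0.90), (0.05, 0.92), (0.51, 0.10)]
print(bin_events(events).sum(axis=1))  # events per respiratory bin
```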

A second approach to matching PET emission and CT transmission data is blurring or averaging of the CT data to match the averaged nature of the PET study. This approach is referred to as time-averaged CT and has been explored more extensively because the protocols employed are used routinely in stand-alone cardiac CT units. Time-averaged CT protocols have been proposed in place of a breath-hold because they permit the patient to be scanned under free-breathing conditions (see Fig. 23-12B&C). The motivation is that the free-breathing state provides a more accurate representation of attenuating structures present in the emission examination. One method for obtaining a time-averaged CT scan is use of a low-pitch helical protocol whereby data are collected at a pitch of 0.5 or lower. This approach increases the axial sampling, which suppresses motion artifacts and results in blurring when linear interpolation algorithms are used in the reconstruction. In this case, the cardiac and respiratory phases are spread along the axial direction (see Fig. 23-12B).5 A second method is collecting an average CT by successive cine mode acquisitions (also referred to as sequential), whereby multiple images are collected over one or more breath cycles at a single bed position. The table is then stepped to the next position, and acquisition resumes. This sequence is repeated until the entire chest cavity is covered. The multiple image data are then averaged at each bed position and interpolated to the PET slice thickness for attenuation correction (see Fig. 23-12C).6

PET and CT Coregistration

Drift of the thoracic contents and motion of the patient cannot be fully corrected by altering the CTAC protocol and thus require a post-imaging correction method to align the transmission and emission data. Registration packages are becoming standard on PET/CT cardiac systems and employ a six-parameter (three translations, three rotations) rigid-body transformation of the CTAC image to match the emission data. The process of coregistration has not yet been automated; therefore, it remains a source of variability introduced by individual user preferences in subjectively assessing alignment quality.
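
A minimal sketch of applying such a six-parameter transform to a CTAC volume with scipy. The helper name and parameter values are ours, and sign/ordering conventions vary between implementations; in a clinical package the parameters would come from the manual or automated alignment step:

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.spatial.transform import Rotation

def rigid_transform_ctac(ct_volume, rot_deg=(0, 0, 0), shift_vox=(0, 0, 0)):
    """Apply a rigid-body transform (three rotations, three translations)
    to a CTAC volume so that it matches the emission data."""
    rot = Rotation.from_euler('xyz', rot_deg, degrees=True).as_matrix()
    center = (np.array(ct_volume.shape) - 1) / 2.0
    # affine_transform maps output coordinates to input coordinates:
    # x_in = R @ x_out + offset, with rotation taken about the volume center.
    offset = center - rot @ center - np.asarray(shift_vox, dtype=float)
    return affine_transform(ct_volume, rot, offset=offset, order=1)

vol = np.zeros((64, 64, 64)); vol[30:34, 30:34, 30:34] = 1.0
aligned = rigid_transform_ctac(vol, rot_deg=(0, 0, 5), shift_vox=(2, 0, 0))
```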

It is recommended that the emission study be reconstructed without corrections to reduce the likelihood of mistaking a low-count region for an anatomic edge. Variability in the coregistration can be further reduced by implementing a quality control procedure. A straightforward method is to count the number of myocardial uptake pixels that overlie the left lung of the CTAC image. Then, as the CTAC image is aligned to the myocardial uptake in the emission images, the number of emission myocardial pixels in the CT left lung field can be monitored (see Fig. 23-13).7
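
This pixel count is straightforward to script. A minimal sketch, assuming binary masks on a common grid; the uptake threshold and names are illustrative:

```python
import numpy as np

def lung_overlap_count(emission, left_lung_mask, uptake_frac=0.5):
    """Count emission pixels with myocardial-level uptake that fall inside
    the CT left-lung mask; as registration improves, this count should
    approach zero."""
    myocardium = emission > uptake_frac * emission.max()
    return int(np.count_nonzero(myocardium & left_lung_mask))

emission = np.zeros((128, 128)); emission[60:70, 50:60] = 1.0
lung = np.zeros((128, 128), dtype=bool); lung[60:70, 55:75] = True
print(lung_overlap_count(emission, lung))  # 50 overlapping pixels before alignment
```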

A disadvantage of the rigid-body transformation method is the concomitant relocation of highly attenuating structures. In the case of drift of the thoracic contents, bony structures such as the spine are repositioned to an area that does not correspond to spine in the emission study. This leads to streaking artifacts discussed in the section on quality control. Therefore, an alternative to the rigid-body registration method is needed. One proposed technique is an emission-driven approach wherein pixels in the left CT lung field that overlap with myocardial uptake in the emission image are reassigned values corresponding to soft tissue.3

Scatter Correction

The presence of scatter in the data set leads to images with reduced resolution and contrast. Scattered events add to the background, but because small-angle scattering is more probable, scatter adds more events to the lines of response that pass through the central areas of the body. For example, an image that contains scatter would have greater apparent activity at the center. On the other hand, if the scatter correction is overzealous, there will be a decrease in apparent activity near the center. This situation happens if prompt gammas are not properly handled as described later. The fraction of scattered events in three-dimensional scanners can approach 50%. Hence, scatter correction is a substantial modification to the primary data set. If an image contains photopenic regions that do not correspond to expected anatomy, it is wise to investigate the scatter correction. A comparison of images with and without the correction is valuable for determining if there is a problem in the original reconstruction.

Scatter correction is a difficult problem because calculating the amount of scatter requires knowing the location of both the radioactivity and the scattering material. The distribution of scatterers is derived from the CT scan, but the location of the radioactivity is unknown; after all, determining it is the purpose of the scan. Hence, scatter correction is necessarily an iterative procedure.

Photons can scatter off any electron in the body, but considering every possibility is unwieldy. The calculation becomes much easier if all of the scattering is considered to occur from a single scattering source. The most common simplification is to assume that the scatter always takes place at the midpoint of the particular line of response8; with this assumption, it is straightforward to calculate the number of scatter events that would be detected in neighboring lines of response, and an iterative process can be used to reduce the effect of scatter in the final images.

There is a fundamental difficulty with this algorithm for scatter correction. Some of the scattering takes place outside the field of view, where there is no information about the scatter sources. Hence, the calculation, even in principle, cannot be done without simplifying assumptions. Scatter events that originate outside of the field of view tend to contribute a uniform or very slowly varying background throughout the field of view. The following algorithm is often used to account for scatter from unknown sources. The scatter calculation is performed as described before for the known scatterers. This sets the shape of the scatter background. The calculated distribution is then linearly scaled so that it matches the number of events detected outside of the patient. This assumes that after random events correction, the only events outside of the patient are from scatter. The critical point here is that practical scatter correction is a two-step process: the first step estimates the shape of the scatter distribution, and the second scales it to match the data outside the patient.
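
A minimal sketch of the two-step correction; the array and mask names are ours, and production implementations fit the tails per sinogram plane rather than with a single global factor:

```python
import numpy as np

def scale_scatter_to_tails(measured, scatter_model, patient_mask):
    """The model fixes the *shape* of the scatter background; a scale factor
    is then chosen so the model matches the randoms-corrected counts recorded
    outside the patient, where only scatter should remain for most nuclides.
    (With 82Rb, prompt-gamma events also populate these tails and bias the
    factor high unless they are estimated and removed first.)"""
    tails = ~patient_mask
    scale = measured[tails].sum() / scatter_model[tails].sum()
    return measured - scale * scatter_model

measured = np.ones((64, 64)); model = np.full((64, 64), 0.5)
inside = np.zeros((64, 64), dtype=bool); inside[16:48, 16:48] = True
corrected = scale_scatter_to_tails(measured, model, inside)  # tails driven to zero
```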

A difficulty is encountered when scanning 82Rb. Recall that 82Rb often emits a prompt gamma that looks like a random event when it is detected. Further, the standard randoms correction using the delayed window will not correct for this type of event because it never occurs in the delayed window. This is important in thinking about the scatter correction. With other PET nuclides, after the randoms correction, the only type of event that is seen outside the body is scatter, which is why the scaling step is valid. However, in the 82Rb case, there are also apparent random events outside the body. Therefore, if the scatter distribution is scaled to match the background, it will be made artificially high. This leads to overestimation of the amount of scatter in the body. Subtracting this supposed large contribution leaves behind photopenic areas. Often, these photopenic areas overlap the heart and could be read as a defect, leading to a false-positive study result. See the later section on quality control for figures showing this effect.

Scatter correction remains an area of active investigation. As computer power continues to increase, more and more sophisticated algorithms can be used to estimate and to correct for scatter. On first consideration, most people consider scattered photons to be contaminants that should be discarded from the data set. However, the scattered photons carry information about the location of activity. This is because scatter by small angles is more likely than by large angles. So, the line of response from a scattered event is likely to pass near the actual source of activity. An algorithm that makes use of this fact could potentially produce better images. Currently, state-of-the-art reconstructions model the scatter and subtract it from the primary data set. Rather than discarding the information in these events, it is likely that reconstruction algorithms in the future will take advantage of the information to produce images with better resolution and contrast.

Image Reconstruction

There are two broad approaches to reconstruction of images from PET data. One is a direct calculation of the image from the data, and the other is an iterative approach that calculates successive approximations that lead to the final image. The direct approach is called filtered backprojection (FBP). Historically, FBP has been used because it requires fewer computing resources and the algorithm is fast. On the other hand, several assumptions must be made about the data for the calculation to be performed. The implications of these are discussed later. The most common iterative approach is called ordered subsets expectation maximization or OSEM. Iterative methods have the advantage of being able to more appropriately incorporate the physics of the decay and detecting device. There are two main disadvantages: it takes longer, and because it is a series of successive approximations, each getting closer to the “true” image, it is very difficult to know when to stop the iterations.
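
For orientation, the full FBP pipeline (forward projection to a sinogram, then filtered backprojection back to an image) can be demonstrated with scikit-image on a synthetic phantom. This is a toy illustration, not a clinical reconstruction:

```python
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((128, 128))
phantom[40:60, 50:80] = 1.0                      # a simple "hot" structure
angles = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(phantom, theta=angles)          # forward projection
fbp_image = iradon(sinogram, theta=angles, filter_name='ramp')  # FBP reconstruction
```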

FBP requires that the recorded data be equivalent (except for a simple shift in position) no matter where the activity is placed in the scanner. In general, this is not the case. Photons from the center of the field of view enter the detectors perpendicular to the front surface. On the other hand, photons from the edge can impinge on the detectors with a high angle of incidence and so have a high probability of passing through one detector and interacting with its neighbor. (This is generally called the depth of interaction problem in PET.) More detectors are involved when activity is at the edge of the field of view, so the assumption required by FBP is not fully justified. FBP has other assumptions that are not fully justified. FBP is built on equations derived mathematically by assuming a perfect set of data. It does not consider the statistical nature of radioactive decay: the actual number of detected events varies from scan to scan because of inherent randomness. Also implied in assuming a perfect data set is that it consists of only true events. This means that randoms, scatter, and attenuation must all be corrected (perfectly) before reconstruction. Finally, the perfect data set implies that there are no gaps in the data. All practical scanners have gaps between separate detectors. Often, there are gaps between individual detectors that are assembled into detector modules. These modules are assembled into the scanner ring, leaving different-sized gaps at their edges. The more these assumptions are violated, the greater the negative impact on the final image.

The most visible breakdown of the assumptions in FBP is streaking in the images. These streaks arise primarily because the data are not perfect: some lines of response will have an excess number of recorded events, and some neighbors will have too few, simply through the random nature of radioactive decay. Because of this, certain lines of response will project too much activity and other lines too little. There are two situations in which this is particularly apparent. Outside the imaged object, the process can produce both positive and negative streaks; most often, negative pixel values are simply ignored. The other situation is in the vicinity of very hot structures next to relatively cool structures. The bladder is such a case in normal fluorodeoxyglucose (FDG) imaging. The lines of response that pass through the bladder generally have many times more recorded events than do neighboring lines of response, so any error in their detected counts has a large effect on the neighboring lines. Because of the random nature of the decay process, the calculated number of events to project will never be exactly correct. In this situation, the errors appear as streaks and are readily apparent in nearby structures that contain little radioactivity.

Iterative reconstruction methods do not have the FBP difficulties resulting from violated mathematical assumptions. However, they have drawbacks of their own, which are discussed after an introduction to the technique. An iterative routine starts with a guess of the image. Assuming that the guess represents the activity distribution in the field of view, the scanning process is simulated to produce a simulated data set. The simulated data set is compared with the actual data set, and differences are noted. Where the simulated data are too large, the corresponding pixels in the guess image are reduced, and vice versa, to produce the next estimate of the true image (Fig. 23-14). The process is repeated many times until the simulated and measured data sets are the same or as close as possible. In the literature, there is considerable discussion about how many iterations should be performed. Because the statistics of radioactive decay are well known to be Poisson distributed, the simulated data set can be compared with the actual data set in a probabilistic sense: the probability that the measured data set could have arisen from the guessed image distribution is calculated. Iterations can be continued as long as this probability continues to increase. During the first few iterations, the probability increases rapidly; after many iterations, the incremental increase is much smaller. Hence, from a practical point of view, images are generally reconstructed with a fixed number of iterations.
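
A minimal MLEM sketch of this guess-simulate-compare-update loop; a toy system matrix stands in for the full scanner model, and OSEM applies the same update to ordered subsets of the data:

```python
import numpy as np

def mlem(system_matrix, measured, n_iters=20):
    """MLEM: forward-project the current guess ("simulate scanning"), compare
    with the measured data, and scale each image pixel up or down accordingly."""
    image = np.ones(system_matrix.shape[1])          # initial guess
    sensitivity = system_matrix.sum(axis=0)          # per-pixel normalization
    for _ in range(n_iters):
        expected = system_matrix @ image             # simulated data set
        ratio = measured / np.maximum(expected, 1e-12)
        image *= (system_matrix.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return image
```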

The advantage of the iterative reconstruction procedure is that the physics of scanning is more realistically built into the algorithm. This includes the scattering process, any gaps in detector spacing within the scanner, variation in resolution across the field of view, and the noise inherent in radioactive decay. These processes are all part of the “simulate scanning” box in Figure 23-14. The disadvantage is the time required to reconstruct the image. As a matter of practical implementation, the physics built into the reconstruction algorithm is not the best available. As discussed before, the scatter estimation involves assuming that all scattering takes place at the midpoint of each line of response. Another problem area is the geometry of the lines of response. Photons in different lines of response interact differently with the detectors, depending on whether the line of response is normally incident or incident at an angle with respect to the detectors. To make the calculation faster, it is usually assumed that all lines of response have the same interaction with the detector. These are examples of problems with the practical implementation; in principle, the iterative reconstruction can be much more accurate. As computing power has increased over the years, the physics built into the reconstruction algorithm has become more sophisticated. For example, including the difference in detection response as a function of the geometry of the detectors and line of response is becoming more common.

As computing power continues to improve, the algorithms will incorporate even more of the physics of the scanning process, and there is little doubt that iterative reconstruction will come to dominate all image reconstruction. Currently, iterative reconstruction is superior in low-count studies, in which the noise is proportionately higher and the perfect-data assumption of FBP is more severely violated. Iterative reconstruction, with its proper handling of the noise in radioactive decay, is superior in these imaging situations; this is why nearly all whole-body FDG imaging is reconstructed with an iterative routine. Cardiac imaging is generally not in this regime because the count rate is much higher. For this reason, the reconstruction method used at different centers is divided between FBP and iterative reconstruction. When there are sufficient counts, as there are in both 82Rb and FDG cardiac imaging, the assumption of perfect data is much more reasonable for the FBP reconstruction algorithm. When this is the case, it may be that the difficulty of knowing how many iterations to perform is a dominant factor in choosing the algorithm. From a practical point of view, it is very difficult to detect any difference between the two reconstruction methods for cardiac imaging. Because of this, almost all centers choose their method for historical reasons or some other preconceived bias. There is little question that in the future, iterative reconstruction will be the method of choice.

TRACER KINETICS

In this section, the properties of rubidium for measuring flow and of FDG for measuring glucose metabolism are discussed. Both of these tracers have unique properties that influence how they interact with the tissue and permit specific physiologic parameters to be determined.

Rubidium 82

Rubidium is a microsphere-like tracer. Microspheres are radiolabeled molecules or physical spheres that will flow through arteries but are trapped in capillaries. In a microsphere experiment, the microspheres are injected into the left atrium or ventricle, and they are distributed throughout the body by the circulatory system. Areas that get more blood flow get more microspheres. Because they are trapped in the capillaries, subsequent determination of the number of microspheres in a volume of tissue is a measure of the amount of blood that flows into that tissue. Hence, if one could measure the number of microspheres per volume and make a relative image, it would depict a distribution proportional to the amount of blood flowing into the tissue at each pixel location.

Rubidium acts like a microsphere because after it crosses the capillary membrane, it becomes charged. Once it has a positive ionic charge, it will not re-cross the capillary membrane to return to the blood supply. A fixed amount of rubidium is generally injected into an arm vein. It circulates through the heart, to the lung, back to the heart, and then to the rest of the body including the coronary arteries. Rubidium that enters the coronary arteries eventually passes to capillaries, where it can cross the membrane into the tissue, change charge state, and become trapped. The amount that crosses the membrane depends on the amount that is delivered, which is by definition blood flow. So, an image of the rubidium distribution is very highly correlated with the amount of blood flowing into each gram of tissue.

Unfortunately, rubidium is not a perfect microsphere analogue. Some of the rubidium will stay in the capillary and pass directly through the tissue. Also, the change in charge state is not instantaneous after it crosses the capillary membrane. In the time before it becomes charged, some rubidium will re-cross the capillary membrane, re-enter the blood supply, and be carried away, never to be seen again. Because of this, the amount of rubidium trapped is less than one would expect if it behaved truly as a microsphere. The situation is worse at high flow: when blood is traveling faster, the tracer spends less time in the capillary and therefore has less chance to cross the capillary membrane. That is, the greater the flow, the greater the underestimate of that flow as measured by rubidium. Search the Internet for “82Rb flow-dependent extraction” for more details.
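
This flow-dependent extraction is often described with a Renkin-Crone-type model, in which the single-pass extraction fraction falls as flow rises. A minimal sketch; the permeability-surface (PS) value is an illustrative placeholder, not a measured 82Rb constant:

```python
import numpy as np

def extraction_fraction(flow, ps=1.5):
    """Renkin-Crone model: E = 1 - exp(-PS/F), with flow F and the
    permeability-surface product PS in mL/min/g."""
    return 1.0 - np.exp(-ps / flow)

for f in (0.5, 1.0, 2.0, 4.0):  # rest through peak-stress flows (mL/min/g)
    print(f"flow={f:.1f}  E={extraction_fraction(f):.2f}  "
          f"net uptake={f * extraction_fraction(f):.2f}")
# Net uptake rises much more slowly than flow, so high flows are underestimated.
```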

Many investigators have taken this behavior into account when using rubidium to measure myocardial blood flow in research studies.9–11 In general, in these types of studies, a group of patients with a particular disease or in a particular flow state are scanned and then averaged together. These patients are then compared with another group of patients to determine if there is a difference in flow. With this technique, researchers have found that there are flow-dependent effects caused by different drugs and different lifestyles. See, for example, references 12 to 15.

An area of current research is to determine whether this technique is valid and clinically useful in individual patients. Because of the difficulties mentioned, the calculation of absolute blood flow carries a fairly large error. If absolute blood flow could be reliably determined in individual patients, it could have a large impact on the assessment of coronary artery disease: it would make identification of triple-vessel disease possible, and it would permit quantitative longitudinal studies of individual patients. In this way, the effectiveness of either medical or invasive treatment could be better monitored.

Fluorodeoxyglucose

FDG is used to measure carbohydrate metabolism in myocardial tissue. This is only one of the energetic pathways used by the heart (fatty acid metabolism is another), so depending on the state of the heart, FDG may not measure the total energy budget. For information on estimating fatty acid metabolism with PET, search the Internet for “myocardial fatty acid PET.” FDG has several unique properties that make it ideally suited to measurement of carbohydrate metabolism. The critical ones are that (1) the extraction across the capillary membranes during a single pass is relatively small; (2) the clearance of FDG from the body through the kidneys is relatively slow; (3) when FDG enters the metabolic pathway, it is trapped in cardiac tissue (nearly) irreversibly; and (4) FDG that does not enter the metabolic pathway re-enters the circulatory system and is eventually cleared from the body by the kidneys. In some sense, FDG acts like a microsphere because it is irreversibly trapped, but the trapping is due to a metabolic process rather than a flow process.

The four properties of FDG are important for the following reasons. The first two lead to the uptake of FDG being flow independent. The extraction is low; typically only 10% of the FDG crosses the capillary membrane during a single pass. If the extraction were high, similar to rubidium, the amount of FDG trapped in the tissue would be more a function of how much is delivered than of how much is metabolized. Regions of high flow would get more FDG regardless of their metabolic rate. Because the extraction is relatively small, the FDG needs to remain in the body for a suitable length of time for all tissues to have equal access to FDG. The kidneys clear FDG from the body relatively slowly, permitting FDG to make many circulations through the body before it is removed. This permits all tissues to have equal access to FDG regardless of the amount of blood flow to them. On the other hand, the clearance from the body cannot be so slow that activity remains in the blood throughout the study, leading to high background. The clearance of FDG by the kidneys happens to be nicely situated between too fast to allow equal uptake and too slow to permit high-contrast images.

In a typical FDG experiment, activity is injected and becomes trapped in tissue in proportion to the glucose metabolic rate over approximately the first 15 minutes. The remainder of the activity is cleared by the kidneys during the next 15 to 25 minutes. Hence, by approximately 45 minutes after injection, there is very good contrast between the activity in myocardial tissue and the low activity in the blood, and the uptake is very highly correlated with the glucose metabolic rate.

PROTOCOLS

Prescreening, instruction and consent, preparation, and comfort are important components of a successful imaging examination that provides high medical value to the patient. The patient’s history should be reviewed before the examination to determine whether specific instructions are required or specific demographic information is missing. Information required for a successful PET/CT cardiac examination includes the patient’s height and weight, medications, dietary state, diabetic state, recent medical history pertaining to chest discomfort, and previous interventions or procedures. The following information represents the adopted protocols of the authors’ institution and reflects the specific situations and demographics of the surrounding patient population.

Dietary Restrictions

Viability Imaging

Recommended dietary restrictions for assessment of myocardial viability are similar to those detailed for perfusion imaging. Insulin-dependent patients require more rigorous monitoring to stabilize blood glucose levels. It is desirable for the myocardium to preferentially use glucose during the uptake of FDG. This is accomplished by fasting followed by a glucose loading protocol.

Preparation and Positioning of the Patient

On the patient’s arrival, the dietary requirements are reviewed and then the patient is asked to change into a hospital gown. The patient is prepared with a peripheral intravenous line placed in an antecubital vein large enough to permit the simultaneous injection of the pharmacologic stress agent and the radiopharmaceutical. Additional monitoring devices to measure baseline vital signs, such as electrocardiographic leads, blood pressure cuff, and oxygen pulse oximeter, can also be fixed to the patient.

Ensuring the patient’s comfort on the scanner bed is essential to minimize artifacts associated with motion of the patient. This care is especially relevant during administration of the pharmacologic stress agent because a second injection of the agent is typically not advisable. The patient’s arms should be placed overhead and supported by a cradle or pillows to prevent muscle fatigue. Sometimes overhead positioning is not possible; in that case, strict instructions about remaining still should be emphasized. Once the patient is resting comfortably on the table and the monitoring devices are connected, protocol instruction is given, including breathing instructions for the attenuation correction, timing of injections, administration of pharmacologic stress, and frequency of monitoring. If movement of the patient is detected early in the study, the resting acquisition can be repeated, whereas the stress portion of the study is less flexible when pharmacologic stress agents are employed.

Pharmacologic Stress

Because of the short half-lives of the perfusion radiopharmaceuticals (82Rb, 75 seconds; 13N, 10 minutes), it is logistically very difficult to perform treadmill stress and then transport the patient to the camera for PET perfusion imaging. Therefore, pharmacologic agents are preferred for the stress portion of perfusion studies and provide some benefits, including reduction in radiation exposure to the imaging staff. Some patients may require pharmacologic stress because of an inability to reach a desired level of exercise, and it potentially provides a more consistent level of stress in patient groups with physical handicaps or prior myocardial infarction. However, protocols involving drug infusion are generally more complex, requiring calculation of dosage, infusion equipment, timing, additional personnel, and continuous monitoring for adverse effects.

The two most common pharmacologic stress agents are adenosine, a direct vasodilator, and dipyridamole, which inhibits facilitated reuptake of adenosine to indirectly increase the endogenous plasma level of adenosine.17 The primary adverse effects associated with adenosine are flushing, dizziness, chest pain, and bronchoconstriction, which can be relieved by termination of the infusion or by intravenous aminophylline. Dipyridamole has similar adverse effects, but they are prolonged, and reversal often requires drug intervention.18 Its peak effect occurs within 3 to 7 minutes after infusion, with a half-life of approximately 30 to 45 minutes. The choice between the two agents generally favors adenosine because of its rapid onset of action and short half-life, lower cost per dose, and easier monitoring after study completion.19 Peak vasodilatory effects with adenosine occur after 2 minutes of infusion and return to baseline within 2 minutes after termination.20 The stress perfusion protocols discussed in this section involve the use of adenosine.

An alternative coronary vasodilator is regadenoson, a selective A2A adenosine agonist, recently approved by the Food and Drug Administration for myocardial perfusion imaging.21 Adverse effects are similar to those of adenosine but less severe in some cases because of the weak affinity for the A1, A2B, and A3 subtypes. The agent is packaged as a single dose and administered in a slow bolus (<10 seconds) at fixed concentration irrespective of the patient’s weight. Peak onset is rapid, resulting in an increase in coronary blood flow up to twice resting rate 10 to 20 seconds after injection and decreasing to resting rate within 10 minutes. The slightly longer duration at peak coronary output for this agent compared with adenosine allows a second CTAC after stress.

Rest-Stress Protocols

A number of rest-stress protocols are in use in perfusion imaging. These protocols incorporate several features primarily focused on reducing the total scan time; others are tailored to extract more specific information, such as cardiac function and blood flow. Uptake of perfusion tracers is relatively rapid; the myocardial–blood pool ratio reaches 2 : 1 as soon as 80 seconds after infusion and sometimes as late as 180 seconds (Fig. 23-15). The data acquisition is often delayed after injection to wait for the blood pool to clear and thereby to obtain high-contrast images. The American Society of Nuclear Cardiology guidelines recommend a minimum delay of 90 seconds.22 This technique streamlines the imaging protocol, allowing more consistent throughput. However, this practice can lead to loss of useful counts in patients with fast blood pool clearance or loss of image quality in patients with poor myocardial function or perfusion. In addition, the clearance time between rest and stress studies may vary. Thus, allowing a flexible delay permits more accurate comparison between the two image sets.

PET scanners are moving toward “list mode” acquisition. In this mode, the responding detectors and the time of every event are stored in a very large data file. This is another advance made possible by increased computing power, and it permits a retrospective decision as to which counts should be combined to form an image. Data collection can begin at the time of injection, and the data can be parsed later to create optimal images. Compared with traditional data collection, this flexibility results in more consistent, high-quality imaging at the cost of two additional steps in the image processing protocol. Therefore, the greatest flexibility in the study is achieved with a list mode acquisition, followed by an initial fast reconstruction of images at predefined times, then analysis and a second reconstruction with optimal timing.

Common perfusion protocols consist of the following components: (1) CT-based transmission scan at rest, (2) rest perfusion examination, (3) pharmacologic stress perfusion examination, and (4) CT-based transmission scan at stress. Two workflows are given for list mode (Fig. 23-16) and sinogram mode (Fig. 23-17) acquisition. The primary difference between the two sequences is that in sinogram mode acquisition, a second resting injection is required with electrocardiographic trigger if gated information is desired. The list mode acquisition stores the electrocardiographic triggers in the event list, which can then be binned retrospectively. This acquisition feature reduces scan time and saves radiation dose to the patient. The CTAC scan protocol is collected under free-breathing conditions to incorporate contractile and respiratory averaging (see the section on CT protocol). The extended duration of certain pharmacologic stress agents, such as regadenoson, permits the collection of an additional CTAC scan after the stress portion of the examination. Last, an optional diastolic gated CT can be incorporated to assess calcium burden by Agatston scoring. This is increasingly common for PET/CT systems sold with 64-slice or more CT scanners.

After the CTAC and calcium scoring examinations, the patient table is moved to the PET position and the infusion lines are prepared. The 82Rb injection should be administered as a slow bolus (10 to 30 seconds); prolonged infusion times will not allow the blood to clear sufficiently and will degrade image quality. Two infusions are performed with electrocardiographic monitoring: a 7.5-minute rest acquisition started at the time of infusion, followed by a 7.5-minute pharmacologic stress acquisition. For example, pharmacologic stress with regadenoson is delivered as a 10-second bolus before the stress radiotracer infusion. The data are histogrammed into a specified number of frames, and regions are drawn over the left ventricular myocardium and ventricular cavity. These regions are applied identically to all frames and plotted over time to construct a time-activity curve (Fig. 23-18). The time at which the blood pool concentration drops to one-half the myocardial concentration is identified and set as the scan delay. The list mode data are then rehistogrammed starting at the delay point to reconstruct static and gated images. This is the “optimal timing” advantage made possible by list mode data collection.
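
A minimal sketch of that scan-delay determination from per-frame region means; the curve shapes and names are illustrative:

```python
import numpy as np

def find_scan_delay(times_s, blood_tac, myo_tac):
    """Return the earliest frame time at which the blood pool activity has
    fallen to one-half the myocardial activity; the list-mode data are then
    rehistogrammed from this point."""
    blood, myo = np.asarray(blood_tac), np.asarray(myo_tac)
    ok = np.flatnonzero(blood <= 0.5 * myo)
    return times_s[ok[0]] if ok.size else None

times = np.arange(0, 240, 10)                     # 10-s frames
blood = np.exp(-np.arange(24) / 6.0)              # illustrative clearance curve
myo = 0.8 * (1.0 - np.exp(-np.arange(24) / 3.0))  # illustrative uptake curve
print(find_scan_delay(times, blood, myo))         # 70 (seconds)
```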

Before the final reconstruction, PET images can be overlaid on the CTAC to check for proper registration. Registration with the CTAC is performed by using the techniques described before for both the rest and stress data sets. The newly registered CTAC images are saved and used to correct the rehistogrammed data in the final reconstructions.

Viability Protocols

Viability of recovering or dysfunctional myocardium requires both sufficient blood flow and metabolic activity. Therefore, this protocol consists of a resting perfusion study followed by a glucose metabolism study with electrocardiographic monitoring (Fig. 23-19). As previously mentioned, the mobilization of glucose transporters is promoted by shifting the myocardium from using fatty acids to using glucose through administration of a glucose load. Plasma glucose levels must be continuously monitored to keep the myocardium in the preferential glucose state (<140 mg/dL). Patients with diabetes mellitus present a particular challenge because of their inability to reliably produce endogenous insulin and their reduced cell response to exogenous insulin stimulation. This may lead to difficulty in stabilizing blood glucose levels during uptake after infusion and during imaging. The reader is referred to the American Society of Nuclear Cardiology and Society of Nuclear Medicine guidelines on FDG viability imaging.22

QUALITY CONTROL

A well-designed quality control program monitors different aspects of the imaging instrumentation at daily, weekly, and monthly intervals. The procedures outlined here are essential for maintaining high diagnostic accuracy and for anticipating equipment failures before they compromise image quality. Scanners vary considerably in construction geometry and materials, so the quality control program should be adapted to the specific characteristics of the system and take individual manufacturer recommendations into consideration. However, data collection, storage, and reconstruction are more standardized across manufacturers, and a thorough quality control program can be developed that is independent of the hardware configuration. Quality control procedures outlined by task groups (e.g., the American Society of Nuclear Cardiology)26 and accreditation agencies (e.g., the American College of Radiology, the Intersocietal Commission for the Accreditation of Nuclear Medicine Laboratories)27,28 are general and applicable across several configurations. These procedures should be considered in developing a program for one's institution.

A thorough PET/CT instrumentation quality control program cannot anticipate all image quality problems. The patient is potentially a large source of visually complex image artifacts that may be indistinguishable from true myocardial defects. Artifacts can arise from multiple mechanisms, including patient motion, poor count statistics, poor blood pool clearance, CT image artifacts, and improper scatter correction. The largest contributor to poor image quality is patient motion that results in erroneous attenuation correction; such mismatches produce areas of artificially high and low uptake in the image.

The following section details common instrumentation quality control procedures and gives examples of image quality problems observed in patient studies.

PET/CT Quality Control

From a software standpoint, the operation of a PET/CT scanner is completely integrated, giving the sense of a single unified gantry. From a hardware standpoint, however, the PET and CT systems are completely autonomous. It is therefore crucial that a quality control program include an independent evaluation of the individual PET and CT systems as well as an evaluation of their combined use. Because of the construction of the scanner, the CT and PET portions of a study cannot be acquired simultaneously. The following quality control procedures are standard for several manufacturers and are found in several guidelines, such as those of the American Society of Nuclear Cardiology.26 Daily procedures are best performed in the morning so that potential problems are caught before clinical scanning begins.

PET Quality Control

A baseline performance evaluation of the scanner following the National Electrical Manufacturers Association (NEMA) procedures is recommended.29,30 These measurements should be performed by the installation engineers and the hospital staff physicist. The NEMA performance evaluation includes standardized measurements of count rate, resolution, and contrast that provide objective criteria for comparing the scanner against published manufacturer specifications. It also establishes the baseline performance of the camera, against which the user can document changes over time. The NEMA performance measurements should be conducted after installation of a new scanner and after major hardware upgrades; good practice also includes a yearly NEMA evaluation for comparison with the baseline.

Combined PET and CT Quality Control

Errors present in the individual PET and CT quality control procedures propagate to the final attenuation-corrected PET data. Aside from the procedures described earlier, the sequential acquisition of the CT and PET examinations presents additional challenges associated with registration. Even if the individual PET and CT daily quality controls yield acceptable results, misregistration may still occur because of misalignment after bed motion or deflection of the cantilevered scanner bed. The latter source can be difficult to anticipate because the deflection of a cantilevered bed depends on the patient's weight. These differences lead to errors in localization on fused image sets.

Mechanical alignment of the CT and PET gantries is difficult to perform within the precision of PET/CT spatial measurements. It is much easier to make small corrections in software by rigidly transforming one image matrix to match the other. For this reason, manufacturers often provide a phantom and an automated routine to check the alignment and to calculate an offset that is applied to one of the image reference frames during reconstruction. If no phantom is available, it is recommended that at least four point sources be placed in air in different planes. To make these point sources visible on CT, they can be loaded with diluted CT contrast media. In performing these measurements, it is recommended that a weight be placed on the edge of the table closest to the scanner to simulate bed deflection. Once the phantom is arranged on the bed, a CT image and a static PET image are acquired, loaded into a fusion workstation, and visually checked for alignment.
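When an automated routine is unavailable, the offset itself can be estimated from the matched point-source positions. The sketch below computes the least-squares translation between corresponding PET and CT centroids; the coordinates, function name, and restriction to a pure translation are assumptions for illustration, and commercial routines may also solve for rotation.

```python
import numpy as np

def rigid_translation_offset(pet_points_mm, ct_points_mm):
    """Least-squares translation mapping PET point-source centroids onto
    their CT counterparts: simply the mean of the pairwise differences."""
    pet = np.asarray(pet_points_mm, dtype=float)
    ct = np.asarray(ct_points_mm, dtype=float)
    return (ct - pet).mean(axis=0)  # (dx, dy, dz) in mm

# Four point sources in different planes (hypothetical centroid coordinates)
ct_pts  = [[10.0, 0.0, 50.0], [-40.0, 30.0, 120.0],
           [25.0, -35.0, 200.0], [0.0, 45.0, 280.0]]
pet_pts = [[9.2, 0.6, 51.5], [-40.8, 30.5, 121.4],
           [24.1, -34.4, 201.6], [-0.9, 45.6, 281.5]]
print(rigid_translation_offset(pet_pts, ct_pts))
# ~[0.85, -0.575, -1.5] mm, applied to one reference frame at reconstruction
```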

Accuracy of the attenuation correction requires correct transformation of CT numbers into 511-keV attenuation coefficients, with 0 HU mapped to 0.096/cm, the linear attenuation coefficient of water at 511 keV. Potential errors in attenuation correction are best assessed by reconstructing the daily PET quality control image with the accompanying CT image used to check uniformity. The reconstruction parameters should be the same as those used for clinical scans. On any slice in the phantom field of view, the activity profile across the diameter should be flat; concave or convex profiles are a sign of inaccurate attenuation correction.
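The transformation itself is commonly implemented as a bilinear mapping with a break at 0 HU. The sketch below assumes that published form; the bone-segment slope is an illustrative number, since in practice it depends on the CT tube voltage and the manufacturer's calibration.

```python
MU_WATER_511 = 0.096  # /cm, linear attenuation coefficient of water at 511 keV

def hu_to_mu_511(hu, bone_slope=5.1e-5):
    """Bilinear CT-number-to-511-keV-attenuation conversion (sketch).

    Below 0 HU, mu rises linearly from 0 at -1000 HU (air) to the water
    value at 0 HU. Above 0 HU, a shallower slope is used because the
    photoelectric enhancement of bone at CT energies overstates its
    attenuation at 511 keV. bone_slope (/cm per HU) is illustrative only.
    """
    if hu <= 0:
        return max(0.0, MU_WATER_511 * (hu + 1000.0) / 1000.0)
    return MU_WATER_511 + bone_slope * hu

for hu in (-1000, -500, 0, 500, 1000):
    print(hu, round(hu_to_mu_511(hu), 4))
# -1000 -> 0.0 (air), 0 -> 0.096 (water), 1000 -> 0.147 (dense bone)
```

Note that this mapping keeps extrapolating linearly above bone densities, which is the root of the metal implant problem discussed later in this section.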

Image Quality Control

Patient motion is a potentially large source of image degradation and requires careful inspection of the image data. Motion blurs the myocardial contours and creates mismatch between the emission and transmission image data. This situation can often be remedied by collecting a second emission scan, which is feasible for perfusion imaging agents and for FDG viability studies. Proper instruction of the patient, attention to the patient's comfort during the imaging course, and vigilance for motion during the rest-stress acquisitions keep the occurrence of this problem low. Nonetheless, the position of the patient should be monitored throughout the study with the laser alignment system supplied with the scanner.
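A simple, hypothetical way to screen dynamic or gated data for motion is to track the activity-weighted center of mass of each reconstructed frame and flag drifts beyond a tolerance, as in the sketch below. The array layout, function name, and one-voxel tolerance are assumptions, not a standard procedure.

```python
import numpy as np

def center_of_mass_drift(frames, tolerance_vox=1.0):
    """Flag frames whose activity-weighted center of mass drifts more than
    tolerance_vox voxels from the first frame.

    frames : array of shape (n_frames, nz, ny, nx) of reconstructed counts;
    each frame is assumed to contain nonzero counts.
    """
    coms = []
    for frame in frames:
        grids = np.indices(frame.shape)                # coordinate grids (z, y, x)
        coms.append([(g * frame).sum() / frame.sum() for g in grids])
    coms = np.array(coms)
    drift = np.linalg.norm(coms - coms[0], axis=1)     # voxels from frame 0
    return drift, drift > tolerance_vox

# Toy example: a point source that shifts by 2 voxels in the last frame
frames = np.zeros((3, 8, 8, 8))
frames[0, 4, 4, 4] = frames[1, 4, 4, 4] = 1.0
frames[2, 4, 4, 6] = 1.0
drift, moved = center_of_mass_drift(frames)
print(drift, moved)  # [0. 0. 2.] [False False  True]
```

Flagged frames can then be inspected visually, and the study repeated or the affected frames excluded if the motion is confirmed.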

Metal Implants

Implanted metallic components associated with implantable cardioverter-defibrillators and pacemaker leads pose a potential problem when they are located near the myocardium.31 Such devices have high mass attenuation coefficients at x-ray photon energies because of strong photoelectric absorption, resulting in high CT values (>200 HU). At 511 keV, however, the interaction of annihilation photons with metallic implants occurs primarily through Compton scattering, so their attenuation differs little from that of water. Bilinear conversion schemes therefore overestimate the attenuation coefficients of metallic implants at 511 keV, and the error propagates to the PET image (Fig. 23-22). Several metal artifact algorithms have been introduced to compensate for the misclassification of metal implants in the creation of PET attenuation maps.32-34
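Many of these compensation schemes amount to reclassifying voxels that the bilinear mapping would push to implausibly high coefficients. The sketch below shows a simple threshold-and-replace approach in the spirit of the segmentation-based methods cited above; the HU threshold and replacement coefficient are illustrative choices, not published values.

```python
import numpy as np

MU_WATER_511 = 0.096  # /cm at 511 keV

def build_mu_map_with_metal_fix(hu_image, metal_hu=3000.0, metal_mu=0.17):
    """Create a 511-keV attenuation map, reclassifying metal voxels.

    Voxels above metal_hu are treated as implant material; instead of the
    bilinear extrapolation (which overestimates mu at 511 keV), they are
    assigned the fixed coefficient metal_mu. Both values are illustrative.
    """
    hu = np.asarray(hu_image, dtype=float)
    mu = np.where(hu <= 0,
                  np.clip(MU_WATER_511 * (hu + 1000.0) / 1000.0, 0.0, None),
                  MU_WATER_511 + 5.1e-5 * hu)
    mu[hu > metal_hu] = metal_mu
    return mu

hu = np.array([[-1000.0, 0.0], [1500.0, 8000.0]])  # air, water, bone, metal
print(build_mu_map_with_metal_fix(hu))
# [[0.      0.096 ]
#  [0.1725  0.17  ]]
```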

Elevated Activity in Inferior Wall at Stress

At stress, the increase in lung volume flattens the diaphragm and shifts the heart downward. Therefore, when a rest CTAC is used to correct the stress study, a large portion of the diaphragm is present in the transaxial slices containing the inferior portion of the myocardium. The result is an overcorrection of attenuation in those planes and an artificially high activity concentration in the emission study. Because the data are most often scaled for display to the hottest regions, the lateral or superior regions can then appear decreased (Fig. 23-24).

REFERENCES

1 Knoll GF. Radiation Detection and Measurement. New York: John Wiley & Sons; 1979. p. 694.

2 Loghin C, Sdringola S, Gould KL. Common artifacts in PET myocardial perfusion images due to attenuation-emission misregistration: clinical significance, causes, and solutions. J Nucl Med. 2004;45:1029-1039.

3 Martinez-Möller A, Souvatzoglou M, Navab N, et al. Artifacts from misaligned CT in cardiac perfusion PET/CT studies: frequency, effects, and potential solutions. J Nucl Med. 2007;48:188-193.

4 Martinez-Möller A, Zikic D, Botnar RM, et al. Dual cardiac-respiratory gated PET: implementation and results from a feasibility study. Eur J Nucl Med Mol Imaging. 2007;34:1447-1454.

5 Nye JA, Esteves F, Votaw JR. Minimizing artifacts resulting from respiratory and cardiac motion by optimization of the transmission scan in cardiac PET/CT. Med Phys. 2007;34:1901-1906.

6 Pan T, Mawlawi O, Luo D, et al. Attenuation correction of PET cardiac data with low-dose average CT in PET/CT. Med Phys. 2006;33:3931-3938.

7 Schuster DM, Halkar RK, Esteves FP, et al. Investigation of emission-transmission misalignment artifacts on rubidium-82 cardiac PET with adenosine pharmacologic stress. Mol Imaging Biol. 2008;10:201-208.

8 Watson CC, Newport D, Casey ME. A single scatter simulation technique for scatter correction in 3D PET. In: Grangeat P, Amans JL, editors. 1995 International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine. Dordrecht, The Netherlands: Kluwer Academic; 1996.

9 Hutchins GD, Schwaiger M, Rosenspire KC, et al. Noninvasive quantification of regional blood flow in the human heart using N-13 ammonia and dynamic positron emission tomographic imaging. J Am Coll Cardiol. 1990;15:1032-1042.

10 Hutchins GD. What is the best approach to quantify myocardial blood flow with PET? J Nucl Med. 2001;42:1183-1184.

11 Krivokapich J, Smith GT, Huang SC, et al. 13N ammonia myocardial imaging at rest and with exercise in normal volunteers. Quantification of absolute myocardial perfusion with dynamic positron emission tomography. Circulation. 1989;80:1328-1337.

12 Campisi R, Nathan L, Pampaloni MH, et al. Noninvasive assessment of coronary microcirculatory function in postmenopausal women and effects of short-term and long-term estrogen administration. Circulation. 2002;105:425-430.

13 Czernin J, Barnard RJ, Sun KT, et al. Effect of short-term cardiovascular conditioning and low-fat diet on myocardial blood flow and flow reserve. Circulation. 1995;92:197-204.

14 Laine H, Nuutila P, Luotolahti M, et al. Insulin-induced increment of coronary flow reserve is not abolished by dexamethasone in healthy young men. J Clin Endocrinol Metab. 2000;85:1868-1873.

15 Mellwig KP, Baller D, Gleichmann U, et al. Improvement of coronary vasodilatation capacity through single LDL apheresis. Atherosclerosis. 1998;139:173-178.

16 Bacharach SL, Bax JJ, Case J, et al. PET myocardial glucose metabolism and perfusion imaging: Part 1—Guidelines for data acquisition and patient preparation. J Nucl Cardiol. 2003;10:543-556.

17 Iskandrian AS, Verani MS, Heo J. Pharmacologic stress testing: mechanism of action, hemodynamic responses, and results in detection of coronary artery disease. J Nucl Cardiol. 1994;1:94-111.

18 Leppo JA. Comparison of pharmacologic stress agents. J Nucl Cardiol. 1996;3(pt 2):S22-S26.

19 Holmberg MJ, Mohiuddin SM, Hilleman DE, et al. Outcomes and costs of positron emission tomography: comparison of intravenous adenosine and intravenous dipyridamole. Clin Ther. 1997;19:570-581.

20 Wilson RF, Wyche K, Christensen BV, et al. Effects of adenosine on human coronary arterial circulation. Circulation. 1990;82:1595-1606.

21 Thompson CA. FDA approves pharmacologic stress agent. Am J Health Syst Pharm. 2008;65:890.

22 Schelbert HR, Beanlands R, Bengel F, et al. PET myocardial perfusion and glucose metabolism imaging: Part 2—Guidelines for interpretation and reporting. J Nucl Cardiol. 2003;10:557-571.

23 Radiation dose to patients from radiopharmaceuticals. Ann ICRP. 1987;18:1-29.

24 deKemp R, Beanlands R. A revised effective dose estimate for the PET perfusion tracer Rb-82 [abstract]. J Nucl Med. 2008;49(suppl 1):183P.

25 ICRP Publication 53: Radiation Dose to Patients from Radiopharmaceuticals. Ann ICRP. 1988;18(1-4).

26 Nichols KJ, Bacharach SL, Bergmann SR, et al. Instrumentation quality assurance and performance. J Nucl Cardiol. 2006;13:e25-e41.

27 PET Phantom Instructions for Evaluation of PET Image Quality: ACR Nuclear Medicine Accreditation Program. Reston, VA: American College of Radiology; 2009.

28 IEC Standard 61675-1. Radionuclide Imaging Devices—Characteristics and Test Conditions. Part 1: Positron Emission Tomographs. Geneva: International Electrotechnical Commission; 1998.

29 Daube-Witherspoon ME, Karp JS, Casey ME, et al. PET performance measurements using the NEMA NU 2-2001 standard. J Nucl Med. 2002;43:1398-1409.

30 Watson CC, Casey ME, Eriksson L, et al. NEMA NU 2 performance tests for scanners with intrinsic radioactivity. J Nucl Med. 2004;45:822-826.

31 DiFilippo FP, Brunken RC. Do implanted pacemaker leads and ICD leads cause metal-related artifact in cardiac PET/CT? J Nucl Med. 2005;46:436-443.

32 Yu H, Zeng K, Bharkhada DK, et al. A segmentation-based method for metal artifact reduction. Acad Radiol. 2007;14:495-504.

33 Kennedy JA, Israel O, Frenkel A, et al. The reduction of artifacts due to metal hip implants in CT-attenuation corrected PET images from hybrid PET/CT scanners. Med Biol Eng Comput. 2007;45:553-562.

34 Hamill JJ, Brunken RC, Bybel B, et al. A knowledge-based method for reducing attenuation artefacts caused by cardiac appliances in myocardial PET/CT. Phys Med Biol. 2006;51:2901-2918.