
CHAPTER 83 Basic Three-Dimensional Postprocessing in Computed Tomographic and Magnetic Resonance Angiography

The three-dimensional data acquired by computed tomography angiography (CTA) or magnetic resonance angiography (MRA) can be processed off-line using a variety of commercially available techniques that enable isolation and improved viewing of specific vascular segments and their anatomic relationships. The most popular and widely available postprocessing tools for CTA and MRA data are multiplanar reformation (MPR), maximum intensity projection (MIP), and volume rendering (VR).15 This chapter reviews these basic methods for postprocessing of CTA and MRA data, highlighting their strengths and pitfalls. For different clinical applications, the specific methods of highest value will vary. Individual techniques for various anatomic regions are covered elsewhere in this text.

MULTIPLANAR REFORMATION

MPR refers to the reconstruction of three-dimensional data into a new orientation. For example, a volumetric CTA data set acquired in the axial plane can be reconstructed for viewing in a coronal plane or a plane of any obliquity using MPR. MPR images can be reconstructed as planar (e.g., coronal, sagittal, axial, oblique; Figs. 83-1 and 83-2) or curved plane images (Fig. 83-3). The process does not change the original data or voxels. In the MPR process, interpolation is needed to rearrange data into a different coordinate system. Curved reformatting is useful for viewing tortuous vessels such as the coronary arteries on CTA because it unravels the tortuous segments for linear viewing of the vessel (i.e., straightens out the vessel along its length). It is particularly useful for showing vascular detail in cross-sectional profile along the vessel length, facilitating characterization of stenoses or other intraluminal abnormalities. The pitfall is that manually defined curved planes may not be accurate enough for measurements, and the process can be time consuming because operator interaction is often required. Automated curve detection methods can expedite processing but fail if there are image artifacts within the data (e.g., motion blurring or high-value nonvascular structures such as calcium on CTA). Such artifacts may be erroneously labeled as vessel lumen, thereby introducing inaccuracies into the curved vascular lumen reformation.
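
As a concrete illustration of planar reformation with interpolation, the following Python sketch samples an oblique plane from a volumetric data set using trilinear interpolation via scipy.ndimage.map_coordinates. The array layout, plane parameters, and function name are illustrative assumptions rather than a vendor implementation; curved reformation works analogously by sampling along a vessel centerline instead of a flat plane.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_mpr(volume, origin, u, v, size=(256, 256), spacing=1.0):
    """Sample an oblique MPR plane from a 3D volume (illustrative sketch).

    volume  : 3D array indexed as (z, y, x)
    origin  : (z, y, x) voxel coordinate of the plane center
    u, v    : orthogonal direction vectors (in voxel units) spanning the plane
    size    : output image size in pixels
    spacing : step between samples along u and v, in voxels
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    rows = (np.arange(size[0]) - size[0] / 2.0) * spacing
    cols = (np.arange(size[1]) - size[1] / 2.0) * spacing
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    # Each output pixel maps to origin + rr*u + cc*v in voxel coordinates.
    coords = (np.asarray(origin, dtype=float)[:, None, None]
              + rr[None] * u[:, None, None]
              + cc[None] * v[:, None, None])
    # order=1 performs trilinear interpolation of the original voxels;
    # the original data are never modified, only resampled.
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Example: a coronal-like plane through the middle of a placeholder volume.
vol = np.random.rand(200, 256, 256)            # stand-in for an axial CTA stack
plane = oblique_mpr(vol, origin=(100, 128, 128),
                    u=(1, 0, 0),               # plane axis along z
                    v=(0, 0, 1))               # plane axis along x
```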

Nearest Neighbor Interpolation

Interpolation is an important concept in the context of image reconstruction. As demonstrated in Figure 83-4, a slice of an original acquired image is composed of known data points; in this example, the circles are the known data points on the acquisition grid. The triangles are points that must be interpolated because the MPR plane does not coincide with the original grid. The signal intensity (SI) of the known data point on the left is SIa = 100, and that of the known data point on the right is SIb = 100. The simplest interpolation is nearest neighbor. In nearest neighbor interpolation, the new interpolated data point (i.e., the triangle) is closer to the right data point, where SIb is, so the unknown interpolated signal intensity (SIi) is set equal to SIb, that is, SIi = SIb.
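
A minimal one-dimensional sketch of the nearest neighbor rule described above; the sample position and function name are illustrative, not part of the text.

```python
def nearest_neighbor(x, xa, sa, xb, sb):
    """Interpolate the signal intensity at position x between two known points.

    (xa, sa) and (xb, sb) are the known data points (circles in Fig. 83-4);
    x is the location of the new MPR sample (triangle). The interpolated
    value is simply copied from whichever known point is closer.
    """
    return sa if abs(x - xa) <= abs(x - xb) else sb

# The new sample lies closer to the right-hand point, so SIi takes SIb.
SIa, SIb = 100, 100                               # known values from the text
SIi = nearest_neighbor(x=0.7, xa=0.0, sa=SIa, xb=1.0, sb=SIb)
print(SIi)                                        # 100
```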

MAXIMUM INTENSITY PROJECTION

MIP is another common method for two-dimensional projectional viewing of CTA and MRA three-dimensional data. In MIP, the highest signal intensity values encountered by a ray cast through the object to the viewer’s eye are projected (Fig. 83-5). MIP does not average the signal intensities from slice to slice along the projection axis (mean value); instead, it finds the highest value along each projection line and assigns that value to the corresponding pixel on the two-dimensional projected image. For CTA, the pixel values are in density or Hounsfield units; for MRA, the pixel values are in signal intensity. In this way, three-dimensional data can be collapsed into a two-dimensional projection image (see Fig. 83-2). MIP can be generated using the whole three-dimensional volume or a smaller subvolume (slab, or subvolume, MIP; Figs. 83-6 and 83-7).
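
Because MIP simply keeps the brightest value along each ray, both full-volume and slab MIP reduce to a maximum operation along the projection axis. The following numpy sketch assumes an axial CTA volume stored as a (z, y, x) array and a projection along z; the placeholder array, slab position, and thickness are illustrative choices.

```python
import numpy as np

# Placeholder CTA volume in Hounsfield units, indexed as (z, y, x).
volume = np.random.randint(-1000, 1500, size=(200, 256, 256)).astype(np.int16)

# Full-volume MIP: for each (y, x) ray, keep the single brightest voxel
# encountered along z; the 3D data collapse into one 2D projection image.
full_mip = volume.max(axis=0)

# Subvolume (slab) MIP: restrict the projection to a thin slab of slices,
# which limits overlap from other vessels while still giving a projection.
z0, thickness = 80, 20
slab_mip = volume[z0:z0 + thickness].max(axis=0)

# For comparison, an average projection blurs small bright structures,
# which is why MIP uses the maximum rather than the mean along the ray.
avg_projection = volume.mean(axis=0)
```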

MIP is useful for projected viewing of CTA and MRA in views similar to conventional x-ray angiograms. MIP images provide the big picture but are often suboptimal for viewing specific segments that are tortuous, such as near vessel origins or bifurcations, where subvolume MIP may be more helpful, especially if there are overlapping vessels such as the celiac artery (Fig. 83-8). Furthermore, because it simply projects the brightest values, full-volume or thick-slab MIP processing does not provide depth perception: the three-dimensional data are collapsed so that only the brightest values are viewed. Subvolume MIP provides planar viewing of thinner volumes, as opposed to the single-voxel-thick viewing of MPR, but may also mask luminal abnormalities. The principal pitfall of MIP is the loss of depth information; MIP therefore misrepresents anatomic spatial relationships along the depth direction. Moreover, MIP shows only the highest intensity along the projected ray. A high-intensity structure such as calcification will obscure information from intravascular contrast material, and vessels with low signal intensity values may be partially or completely imperceptible on MIP images. Alternatively, the bright signal of the lumen may mask subtle detail such as an intimal tear in an arterial dissection.

VOLUME RENDERING

Volume rendering (VR) is a visualization technique that represents a three-dimensional CTA or MRA data set (see Figs. 83-2, 83-7, and 83-8) in an opaque or translucent fashion while preserving the depth information of the data set, which is not afforded by MIP reconstruction (Fig. 83-9). It has replaced surface rendering in most imaging applications, with the notable exception of interior vessel analysis. VR assigns opacity values based on a percentage classification from 0% to 100% using a variety of computational techniques. It uses all of the acquired data, so it requires greater processing power than the other applications discussed. Once the data have been assigned percentages, each tissue is assigned a color and a degree of transparency. Then, by casting simulated rays of light through the volume, VR generates a three-dimensional image. The color used in volume rendering is a pseudocolor that does not represent the true optical color of the tissues; however, color enhances the human eye’s ability to perceive depth relationships (see Fig. 83-3). On VR reconstruction, the three-dimensional data can be visualized using various degrees of opacity and transparency, which afford different viewing perspectives of the image data (Fig. 83-10).
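
A hedged sketch of how such a percentage classification might be implemented: a simple transfer function maps each voxel value to a pseudocolor and an opacity between 0% and 100%. The Hounsfield-unit breakpoints and the color ramp below are arbitrary illustrative choices, not vendor settings.

```python
import numpy as np

def transfer_function(hu):
    """Map CT numbers (HU) to RGBA values in [0, 1] (illustrative sketch).

    Opacity rises linearly from 0% below ~150 HU (soft tissue suppressed)
    to 100% above ~500 HU (dense contrast, calcium, and bone fully opaque).
    The color ramp is a pseudocolor and does not reflect true tissue color.
    """
    hu = np.asarray(hu, dtype=float)
    alpha = np.clip((hu - 150.0) / (500.0 - 150.0), 0.0, 1.0)
    # Simple red-to-white pseudocolor ramp driven by the same classification.
    r = np.ones_like(alpha)
    g = alpha
    b = alpha
    return np.stack([r, g, b, alpha], axis=-1)

# From fully transparent (-50 HU, soft tissue) to fully opaque (800 HU, bone).
rgba = transfer_function([-50, 200, 400, 800])
```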

The basic idea of volume rendering is to find the best approximation of the low-albedo optical model, which relates the intensity and opacity functions defined over the volume to the intensity produced in the image plane. A common theoretical model on which VR algorithms are based is described in equation (2)6:

(2) $I = \sum_{i=0}^{n} C_i \, \alpha_i \prod_{j=0}^{i-1} \left(1 - \alpha_j\right)$

where $\alpha_i$ is the opacity of the $i$th sample along the ray and $C_i$ is the local color value derived from the illumination model. All algorithms obtain colors and opacities at discrete intervals along a linear path and composite them in front-to-back order. The raw volume densities are used to index the transfer functions for color and opacity; thus, the fine details of the volume data can be expressed in the final image by using different transfer functions.
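
The following Python sketch illustrates equation 2 for a single ray: colors and opacities are obtained at discrete samples (here supplied directly rather than looked up from transfer functions) and composited in front-to-back order. The sample values and the early-termination threshold are illustrative assumptions.

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Composite samples along one ray, front to back (illustrative sketch).

    colors : (n, 3) array of local color values C_i at the ray samples
    alphas : (n,)  array of opacities alpha_i at the same samples
    Implements I = sum_i C_i * alpha_i * prod_{j<i} (1 - alpha_j).
    """
    accumulated = np.zeros(3)
    transparency = 1.0                        # running product of (1 - alpha_j)
    for c, a in zip(colors, alphas):
        accumulated += transparency * a * c
        transparency *= (1.0 - a)
        if transparency < 1e-3:               # early ray termination
            break
    return accumulated

# One ray with three samples: dim soft tissue, bright contrast, then bone.
colors = np.array([[0.2, 0.2, 0.2], [1.0, 0.4, 0.4], [1.0, 1.0, 1.0]])
alphas = np.array([0.05, 0.80, 0.95])
pixel = composite_front_to_back(colors, alphas)
```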

Therefore, there are many adjustable parameters in volume rendering, such as window center, window width, color, transparency, degree of opacity, and shading. Different vendors have different standards for parameters as well as core algorithms for volume rendering, so the appearance of rendered images may vary slightly among vendors.

In the volume rendering process, structures are extracted from the volume data and rendered on the basis of fuzzy or percentage classification, and the resulting images are different from, and generally more useful than, images generated using surface rendering. VR as described here is a direct volume rendering (DVR) method. DVR techniques include ray casting, splatting, shear warp, texture mapping, and hardware-driven volume rendering. Each approach has its own advantages and disadvantages. Shear warp and three-dimensional texture-mapping volume rendering are designed to maximize frame rates at the expense of image quality and are used for the assessment of dynamic three-dimensional data sets. Image-aligned splatting and ray casting are designed to achieve high image quality at the expense of performance. A combination of these techniques is possible for specific applications; however, this discussion is beyond the scope of this basic introductory chapter on three-dimensional postprocessing.