General and Historical Considerations of Radiotherapy and Radiosurgery

Published on 26/03/2015 by admin


CHAPTER 247 General and Historical Considerations of Radiotherapy and Radiosurgery

In the little more than a century since its discovery, ionizing radiation has become an indispensable tool in neurosurgical practice. The past 113 years have seen significant growth in our understanding and use of ionizing radiation to treat neurosurgical disorders. Therapeutic radiation interacts with cellular components at the subatomic level, causing DNA damage either through directly ionizing action or through indirectly ionizing events mediated by free radical formation.13 Currently, radiation therapy is most commonly administered as fractionated radiotherapy (FRT), whereby modest doses from a limited number of beams are applied daily for several weeks, and as stereotactic radiosurgery (SR), whereby, in just a few sessions,15 much larger doses are delivered to a precisely defined target from a large number of fixed or rotational beams.4 Although not currently in widespread use, interest continues in more specialized techniques such as brachytherapy and particle-beam radiation therapy. In brachytherapy, high doses are delivered internally and continuously by the implantation of radioactive isotopes; particle-beam therapy exploits the unique physical and radiobiologic properties of cyclotron- or reactor-generated particles such as protons, neutrons, and carbon and helium nuclei.

FRT has been proved to extend the lives of patients with malignant gliomas5–7 and metastatic brain and extradural spine tumors8–12 at the prospective randomized clinical trial level (class 1 evidence).13 It can prolong local control for patients with benign brain tumors such as meningiomas,14,15 pituitary adenomas,16,17 craniopharyngiomas,18,19 and schwannomas.20,21 SR has reached the class 1 evidence level for metastatic brain tumors.22,23 It is showing tremendous promise for patients with benign brain tumors, including schwannomas,24,25 meningiomas,26,27 pituitary adenomas,28–31 craniopharyngiomas,32 and glomus tumors.33,34 In addition, SR is unique in its ability to obliterate vascular malformations35,36 and treat functional conditions such as trigeminal neuralgia,37,38 movement disorders,39,40 and certain select epilepsy41 and psychiatric disorders.42,43 Brachytherapy is currently used to control the secretions of tumor cyst walls44–46 as well as in select brain tumor resection cavity settings.47–49

Although the field of radiation oncology shares a common ancestry with (diagnostic) radiology, dating back to Roentgen,50 Becquerel,51 and Curie,52 diverging technologic, biologic, and professional developments gave rise to separate fields by mid-century. To a great extent, the history of radiation oncology in the 20th century involved the search for ways to boost the efficacy of treatment, either through the development of technology to deliver ever higher doses at greater tissue depth or by minimizing collateral damage through more accurate dose delivery. Among the many developments, four general themes emerge: (1) the evolution of more powerful radiation generators capable of producing beams sufficiently penetrating to treat deep-seated tumors, (2) the elucidation of the principles of radiation biology, (3) the application of increasingly sophisticated imaging and computational technology, and (4) the search for novel forms of radiation.

The Beginning

X-rays were discovered by Roentgen in 1895,50 natural radioactivity by Becquerel in 1896,51 and radium by the Curies in 1898.52 Recognizing the potential of these new forms of radiant energy, not only for imaging but also for therapy, physicians soon applied them to malignant disease. The first empirical treatment of a patient with cancer (advanced breast cancer) occurred in 1896,53 and the first cancer patient was cured by radiation (basal cell skin cancer) in 1899,54 only 4 years after Roentgen’s discovery. Isodose lines were already being used for x-ray therapy by the early 1920s.55 Although Stenbeck was the first to treat cancer with multiple doses,56 it was not until Coutard’s work that radiation therapy became recognizably modern, featuring multiple doses (fractions) externally applied using radiation beams (Fig. 247-1A).57,58

The first use of ionizing radiation to treat primary brain tumors paralleled the advent of neurosurgery as a separate specialty and antedated the dawn of FRT. In 1909, Gramegna treated a patient with acromegaly using x-rays and noted visual improvement.59 The first neurosurgical use of brachytherapy was also for acromegaly in 1912, when Hirsch placed radium into the sella turcica after transsphenoidal tumor resection.60 Throughout the 1920s, Harvey Cushing used “Roentgen therapy” for select cases of gliomas and medulloblastomas,61–64 as well as pituitary adenomas,61,64,65 and even tried using it for cerebral angiomas (arteriovenous malformations [AVMs]).61,63,66 In 1931, Ernest Sachs reported on a technique of intraoperative x-ray therapy designed to take advantage of the lack of intervening skull and scalp.67 The first report of using FRT for metastatic brain tumors was by Chao and colleagues in 1954,68 and a subsequent report was made by Chu and Hilaris in 1961 (see Fig. 247-1A).69

The Search for Energy and Penetration

The earliest generation of radiation units, based on vacuum tube technology, was capable of producing only low-energy x-rays suitable for treating superficial targets such as skin cancers and small lymph nodes. The Coolidge tube (140 kV), developed in 1913, was the first step toward a consistent and reliable therapeutic x-ray machine (Fig. 247-2).70 A 200-kV machine became available in 1922 (see Fig. 247-1A).54 These earliest x-ray machines suffered from poor tissue penetration and high rates of superficial skin burns. The units of subsequent generations, such as Van de Graaff generators, cyclotrons, synchrocyclotrons, betatrons, and bevatrons, were eventually capable of producing high-energy x-rays but were impractical because of high cost, low beam output, or other technical factors (Table 247-1; see Fig. 247-1A).

TABLE 247-1 Photon (X-Ray and Gamma Ray) Therapy Categories and Energies

NAME PHOTON ENERGY
Kilovoltage therapy 20 to 200 kV
Orthovoltage therapy 200 to 500 kV
Supervoltage therapy 500 to 1000 kV
Megavoltage therapy >1000 kV (≥1 MV)

The modern megavoltage teletherapy era began in the 1950s with the first commercially available teletherapy units, based initially on radium-226 and subsequently on cobalt-60 (1.2 to 1.25 MV).71 At this energy, photons could reach any depth in the human body with residual energy high enough to have an ionizing therapeutic effect. The first cobalt teletherapy machine became commercially available in 1951, and 1120 machines were sold to hospitals over the first 10 years.71

The first linear accelerator (LINAC) designed for therapeutic use was installed in the United Kingdom in 1943.72,73 However, LINACs did not become commercially available until 1953, and the first therapeutic LINAC was not installed in the United States until 1957.74 Unlike their vacuum tube or isotope-based predecessors, LINACs produce high-energy x-rays by accelerating electrons and directing them onto a tungsten target, where x-rays are generated through a process known as bremsstrahlung (Fig. 247-3). Unlike isotope-based teletherapy units, LINACs have a radiation source that does not decay over time, can produce even higher-energy, more penetrating beams (initially 1.2 to 4 MV, now as high as 24 MV, compared with 1.25 MV for cobalt-60), and can also produce electron beams. Their superior beams, their finer handling characteristics, and the elimination of Nuclear Regulatory Commission oversight led to the eventual eclipse of cobalt-60 teletherapy in the United States by the late 1960s (see Fig. 247-1B).

The Establishment of Consistency and Reproducibility

Major landmarks in the history of moving from modern megavoltage teletherapy toward therapeutic standardization were the establishment of the American Society for Therapeutic Radiology and Oncology (ASTRO) in 1958,75 the International Society for Research in Stereoencephalotomy in 1961 (now known as the American Society for Stereotactic and Functional Neurosurgery as well as the Joint Section for Stereotactic and Functional Surgery of both the American Association of Neurological Surgeons and the Congress of Neurological Surgeons76), the European Organisation for the Research and Treatment of Cancer (EORTC) in 1962,77 and the Radiation Therapy Oncology Group (RTOG) in 1967.78

In the United States, the RTOG had particular influence on this process. It is currently a National Cancer Institute–funded multidisciplinary cooperative group that runs prospective randomized clinical trials (RCTs) that include ionizing radiation in the treatment of cancer. It was a series of RTOG RCTs in the 1970s and early 1980s that established the standard fractionation schemes still in use for metastatic brain tumors8,10 and contributed to establishing the fractionation schemes still in use for primary brain tumors. RTOG has also had an influence on standard practices for SR.23,79,80 In the United States, it is largely the empirical data from RTOG RCTs, coupled with the professional education efforts of ASTRO and the neurosurgery national societies and sections, that have led to standard and reproducible clinical results using ionizing radiation to treat neurological conditions at most medical centers across the country.

Emergence of Radiobiology and Limitation of Radiation Injury

In 1934, Coutard established fractionation as the preferred radiotherapy modality, ushering in the modern practice of FRT.58 In his work with laryngeal cancer, fractionation—the administration of small doses repetitively over time—enabled the delivery of higher total doses than was possible with single large fractions so that therapeutic doses could be given without excessive normal tissue toxicity (skin burns). The importance of understanding radiobiology thus emerges as the second major theme of radiation oncology history. This was critical to limit patient toxicity without sacrificing optimal therapeutic effect. However, experimental study and theory would take time to catch up with the essentially empirically derived technique of fractionation in order to provide the rationale for the dose-fractionation schedules already standardized in the clinic.

The emergence of the modern study of radiobiology can be traced back at least as far as 1953 when Gray carried out the first studies on oxygen and radiation-induced growth inhibition81 (see Fig. 247-1B). In 1965, Elkind and associates discovered sublethal damage repair and linked this finding to dose fractionation.82 As it related to the central nervous system (CNS), the 1970s and 1980s were the key decades for autopsy, experimental animal, and in vitro work, with seminal contributions by Sheline,83,84 Fowler,85 Hall,74,86,87 and many others (Fig. 247-1C).

Key concepts emerged that are discussed in detail in Chapter 249 on radiobiology in this textbook. These include Sheline’s formulation of differential tissue effects (early acute reactions, early delayed reactions, and late delayed reactions) and the corresponding separation of tissue into early- and late-responding tissue,83,84 as well as the classic four Rs of radiobiology (repair, reassortment of cells within the cell cycle, repopulation, and reoxygenation) (Table 247-2).84 Further theoretical refinements involved modeling biologic effects through isoeffect plots, including the empirically derived nominal single dose (NSD) and time-dose fractionation (TDF) models88,89 and the linear quadratic formula.90–93 In particular, insights derived from the linear quadratic formula’s empirically derived α/β ratios helped suggest dose-fractionation schedules to either enhance tumoricidal efficacy or diminish normal tissue toxicity. The simultaneously accumulating empirical RCT data from the RTOG, as well as the EORTC, among others, eventually confirmed the translational validity and utility of these experimentally derived concepts.
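For reference, the linear quadratic model discussed above is conventionally written as follows; the notation here (surviving fraction S, total dose D, dose per fraction d, n fractions) is the standard textbook form, supplied for illustration rather than taken from this chapter:

```latex
% Surviving fraction after a single dose D (linear quadratic model):
S(D) = e^{-(\alpha D + \beta D^{2})}
% After n fractions of size d (total dose D = nd), assuming complete
% sublethal damage repair between fractions:
S = \left[ e^{-(\alpha d + \beta d^{2})} \right]^{n} = e^{-n(\alpha d + \beta d^{2})}
% Biologically effective dose, used to compare fractionation schedules:
\mathrm{BED} = nd \left( 1 + \frac{d}{\alpha/\beta} \right)
```

Because late-responding normal tissues typically have low α/β ratios (on the order of 2 to 3 Gy) while most tumors and early-responding tissues have high ones (on the order of 10 Gy), reducing the dose per fraction spares late-responding tissue more than it spares tumor, which is the quantitative rationale behind the dose-fractionation schedules described above.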

TABLE 247-2 The Four Rs of Radiobiology

CONCEPT RATIONALE
Reoxygenation Hypoxic cells or hypoxemic areas within tumors are relatively more resistant to a given dose of radiation. Dynamic biologic changes within the tumor suggest that cells that are hypoxic during one fraction may be less so during subsequent fractions, and fractionation will thus increase the chances of desired effect on the largest number of cells.
Reassortment A given dose of photons is most likely to irreversibly damage DNA if the cell is in mitosis and the DNA is condensed as chromosomes. Cells that are not in mitosis during one fraction may be so during subsequent fractions, so fractionation will increase the chances of desired effect on the largest number of cells.
Repair The time between fractions allows for repair of sublethally damaged cells before the next dose. This is an advantage for fractionation only if normal tissue in the treatment volume is more efficient at this process than tumor cells, which is usually the case.
Repopulation The time between fractions allows for replacement of lost cells before the next dose. This is an advantage for fractionation only if normal tissue in the treatment volume is more efficient at this process than tumor cells, which may or may not be the case for a given tumor type.

Neurosurgery played an important part in this work, particularly as it related to assessing single-dose tolerance of normal CNS tissue for SR. Kjellberg developed his 1% radiation necrosis risk isoeffect line by extrapolating from a combination of autopsy and animal data.94 This was later improved on by Flickinger’s more sophisticated 3% radiation necrosis risk isoeffect curve for SR using the linear quadratic formula.95

Imaging and Targeting

It is axiomatic that to hit a target optimally you must be able to see it accurately; advances in imaging represent the third major theme in radiation oncology history. However, until the advent of tomographic imaging with computed tomography (CT) in the late 1970s and magnetic resonance imaging (MRI) in the late 1980s, the targeting of CNS lesions was somewhat problematic. Up to that point, the state of the art consisted of nuclear brain scans, air ventriculograms, angiograms, and the craniotomy defect and standard anatomic landmarks on skull x-rays. In fact, glycerol rhizotomy for trigeminal neuralgia derived from early attempts to instill radiopaque tantalum powder into the trigeminal cistern so that the ganglion could be imaged for SR (glycerol was originally only the vehicle for the tantalum powder used for imaging).96

Indeed, before the late 1980s, it was difficult to know whether the inconsistent or poor FRT outcomes were from resistant tumors or just poor targeting. Uncertainties in localization required treatment plans with large margins, increasing neurotoxicity. For the purposes of radiation planning, the reconstruction of space was as much art as science. As late as 1990, it was possible for a planning technique dependent on overhead projectors, push-pins, and wax pencils to be presented at national conferences.97 Also as late as 1990, it was not uncommon for radiation oncologists at the University of Pittsburgh to walk into the neuroradiology reading room with a lateral scout skull film and ask the neuroradiologist to please draw the limits of the tumor derived from CT or MRI onto the film with a wax pencil for target planning purposes (Fig. 247-4).

The neurosurgical concept of stereotaxis for three-dimensional space registration was a key evolutionary advance adopted by FRT. Although frame-based stereotaxis had been widely used for SR since it was conceived in 1951,98 it was the development of frameless stereotaxy with virtual reconstruction of anatomic space, three-dimensional surface coregistration, and reliable tracking navigation techniques that allowed FRT planning to achieve consistent and reliable targeting accuracy. The first neurosurgery proprioceptive arm frameless stereotaxy system was not approved by the U.S. Food and Drug Administration until 1993,99 and the first infrared optical tracking systems100 did not become commercially available for neurosurgery until 1996. Applications of these concepts and this technology to FRT began soon thereafter (see Fig. 247-1C). Currently, FRT applications of frameless stereotaxic localization and tracking rely on relocatable custom-molded fixation masks. Although almost all current systems are based on CT targeting, image fusion techniques have allowed MRI data and even molecular-metabolic (i.e., positron emission tomography [PET]) imaging data to be secondarily incorporated.

Computation Advances

Computers began to be introduced into FRT planning in the late 1970s,71 but it was not until the late 1980s and early 1990s that they found widespread application. The coupling of planar tomographic imaging and stereotaxis with treatment planning software designed to take advantage of increasingly powerful computational engines made three-dimensional, voxel-by-voxel calculation possible. Before this, two-dimensional FRT planning used single-plane dose calculations based on hand-measured external contours. Only crude blocking, with metal alloys poured into Styrofoam molds based on orthogonal x-rays, was possible, and typically only a limited number (two to four) of beams (i.e., ports) could be used.

With the advent of three-dimensional treatment planning, dose distribution could be calculated, and therefore optimized, in all three dimensions. More beams could be used, including non-coplanar beams, because the target and normal tissue anatomy could be reconstructed from any orientation in a “beam’s eye view” (BEV). Dose could be better sculpted to the three-dimensional target volume, which was directly visualized on the target images so that normal structures could be more easily excluded. The more powerful planning systems also made practical the calculation and analysis of dose-volume histograms, whose impact on clinical practice is still being assessed. Moreover, the more powerful computational engines made practical (i.e., faster) more sophisticated beam-modeling algorithms such as superposition-convolution and fast Monte Carlo dose calculations, although the ultimate goal of true Monte Carlo dose calculation has not yet become routine.101–107 In SR, separate isocenter interactions, as well as dynamic rotational arc isocenter contributions, could be rapidly and accurately calculated for the first time, opening up another level of SR planning (Fig. 247-5). Indeed, mobile LINAC SR and complex multi-isocenter treatment planning for fixed-position SR units became practical for the first time.
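The dose-volume histogram concept mentioned above can be illustrated with a minimal sketch; the dose values, exponential falloff model, and function name below are illustrative assumptions, not any planning system’s actual algorithm:

```python
import math

def cumulative_dvh(voxel_doses, thresholds):
    # For each threshold dose, report the fraction of the structure's
    # voxels receiving at least that dose (a cumulative DVH).
    n = len(voxel_doses)
    return [sum(1 for d in voxel_doses if d >= t) / n for t in thresholds]

# Hypothetical structure sampled along 30 voxels, with dose falling
# off exponentially from a 60-Gy peak at the isocenter.
doses = [60.0 * math.exp(-r / 10.0) for r in range(30)]

# Cumulative DVH evaluated at 10-Gy steps; the curve starts at 1.0
# (every voxel receives at least 0 Gy) and decreases monotonically.
dvh = cumulative_dvh(doses, thresholds=[0, 10, 20, 30, 40, 50, 60])
```

Plotting threshold dose against these fractions yields the familiar monotonically decreasing DVH curve that planners inspect for target coverage and normal-structure sparing.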

Inverse planning is the most recent advance in treatment planning and is critically dependent on computational power (Fig. 247-6). Before this development, beam design was empirical, starting from an initial estimate of beam number and orientation and then progressively refined through a process of iterative assessment and redesign. Often, optimization relied on the individual experience and persistence of the planner. In contrast, with inverse planning, the end-result dose distribution, dose-volume constraints, and dose limits to nearby structures are specified first, followed by automated beam optimization (number of beams, their relative weights and intensities, and customized beam shapes) through the repeated analysis and refinement of iteratively generated treatment combinations. Inverse planning yields the ultimate in FRT patient customization within the constraints and capabilities of each modern radiation delivery system.
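The iterative optimization loop at the heart of inverse planning can be sketched in miniature; the two “beam” profiles, the prescription, the least-squares objective, and the function name are all made-up illustrations of the idea, not a clinical algorithm:

```python
def optimize_weights(beam_doses, prescription, steps=2000, lr=0.01):
    # Gradient descent on nonnegative beam weights, minimizing the
    # squared error between the delivered and the prescribed dose.
    n_beams, n_vox = len(beam_doses), len(prescription)
    w = [1.0] * n_beams
    for _ in range(steps):
        # Total dose per voxel under the current weights.
        total = [sum(w[b] * beam_doses[b][v] for b in range(n_beams))
                 for v in range(n_vox)]
        for b in range(n_beams):
            # Gradient of sum((total - prescription)^2) w.r.t. weight b.
            grad = sum(2 * (total[v] - prescription[v]) * beam_doses[b][v]
                       for v in range(n_vox))
            w[b] = max(0.0, w[b] - lr * grad)  # weights cannot go negative
    return w

# Two toy beams over four voxels; prescribe 10 Gy to the middle voxels.
beams = [[1.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 1.0]]
prescription = [0.0, 10.0, 10.0, 0.0]
weights = optimize_weights(beams, prescription)
```

Each toy beam here must split the difference between a 0-Gy and a 10-Gy voxel, so both weights converge to the least-squares compromise of 5.0. Real inverse planning adds dose-volume constraints and per-structure penalties to the objective, but the refine-and-reassess loop is the same in spirit.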

Robotic Positioning and Automated Collimation

A larger number of smaller, more finely controlled beams makes possible finer gradations in dose-target conformality. Although important and useful for all therapeutic radiation delivery, conformality is most critical for SR, in which its lack has the greatest potential for toxicity, given the high individual doses involved. For example, the Gamma Knife (Elekta AB, Stockholm, Sweden) uses 192 to 201 separate, simultaneous, static cobalt-60 sources as well as multi-isocenter planning to achieve this goal. However, conventional LINACs have only a single radiation source mounted on a single-plane gantry, so serially adding beams can extend treatment times beyond the practical. One method to increase the number of beams is robotic automation of LINAC source movement, which marked a major advance by increasing the potential beam directions, or “nodes.” With the CyberKnife (Accuray, Inc., Sunnyvale, CA), a small uncollimated LINAC mounted on an industrial robotic arm uses up to 200 separate nodes, treated serially per session, to achieve one treatment volume, although because of time and logistic constraints the number of nodes actually used rarely exceeds 60.

A multileaf collimator (MLC) consists of shielding material shaped into shutters, slats, or vanes positioned between the x-ray source and the target in order to determine beam shape by subtracting portions of the field (Fig. 247-7).
