An implementation of the x-ray backscatter technique that is a modification of the one described in this blog has been developed by American Science and Engineering. Rather than being designed to detect threat objects on airline travelers, which is the focus of this narrative, this new implementation is scaled up to examine the contents of cargo containers; for example, shipping containers and trucks. The backscatter system is scaled up by using much higher energy x-rays, among other modifications, to enable the x-rays to penetrate much larger objects. Instead of conventional "diagnostic" x-ray machines, this device uses linear accelerators, the same technology used for radiation treatment of cancer, which produce x-rays of approximately 10-20 times higher energy than the airport threat-detection devices described in this blog. The picture above shows a truck that attempted to cross the border into the United States. A simple physical examination suggested that the truck contained bananas. However, the center of the cargo area concealed a compartment within which 20 or so illegal aliens were attempting to cross the border undetected. The x-ray backscatter image showed the human cargo very clearly.
As Northwest Flight 253 made its final approach to Detroit airport on Christmas Day 2009, a terrorist carrying an unusual form of plastic explosive almost succeeded in killing its 300 passengers and crew.
Because of that incident, U.S. and worldwide airport security efforts were rapidly ramped up. One approach was the installation of so-called body scanners at airports. What are these devices, how do they work, are they effective, and are they safe?
Principles of Operation of Airport Scanners
The body scanners deployed in airports in the U.S. and Europe generally use either x-ray transmission imaging or x-ray backscatter imaging. Over 1,500 of them are now deployed in U.S. airports, with the number rapidly growing.
X-Ray Transmission Imaging
X-ray transmission imaging is probably the most familiar form of x-ray imaging—widely used in medicine as well as for many security applications. It is the type of imaging that produces the familiar chest x-ray. X-ray transmission imaging is effective for detecting guns, bombs, and other threat objects that are made of dense metallic materials. However, plastic explosives, such as Semtex and C4 (frequently used by terrorists), or drugs, are poorly depicted by transmission imaging since they have similar x-ray properties to biological tissue and consequently are poorly visualized against the background image of the body.
Backscatter imaging involves sending a narrow x-ray beam into the body and detecting only those x-rays that scatter in the backward or sideways directions from tissues and threat objects residing within the superficial 1-2 inches of the body. The x-ray beam is rapidly scanned, and the position of the beam on the body at any moment in time is accurately known. The total scattered x-ray signal from the detectors at that same moment correlates with the backscattering properties of the tissues and/or other objects over an area equal to the diameter of the x-ray beam (i.e., approximately 1-2 mm) and a depth of up to about 2 inches.
Backscatter imaging has a number of advantages for security applications. Because backscattered x-rays need to pass through only a few inches of the body (an inch or two on the way in, then an inch or two on the way out), fewer x-rays are needed than in transmission imaging, where the x-rays must penetrate the entire thickness of the body before they can be detected. Consequently, the radiation dose to the body in backscatter imaging is more than 100x lower than in transmission imaging. In addition, because backscatter detectors can be made very much larger in capture area than transmission detectors (perhaps 1,000x larger), the necessary radiation dose to the body is further reduced, by approximately 1,000x.
Advantages of Backscatter Imaging for Threat Detection
For equal densities, plastics, plastic explosives, drugs, and the soft tissues of the body produce more backscattered x-ray signal than metals, so the former are more clearly depicted in backscatter imaging; the reverse is true in x-ray transmission imaging. For example: certain weapons, such as some models of the Austrian Glock handgun, are manufactured with a large amount of plastic material to reduce weight. Many models of Glock handguns have plastic hand grips, which are very difficult to see in transmission images but are clearly depicted in backscatter images. Similarly, plastic explosives, even when located among a complicated background of metallic objects, can be clearly seen in backscatter images but are essentially invisible in transmission images.
Possible Health Risks
What are the health risks from x-ray backscatter imaging body scanners? The effective radiation dose from one x-ray backscatter body scan is about 11 nano-Sievert (0.0011 mrem in old-fashioned units). This is equivalent to about 3-4 minutes of natural background dose; in other words, a traveler standing in line for a backscatter body scan would probably receive more radiation dose from natural background radiation than from the scan itself! It would therefore require approximately 455,000 body scans in one year for a traveler to reach the 5 milli-Sievert (500 mrem) annual radiation dose limit set by U.S. Federal government and State regulations for the general public.
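For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python, using only the dose figures quoted above (the variable names are mine, chosen for illustration):

```python
# Dose figures quoted in the text, expressed in Sieverts.
scan_dose_sv = 11e-9     # effective dose per backscatter scan (11 nano-Sievert)
annual_limit_sv = 5e-3   # annual public dose limit cited above (5 milli-Sievert)

# Number of scans needed in one year to reach the annual limit.
scans_to_limit = annual_limit_sv / scan_dose_sv
print(round(scans_to_limit))  # about 455,000 scans
```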
Most ionizing radiation generating technologies are designed with the 5 milli-Sievert (500 mrem) annual dose limit to the general public in mind. For example: patients passing through hospital corridors that happen to be adjacent to x-ray rooms, people living near the boundaries of nuclear power plants, strangers being in proximity to patients being treated with radioiodine for thyroid disease, etc., receive radiation doses that are limited under federal and state regulations to a maximum of 0.02 milli-Sievert (2 milli-rem) in any one hour. This number is mathematically linked to the maximum annual dose limits for the general public referred to above.
Put another way, based on the linear-no-threshold (LNT) model used widely for radiation risk assessment, the risk of getting a fatal cancer from one backscatter scan is approximately that of eventually dying from pollution by living in New York City for 1 minute, traveling 100 ft by car, or traveling 1 mile by jet. It would require about 90 backscatter scans to be equivalent in effective dose to one chest x-ray.
A frequent criticism of TSA’s characterization of the radiation doses delivered by x-ray backscatter scanners is that the skin receives much larger doses than the quoted values for total effective dose, since most of the radiation dose is concentrated in the skin. However, TSA’s characterization of the dose from an x-ray backscatter scan is based on the concept of effective dose, a construct used very frequently in epidemiological radiation studies. Under that concept, the risk is calculated from the partial doses received by all organs (including skin), and the corresponding “effective dose”, in a form suited to LNT risk calculations, is then computed. The definition of effective dose therefore already accounts for the variable doses delivered to different tissues and organs, as well as their varying radiation sensitivities.
Non-X-Ray Threat Detection Devices: T-Wave Scanners
An alternate form of threat detection that has recently been developed for use in airports is called “millimeter wave” or “T-wave” body scanning. Instead of using x-rays, this technology uses extremely high-frequency radio waves in the “terahertz” range—sometimes called T-waves—that are beamed into the traveler’s body and are then differentially reflected by any additional materials that may be concealed in or on it. Although T-wave scanners produce no radiation exposure whatsoever, there are studies showing that T-wave scanners are substantially less accurate in detecting threat objects than x-ray backscatter scanners. But from a public perception perspective, T-wave scanners are an important advance in threat detection technology because of the lack of radiation dose to the traveler.
What about the issue of privacy related to body scanning for threat detection? Indeed, in addition to producing clear images of threat objects, x-ray backscatter scanning is able to provide clear images of the surface of the traveler’s body, together with quite clear depiction of his or her “private parts”. TSA claims that there are various solutions that can “depersonalize” such images. For example: images can be automatically blurred prior to display (either the private parts or the facial features of the traveler can be blurred), and TSA staff who examine these images are located in a separate room so that they see only the images and not the individuals being scanned. There are also software applications that automatically search for and flag threat objects in an x-ray backscatter image and only display the full image to an operator if a threat object is identified. Despite these privacy maneuvers, TSA has been sued for violation of the 4th amendment, which has resulted in many backscatter scanners being removed from airports and replaced with T-wave scanners. This is unfortunate, since T-wave scanning provides much less protection against terrorist threats.
Backscatter x-ray imaging is a new technology that is aimed at detecting threat objects that would mostly not be visible using the more conventional x-ray transmission imaging approach, such as plastic explosives, drugs, and the non-metallic components of certain models of handguns.
The dose from x-ray backscatter scanning is extremely low; in fact it is virtually negligible. Based on the linear-no-threshold (LNT) model, used widely for radiation risk assessment, the risk of getting a fatal cancer from one backscatter scan is approximately that of eventually dying from pollution by living in New York City for 1 minute, traveling 100 ft by car, or traveling 1 mile by jet. It would require about 90 backscatter scans to be equivalent in effective dose to one chest x-ray. Even if someone travels frequently and receives backscatter scans during every security check, such very low radiation doses spread out over time do not have a linearly cumulative effect on the body, because the body quickly repairs minor x-ray damage when the dose is protracted in time. Backscatter scanning in tandem with x-ray transmission scanning (often combined in the same apparatus) appears to be the most significant development for reducing the terrorist threat at airports. However, privacy concerns and lawsuits based on 4th amendment issues have resulted in the deactivation of many backscatter scanners in the U.S. and Europe.
T-wave body scanners are an alternative technology recently developed for threat detection. T-wave scanners do not use ionizing radiation (such as x-rays), and produce no radiation dose to the subject. However, at the present time, although embraced by the public because of their total safety and lack of privacy violation (due to the poor quality of the images), T-wave scanners do not appear to have the necessary accuracy or sensitivity for adequate threat detection.
TSA has implemented various solutions to depersonalize x-ray backscatter images, which, in addition to depicting threat objects, also clearly depict the “private parts” of a subject.
Kerala beach, India, which has one of the highest terrestrial background dose rates in the world due to the abundance of thorium-containing monazite sand. Dose rates are 7 mSv/year (700 mrem/year) compared to average terrestrial dose rates in the U.S. of 0.3 mSv/year (30 mrem/year). Despite the Kerala terrestrial dose rate being approximately 20x higher than in other areas of India, epidemiological studies have not detected elevated cancer rates in residents of Kerala compared to the rest of India.
It has been estimated that about 10% of genetic mutations that occurred during the evolution of human life have been due to the influence of radiation. We are all exposed to natural background radiation on a daily basis: cosmic rays from outer space that interact with our atmosphere and shower us with various secondary radiations; gamma-rays produced by radioisotopes naturally present in the earth; radioactive radon gas that oozes out of the ground and enters our lungs; and two or three radioisotopes that reside naturally in our bodies. Since evolution is driven by genetic mutations, natural background radiation would not appear to be that bad for the development of the human race. However, genetic mutations are a two-edged sword: they help drive evolution according to the processes of natural selection by enhancing the selective survival of “desirable” genes, but they can also cause illnesses such as cancer. This article will consider the latter of these effects of radiation, i.e., those that are potentially detrimental to human health even though they may sometimes be unavoidable.
Radiations we are exposed to
Let’s review in more detail the physical nature of the radiations we are exposed to. These consist of two major categories: electromagnetic radiations, and particulate radiations. Electromagnetic radiations include x-rays and gamma-rays, while particulate radiations include alpha-particles, protons, neutrons, and electrons.
In addition to natural background radiation, referred to earlier, we are also exposed to man-made diagnostic radiations and therapeutic radiations, used widely in the medical area.
Radiations used for diagnosis of disease
Diagnostic x-ray machines take two-dimensional x-ray images of your chest and of many other body parts, whereas highly complex and sophisticated x-ray machines such as CT scanners produce x-ray images of your body in the form of thin (1-4 mm) slices, eliminating the problems of tissue overlap that inhibit accurate diagnosis in the plain-and-simple two-dimensional type of x-ray imaging.
Radiations used for therapeutic treatment of disease
Linear accelerators, descendants of radiation-generating equipment used for many decades in physics research, produce very high-energy x-ray and electron beams that are used to treat cancer. Electron beams lack the high “aiming” accuracy possessed by x-rays and gamma-rays, but they weaken and disappear very rapidly at depths beyond the boundaries of relatively shallow tumors, thereby protecting radio-sensitive tissues or organs that may be located downstream of the tumor.
Gamma-rays, produced by radioisotopes encapsulated in rice-sized metallic “seeds” that are inserted into some types of tumor, are often used to treat cancers such as prostate or breast cancer from inside the body, where tumors may be surrounded by particularly radiation-sensitive tissues or organs. A machine called the Gamma-knife uses gamma rays to treat primary brain tumors from outside the body, as well as tumors that have spread (metastasized) to the brain from cancers at other anatomical sites.
The use of proton beams is a relatively new development in radiation therapy. There are currently about 35 sites in the U.S. that offer this form of radiation therapy, a small number largely due to the extraordinarily high cost of such facilities: typically $100m-$150m (although more recently developed “single-room” proton facilities are less costly). At the present time, proton beams are most commonly used for treating prostate cancer in adults and brain and spinal cord tumors in children, although other types of cancer are treated as well. Some cancers are difficult to treat with x-rays or gamma-rays because tumors may be surrounded by especially radiosensitive tissues or organs, limiting the amount of radiation that can be delivered to the tumors themselves. To a large extent, proton beams sidestep this problem. Unlike x-rays and gamma-rays, which penetrate the entire body and in doing so deliver potentially damaging radiation to healthy tissues downstream of the tumor location, proton beams abruptly stop when they reach their target, completely avoiding the downstream radiation exposure problem. Electron beams also stop after reaching a specific depth—although not nearly as abruptly as proton beams—but they also spread out laterally, whereas proton beams have exquisite aiming accuracy, both in depth and in lack of lateral spread, enormously reducing radiation exposure of healthy tissues.
About two-thirds of the naturally occurring background radiation dose we are exposed to on a daily basis consists of alpha particles. Naturally occurring alpha particles are produced by radon gas (the decay product, or “daughter”, of radium, which resides in the superficial layers of our planet) that we absorb into our lungs with every breath we take. Radon gas, which is highly radioactive, oozes out of the ground and building materials, mixing with the surrounding air, so breathing it into our lungs is unavoidable. Alpha particles have a very short range in tissue—just a few cell diameters—so when inside the lungs they dump all their energy in the very thin layer of epithelial cells lining the lungs. The large mass, high electrical charge, exceptional ability to produce irreparable biological damage, and short range of alpha particles produce more “bang for the buck” in terms of radiation damage than any other type of radiation.
How Can Radiation Cause Cancer?
Now that we’ve reviewed the nature of the radiations that we are exposed to, let’s think about how these radiations could cause cancer. Cancer, as far as we know, is the result of genetic mis-programming caused by mutations in our DNA. Such harmful mutations can occur naturally due to random processes, or they can be caused by external environmental factors such as chemicals or radiation.
Epidemiological Evidence for Radiation Risk to Humans
The evidence that we have on the relationship between human radiation exposure and cancer comes from man-made radiation sources to which humans have been inadvertently exposed. These include the Hiroshima and Nagasaki atomic bombs, irradiation of the spine that many decades ago was a standard treatment for an inflammatory disease called ankylosing spondylitis, and irradiation of the female breasts in tuberculosis sanatoria, mainly in Massachusetts and Canada, where a standard therapeutic approach was to deflate and re-inflate the lungs under x-ray fluoroscopic guidance. The theory at the time was that this maneuver would deprive the tuberculosis-causing microorganism of oxygen; but today we know that the responsible microorganism is anaerobic, i.e., does not require oxygen to survive and multiply. The x-ray fluoroscopic equipment in those early days produced hundreds of times more radiation dose to patients than modern fluoroscopic equipment, and because these patients underwent lung deflation and re-inflation under fluoroscopic guidance on a monthly basis, massive amounts of radiation dose were delivered to the breasts, which in turn resulted in a measurably increased rate of breast cancer.
Derivation of Radiation Risk Models from Historical Radiation Effects Data
Statisticians working for U.S. and European organizations such as the NCRP, ICRP, ICRU, and the ABCC got hold of the above-mentioned data and drew a straight line, originating at zero radiation (where, presumably, zero additional cancers were caused) and passing through the average of the very scattered data points relating radiation dose to the incidence of cancer. This straight line is referred to as the “linear no-threshold radiation effects model”, or the “LNT model”. Since there are no actual data points at the relatively low radiation doses that are characteristic of natural background radiation and diagnostic x-ray doses, the LNT model is only a theoretical predictor of what additional cancer cases might be expected at these lower radiation levels. The gradient of this LNT line provides the only relationship we have linking radiation dose to cancer incidence and cancer death. For example, the number on the right in the table below shows the slope of the LNT line relating additional (excess) cancer deaths to radiation dose, in units of lifetime excess cancer deaths per 10,000 members of the general public exposed one time to 1 rem (10 milli-Sievert in modern units) of radiation; we will not differentiate between the units of rem and rad or Gray and Sievert for the purposes of this discussion.
*Exposure received only after age 18 years. Data are weighted averages; i.e., the older you are at the time of the single exposure, the lower your risk, since you have fewer years left for the effect to express itself. The actual risk values change a little from report to report but are basically as shown.
This means that if 10,000 members of the general public were exposed to 1 rem of radiation dose, then within the remaining lifetimes of those individuals, 5 would contract fatal cancers statistically caused by that 1 rem of radiation dose. Now, one can argue that those 5 cases of fatal cancer would have been equally likely to have been caused by natural background radiation or by non-radiation carcinogens such as chemicals. This is a totally logical assumption, and is the reason why it is very difficult to establish in tort law that a certain radiation dose caused a specific individual to die of cancer.
However, one can advance an epidemiological argument that each of the people exposed to 1 rem of radiation would have a probability of contracting a fatal cancer from that 1 rem of radiation dose of (5 / 10,000) x 100, or 0.05%. Now, the natural fatal cancer rate among the human population in the U.S. and Europe is approximately 20%, which means that 1 rem of additional radiation dose raises that probability to 20 + 0.05, or 20.05%. Expressed from that perspective, 1 rem hardly seems like a dose to be enormously concerned about.
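The same arithmetic, written out as a short Python sketch (the LNT slope of 5 excess fatal cancers per 10,000 people per rem is the value quoted above; the variable names are mine):

```python
# LNT slope from the table: 5 excess fatal cancers per 10,000 people per rem.
slope = 5
population = 10_000
dose_rem = 1

excess_deaths = slope * dose_rem                        # 5 excess deaths in the group
individual_risk_pct = excess_deaths / population * 100  # 0.05% per person

baseline_pct = 20.0                               # natural fatal cancer rate
total_pct = baseline_pct + individual_risk_pct    # 20.05%
print(individual_risk_pct, total_pct)
```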
Computation of Risk Estimates Using the Standard LNT Radiation Risk Model
The final piece of the puzzle we need to consider is the actual radiation doses produced by various sources of radiation that we can plug into the LNT equation and assess the corresponding risk.
Consider two illustrative cases
1) The doses from diagnostic radiology procedures that use x-rays range very roughly from 1 milli-rem (for a standard chest x-ray) to 1,000 milli-rem (for a typical CT scan); or, in modern units, 10 micro-Sievert to 10 milli-Sievert. Using the LNT gradient parameter “5.0” from the table above, for 10,000 exposed people this dose range corresponds to between (5 x 0.001) and (5 x 1), or 0.005 to 5 additional fatal cancers per x-ray procedure per 10,000 exposed people. Expressed as a percentage probability for a single individual, this corresponds to (0.005 x 100) / 10,000 to (5 x 100) / 10,000, i.e., 0.00005% to 0.05%.
2) If the radiation dose received by 10,000 members of the general public were due only to one year’s worth of natural background radiation (which averages, across the U.S., roughly 300 mrem/year, or 3 milli-Sievert/year), the number of additional fatal cancers caused in that population of 10,000 people would be 5 x 0.300, or 1.5 additional cancers for each year of exposure; equivalently, (5 x 0.3 x 100) / 10,000 = 0.015% per person per year. Since we are continuously exposed to background radiation, a 50 year-old individual exposed to natural background radiation from birth would have a 50 x 0.015% = 0.75% likelihood of contracting a fatal cancer from natural background radiation.
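Both illustrative cases can be reproduced with a few lines of Python. The helper function below is simply a restatement of the LNT slope arithmetic from the text, not part of any standard library:

```python
# LNT slope: 5 excess fatal cancers per 10,000 exposed people per rem.
SLOPE = 5

def excess_cancers(dose_rem, population=10_000):
    """Excess fatal cancers predicted by the LNT model for `population` people."""
    return SLOPE * dose_rem * population / 10_000

# Case 1: diagnostic procedures, 1 mrem (chest x-ray) to 1,000 mrem (CT scan).
print(excess_cancers(0.001), excess_cancers(1.0))  # roughly 0.005 to 5 per 10,000

# Case 2: natural background, 300 mrem/year accumulated over 50 years.
per_year_pct = excess_cancers(0.300) / 10_000 * 100  # 0.015% per person per year
print(50 * per_year_pct)                             # roughly 0.75% by age 50
```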
One can conclude, therefore, that for an individual person the risk of fatal cancer induction due to radiation, either from natural background or from diagnostic x-ray procedures, is still very small compared to the baseline fatal cancer incidence rate of 20%.
Some Limitations of the LNT Cancer Risk Model
Despite what has been said, a number of mitigating factors need to be stressed.
1) The calculations presented here are based on the LNT cancer risk model. Even using the most extensive human radiation effects data we have at the present time, there are very large statistical uncertainties associated with such calculations—often larger than ±100%; i.e., the probability of 0.05% cancer incidence due to a single CT scan, when expressed as a statistical range, would be zero to 0.1%.
2) The body has biological repair mechanisms that come into play when small-to-moderate levels of harmful DNA damage are produced. Therefore, the LNT risk model overestimates the fatal cancer probability at these low dose levels by quite a large amount, and the overestimate becomes larger and larger as the radiation dose level decreases and the repair mechanisms can do a more effective job. In fact, there is experimental evidence that at the radiation dose levels we are discussing here (i.e., 0 - 10 rem, or 0 - 0.1 Sievert), radiation and other carcinogens can sometimes cause a slight decrease in the fatal cancer incidence. This mechanism, which is not yet fully understood, is called “radiation hormesis”. Radiation hormesis is still in the closet among many practitioners of radiobiology, but more and more evidence to support it is gradually emerging, to the extent that sometime in the near future I believe the LNT model will be abandoned for the typical diagnostic radiology dose range of 0 - 10 rem (0 - 0.1 Sievert) in favor of the “hormetic dual-probability model”, which predicts a negative fatal cancer risk within this dose range. It has been a standard medical practice in Europe for centuries to expose patients to high concentrations of radon gas present in certain geological regions with the intent of strengthening their immune systems and hence making them better able to combat various diseases they may have.
3) The use of the LNT risk model has further limitations. A straight line through a very scattered array of data points often masks large statistical uncertainties in the slope of that line. Such uncertainties may hide fine structure in the radiation dose–fatal cancer incidence relationship that is not clearly evident. One such proven departure from the LNT relationship is the existence of a threshold for fatal cancer risk. In this modified model, a certain amount of radiation exposure can be tolerated without any detectable rise in the fatal cancer rate. Only after the radiation dose level exceeds this so-called dose threshold does the fatal cancer rate start to climb in the more conventional straight-line fashion. This phenomenon of dose/effect threshold has been studied extensively, and despite the scattered nature of the experimental data, some cancers have been found to conform better to the threshold version of the LNT model than to the non-threshold linear version.
It is sometimes instructive to compare the lifetime risk of fatal cancer resulting from radiation exposure with the risks of death due to non-radiation-related factors. The table below shows such a comparison.
Approximate Lifetime Risk of Death Due to Receiving 1 Typical CT Scan vs. Various Non-Radiation Risk Factors.
1) The majority of radiation dose received by the general population is due to diagnostic x-ray procedures and naturally occurring background radiation.
2) At the upper end of the dose range of single diagnostic radiology procedures (for example, a single abdominal or pelvic CT scan), the probability of induction of a fatal cancer is very roughly 0.05%.
3) With exposure only to natural background radiation, a 50 year-old individual (having accumulated approximately 300 mrem/year of effective dose during each year of life) would have very roughly a 0.75% probability of contracting a fatal radiation-induced cancer.
4) The baseline lifetime fatal cancer risk in the population due to all causes is approximately 20%.
5) Our simple calculations are based on the LNT cancer risk model. Although this model pretty much utilizes the best radiation effects data we have at the present time, it has very large associated statistical uncertainties—often as large as ±100%. Additionally, our calculations ignore the possible presence of a radiation effect threshold or of the protective effects of biological DNA repair mechanisms. The collective impact of these omissions in the basic LNT radiation risk model probably results in a large overestimate of the fatal cancer risk, especially at low diagnostic x-ray dose levels.
6) There is significant experimental evidence that at the radiation dose levels we are discussing here (i.e., below 10 rem, or 100 mSv), radiation can in fact cause a slight decrease in the cancer rate. This is termed radiation hormesis, and the associated dual-probability hormetic model that uses this concept will most likely soon replace the “conservative” LNT model on which estimates of cancer-inducing radiation effects in humans are traditionally based.
Pons and Fleischmann's simple fusion cell experiment for observing cold fusion. The palladium electrode absorbs the heavy hydrogen (deuterium) nuclei from the water bath and concentrates them so that their packing density in the palladium is enormously increased. This, together with the well-known "tunneling" effect, was believed to have produced fusion of the deuterium nuclei and raised the temperature of the water bath consistent with the existence of heavy-hydrogen fusion reactions.
In 1989, Dr. Stanley Pons (on the left in the photo above) and Dr. Martin Fleischmann (right), both at the University of Utah, shook the scientific world with the announcement that they had achieved cold fusion in their laboratory. At the time, Dr. Fleischmann was one of the world’s leading electrochemists. Cold fusion (if it in fact happens) occurs at “room” temperatures, as opposed to more classic, human-induced nuclear fusion that requires plasma temperatures of hundreds of millions of degrees Celsius, and presently is limited to occurring in special fusion devices such as TOKAMAKS or within fusion-based nuclear weapons.
Nuclear fusion is the name given to a nuclear process in which light nuclei, such as deuterium (a hydrogen nucleus with twice the mass of normal hydrogen, consisting of a neutron and a proton), are forced together to create a single nucleus of a heavier element. In this case the heavier element is helium, consisting of two protons and two neutrons. Such fusion of light elements (those below a mass number of about 56, the mass number of iron-56) causes a spontaneous release of net binding energy: specifically, the binding energy of the single, heavier post-fusion nucleus minus the sum of the binding energies of the two pre-fusion nuclei.
Fusion—whether cold (i.e., at room temperature) or, more conventionally, hot (i.e., at temperatures of hundreds of millions of degrees C)—is only useful in terms of energy generation if it occurs between very light elements, i.e., those of mass numbers generally less than 10. As the mass number of the fusing elements increases from 1 (hydrogen) toward 56 (iron), energy-releasing fusion reactions can still take place, but above mass numbers of around 10 their energy yield is greatly decreased. Fusion of two nuclei, each above a mass number of 56, produces no net energy, so it is not useful for either military or civilian applications. Heavy nuclei (most commonly uranium-235 or plutonium-239) can only produce energy when they are broken apart—or fissioned.
Nuclear Fusion: Military and Civilian Uses
Normally, to initiate a nuclear fusion reaction, two light nuclei must be forced together under conditions of extremely high mechanical pressure or extremely high temperature; in the absence of these external factors, the mutual repulsion of the nuclei, due to their positively charged constituent protons, prevents them from approaching each other closely enough to fuse.
Military Uses of Fusion: Thermonuclear Bomb
In the thermonuclear, or hydrogen, bomb, deuterium fuses with tritium (hydrogen with one proton and two neutrons), and the difference between the binding energy of the resulting helium-4 nucleus and the sum of the binding energies of the two fusing nuclei is released as 17.6 MeV of energy per reaction, carried off as the kinetic energy of the helium-4 nucleus and an energetic neutron.
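The 17.6 MeV figure can be checked directly from the mass defect of the reaction, using published atomic masses in unified mass units (1 u is equivalent to 931.494 MeV):

```python
# Q-value of D + T -> He-4 + n from the mass defect.
# Atomic masses in u (standard tabulated values).
U_TO_MEV = 931.494

m_D, m_T = 2.014102, 3.016049      # deuterium, tritium
m_He4, m_n = 4.002603, 1.008665    # helium-4, neutron

q = (m_D + m_T - m_He4 - m_n) * U_TO_MEV
print(f"D-T fusion releases about {q:.1f} MeV per reaction")
```

The mass of the products is slightly less than that of the reactants; that missing mass, via E = mc², is the 17.6 MeV released.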
Why are fusion bombs so much preferred to uranium or plutonium fission bombs? Expressed on the basis of bang-per-unit-mass-of-explosive-material, a fusion reaction produces roughly four times more “bang” than a fission reaction (about 3.5 MeV per nucleon of fuel, versus roughly 0.85 MeV per nucleon for fission).
Civilian Uses of Fusion: Fusion Reactor
In a fusion reactor (a not-yet-functioning technology), deuterium gas is initially electrically heated to hundreds of millions of degrees C, becoming a “plasma”, a mixture of nuclei and completely detached electrons. A powerful magnetic field is applied to the plasma to confine it within a restricted volume, where the thermal motion of the deuterium nuclei, due to their high temperature, supplies sufficient energy to cause some of them to overcome their mutual electrical repulsion and fuse together. After the initial plasma has been created, or “ignited”, some of the energy released by the fusion reactions is used to sustain the high temperature of the deuterium plasma and to power the magnetic field. The remainder of the energy released is used to generate heat and ultimately steam and electricity.
Nuclear Fusion Reactions
Deuterium, sometimes called heavy hydrogen, is symbolized as H2 or D. Each nucleus consists of one neutron (n) and one proton (p). He4* is the symbol for helium-4 (sometimes referred to as an alpha particle), and the asterisk indicates that it is in an energetically excited state.
Deuterium-deuterium (D-D) fusion is a two‐step process and can be summarized as follows:
Step 1: D-D Fusion event occurs
D + D > He4* (the He4* is created in a highly excited state)
Step 2: He4* immediately de-excites via three possible pathways
1) He4* > n + He3 + 3.27 MeV released as kinetic energy of n and He3 [50% likely]
2) He4* > p + H3 + 4.02 MeV released as kinetic energy of p and H3 [50% likely]
3) He4* > He4 + gamma ray, with about 23.8 MeV released as a gamma ray [0.0001% likely]
Note: no neutrons or tritium are produced in pathway 3 (see following comments)
So, the average energy released per fusion event is [50% x 3.27] + [50% x 4.02] = 3.64 MeV (pathway 3’s contribution is negligible)
But only pathways 1) and 2) make a significant contribution to D-D fusion energy release, since pathway 3) occurs only about once in every 1,000,000 fusion events.
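The bookkeeping above can be verified in a couple of lines, weighting the two dominant pathways by their roughly equal likelihoods (pathway 3 is so rare, about 1 in 1,000,000, that it contributes nothing measurable to the average):

```python
# Average energy per D-D fusion event, weighting pathways 1) and 2)
# by their ~50/50 likelihoods. Energies in MeV, from the reactions above.
p1_energy = 3.27   # n + He-3 pathway
p2_energy = 4.02   # p + H-3 pathway

avg = 0.5 * p1_energy + 0.5 * p2_energy
print(f"average energy per D-D fusion event: {avg:.2f} MeV")
```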
Controversies Around Pons & Fleischmann’s Cold Fusion Experiment
Pons and Fleischmann claimed that they had achieved cold fusion because of an observed rise in temperature of the heavy water bath used in their experiment (see next section for a brief description of the experimental setup). However, skeptics of these experiments have pointed out that if cold fusion had occurred, in addition to a rise in temperature of the heavy water bath, two other observations were necessary:
First, through pathway 1) described above, the fusion reaction produces energetic neutrons. After the neutrons have been moderated (i.e., slowed down) by the small number of light hydrogen nuclei remaining in the heavy water bath, they are strongly absorbed by those same light hydrogen nuclei in nuclear reactions that produce 2.2 MeV gamma-rays. But nowhere close to the required number of 2.2 MeV gamma-rays has ever been measured in any cold fusion experiment.
Second, through pathway 2) described above, the fusion reaction produces H3, or tritium nuclei. Now tritium is radioactive so its presence should be very easily measurable using fairly basic nuclear instrumentation. But once again, no tritium has ever been measured in the heavy water bath in any cold fusion experiment.
Supporters of cold fusion, however, have counter-arguments up their sleeve: Note, they point out, that if pathway 3) were the dominant one, significant amounts of neutrons or tritium would not be produced. But listen here, say the skeptics, the likelihood of pathway 3) following a D-D fusion event is only 1 in 1,000,000; the other two pathways are a million times more likely to occur. Aha, retort the cold fusion supporters, but what if, by some not yet understood mechanism unique to the room-temperature fusion conditions, that third pathway did in fact become the dominant one? That would explain why very little tritium and very few 2.2 MeV gamma rays were observed.
The logic is sound, except there is no evidence to support pathway 3) being more likely in cold vs. hot fusion. As a peripheral issue, if the temperature rise of the heavy water bath in Pons and Fleischmann’s experiment is used to compute the number of neutrons that should have been produced if cold fusion were responsible for their production, Pons and Fleischmann would be “ex-Pons and Fleischmann”, since the neutrons produced would have killed them.
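The skeptics’ “ex-Pons and Fleischmann” point lends itself to a back-of-envelope check. Assuming, purely for illustration, that one watt of excess heat came from D-D fusion at the average of about 3.64 MeV per event, with half of all events following the neutron-producing pathway:

```python
# Back-of-envelope: the neutron emission rate implied by excess heat,
# if D-D fusion were the source. The 1 W excess-heat figure is an
# illustrative assumption, not a number from the experiment.
MEV_TO_J = 1.602e-13

excess_power_W = 1.0                  # assumed excess heat
e_per_fusion_J = 3.64 * MEV_TO_J      # average energy per D-D event
fusion_rate = excess_power_W / e_per_fusion_J
neutron_rate = 0.5 * fusion_rate      # pathway 1) occurs ~50% of the time

print(f"fusions/s: {fusion_rate:.2e}, neutrons/s: {neutron_rate:.2e}")
```

On these assumptions, nearly a trillion neutrons per second would stream out of the apparatus, an intensely dangerous flux, and certainly not one that could go unnoticed by nearby detectors (or experimenters).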
Experimental Setup of a Cold Fusion Experiment
The experimental setup for observing cold fusion is remarkably straightforward. One starts with a bath filled with heavy water. One places two metallic electrodes in the bath, a cathode made of palladium and an anode made of platinum. The two electrodes are connected to a DC voltage source, the positive side of the circuit to the platinum electrode. Finally, one inserts a thermometer into the water bath to measure any rise in temperature. The experimental setup is shown in the second picture above.
Palladium has the property of absorbing hydrogen (and deuterium) atoms from water. Once absorbed into the palladium, the atoms become much more densely packed than they would be in water or other hydrogen-bearing substances. The likelihood that two nuclei combine is determined not only by their mutual proximity (enhanced by the palladium), but also by the “tunneling effect”. Tunneling applies in situations where, for various physical reasons, two particles are classically prohibited from combining; such as, for example, two positively charged deuterium nuclei held apart by their electrical repulsion. But, due to the quantum mechanical nature of nuclear processes, there is some finite likelihood, albeit exceedingly small, that the particles will in fact combine. This effect is referred to as tunneling. Tunneling is also the mechanism by which alpha particles are emitted from some very heavy nuclei, such as uranium-235 or radon-222, despite such emission being classically forbidden.
Within the palladium electrode, even if the tunneling likelihood for D-D fusion is extremely small, the sheer enormous number of deuterium atoms present means that the total number of D-D reactions might, in absolute terms, be substantial and could theoretically cause a measurable rise in the temperature of the water bath. That is the explanation offered by many supporters of cold fusion for how D-D fusion could occur at room temperature. And if, under those novel and poorly understood experimental conditions, pathway 3) became the dominant one, only very small amounts of 2.2 MeV gamma rays or tritium would be produced.
Even if a rise in the temperature of the water bath were accepted as the sole marker of successful cold fusion, that phenomenon itself has never been reliably demonstrated.
Following the cold fusion announcement, Dr. Pons became the chairman of the chemistry department at the University of Utah. Dr. Fleischmann passed away in 2012.
Candace Gilet has suggested some rules and principles for the pursuit of honest science, which are listed below:
1) The scientific community is responsible for checking the work of community members. Through the scrutiny of this community, science corrects itself.
2) Scientists actively seek evidence to test their ideas. They strive to describe and perform the tests that would prove their ideas wrong and/or allow others to do so.
3) Scientists take into account all the available evidence when deciding whether to accept an idea or not, even if that means giving up a cherished hypothesis.
4) Science relies on a balance between skepticism and openness to new ideas.
5) Scientists often verify surprising results by attempting to replicate the experiment.
6) In science, discoveries and ideas must be verified with multiple lines of evidence.
7) Data require analysis and interpretation. Different scientists may interpret the same data in different ways.
Unfortunately, Pons and Fleischmann eschewed many of the above before announcing their discovery of cold fusion.
Despite everything said, there is a renewed research effort at a number of top universities, and some positive results are being reported. The current experimental data on cold fusion remain, however, "in the closet", a condition shared by a few other areas of research, such as radiation hormesis: the finding, in a number of well designed experiments, that at low radiation dose levels (typically those encountered in diagnostic x-ray imaging) cancer incidence in fact decreases.
It would indeed be refreshing if the doors of all scientific "closets" could be thrown open, and the controversial data that emerge be analyzed objectively by respected scientists. Science would progress more rapidly to the benefit of us all.
If, during such a 21st-century scientific enlightenment, cold fusion were concluded to be a reality, the implications for mankind would be staggering. As mentioned in the introduction, using the top 6 inches of water in Lake Superior as hydrogen fuel for cold fusion, the energy needs of our entire planet could be met for 100 years, and this energy would be much cleaner than what is currently available from coal, natural gas, or nuclear fission generation.
The physicist Carl Sagan said it well: “Extraordinary claims require extraordinary evidence.” So far, no compelling evidence of the existence of cold fusion has emerged, but we should all keep open minds.
Some time ago, a number of stories surfaced in the media about the U.S. military’s use of depleted uranium as a component of defensive armor plating and armor piercing tank shells, and the potential health risks to civilian and military populations from the use of this technology.
What is Depleted Uranium?
Depleted uranium is natural uranium, as it is dug out of the ground, but with most of the radioactive uranium-235 isotope removed. The production process is, in fact, exactly the opposite of enriching natural uranium for use as fuel in nuclear reactors or as weapons-grade uranium for military applications. Natural uranium is about 99.3% uranium-238 and 0.7% uranium-235; there are a few other isotopes of uranium present in very small quantities, but these are not relevant to the present discussion. The depletion process further reduces the small amount of uranium-235 present, down to a level of around 0.2%. The resulting material is most commonly called depleted uranium, but is also referred to as DU, Q-metal, or D-38.
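The relationship between natural feed, enriched product, and depleted tails follows from a simple uranium-235 mass balance. In the sketch below, the 0.72% and 0.2% assays come from the figures above, while the 4.5% product assay is an illustrative reactor-fuel value, not a number from this article:

```python
# Uranium-235 mass balance for enrichment: the feed splits into
# enriched product plus depleted tails, and the U-235 must balance.
# Assays are weight fractions; product assay is an assumed example.
x_feed, x_tails = 0.0072, 0.002    # natural uranium, depleted tails
x_product = 0.045                  # assumed reactor-grade enrichment

product_kg = 1.0
# From F = P + T and F*x_feed = P*x_product + T*x_tails:
feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
tails_kg = feed_kg - product_kg
print(f"1 kg of {x_product:.1%} fuel leaves about {tails_kg:.1f} kg of DU")
```

The point of the arithmetic: every kilogram of enriched fuel leaves several kilograms of depleted uranium behind, which is why DU is so plentiful and cheap.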
Uses of Depleted Uranium
DU is used in a number of civilian applications. Alloyed with other metals, such as tungsten or molybdenum, it has an extremely high physical density, nearly double that of lead, as well as much greater hardness, an almost complete lack of “flow” when installed without external support (unlike lead), and exceptional x- and gamma-ray shielding properties. Civilian uses of DU include counterweights in aircraft and ships’ keels, internal radiation shielding for radiation therapy accelerators, shielding for high-energy industrial x-ray radiography equipment, and shielding of containers used to transport radioactive materials.
Military uses of DU alloys, often called Staballoys, include armor-piercing projectiles and defensive armor plating. DU has the property of being mechanically self-sharpening and chemically self-incendiary when experiencing high mechanical impact pressures, making it a very useful component of shell and missile design. Some late models of the U.S. Abrams tank built after 1998 have DU reinforcement as part of the armor plating in the front of the hull and the front of the turret. DU is also used in some thermonuclear weapons, where the intense flux of fast neutrons from the fusion stage causes the normally non-fissile uranium-238 to fission, substantially increasing the weapon’s destructive yield.
Health Effects of Depleted Uranium
The use of DU for non-thermonuclear-weapon military applications is controversial because of concerns about its potential long-term health effects on civilian populations and military personnel. The kidneys, brain, liver, heart, and numerous other organs can be damaged by internal exposure to DU aerosol, because in addition to being very slightly radioactive due to the small residual amount of uranium-235 present, DU is chemically extremely toxic. DU aerosol, the airborne powder or dust produced following impact and combustion of DU-enhanced munitions, can contaminate wide areas around a target impact site and then be inhaled by civilians and military personnel. In 2003, during a three-week period of the war in Iraq, 1,000-2,000 metric tons of DU munitions were used by U.S. forces, mostly in urban environments. It should be emphasized that DU used for civilian applications poses no known health risks.
Controversies Surrounding Toxicity of Depleted Uranium
The toxicity of DU is, nevertheless, still a point of controversy. Studies using cultured cells and laboratory rodents indicate the possibility of increased rates of leukemia, as well as genetic, reproductive, and neurological disorders from chronic DU exposure. A 2005 epidemiology review concluded, “The human epidemiological evidence is consistent with increased risk of birth defects in offspring of persons exposed to DU”.
In total contrast, however, the World Health Organization has stated that, “No consistent risk of reproductive, developmental, or carcinogenic effects have been reported in humans [following exposure to DU]”. But many scientists have called into question the objectivity of the WHO report. It should be mentioned again that in civilian uses of DU, no known risks are present.
Possible Alternatives to Depleted Uranium for Military Applications
Even if a replacement could be found for DU in military applications, health concerns would not be alleviated, since possible replacement materials for DU, such as tungsten-cobalt or tungsten-nickel-cobalt alloys, possess extremely carcinogenic properties themselves; far in excess, in fact, of those claimed for DU. So there appears to be little practical alternative to the continued military use of DU.
The European Parliament has repeatedly passed resolutions requesting an immediate moratorium on the further use of DU munitions, but France and Britain, the only EU states that are permanent members of the UN Security Council, have consistently rejected calls for such a ban, maintaining that its use continues to be legal and that the putative health risks in humans are “still unsubstantiated”. Unfortunately, the use of DU seems to be a critically important component of U.S. military technology, and it is unlikely that its use will be curtailed in the foreseeable future. We can only hope that a movement toward global peace will eventually retire the use of depleted uranium in military applications.
Since the discovery of x-rays in 1895 (and of gamma rays a few years later), x-rays and gamma rays have been the cornerstones of diagnostic radiology and radiation oncology. Within the last decade, however, new approaches to radiation oncology with x-rays and gamma rays have been developed that go even further toward increasing the radiation dose delivered to the tumor while reducing the radiation dose delivered to surrounding healthy tissues and organs, which is the primary technical goal of radiotherapy.
Intensity Modulated Radiotherapy (IMRT)
An important development in radiation oncology, introduced about a decade ago, is called Intensity Modulated Radiotherapy, or IMRT. With this technology, the x-ray beam is not of constant intensity during a treatment, as it was in older technologies, but varies with the changing irradiation geometry as the treatment is delivered. In conjunction with a more established technology called the multileaf collimator, IMRT is often able to produce better dose distributions within the patient’s body than pre-IMRT technologies. Not all cancer treatments are improved in quality by IMRT, but most are.
Image Guided Radiotherapy (IGRT)
An original deficiency of IMRT was that it relied on the tumor remaining in the same anatomical position during a treatment, as well as maintaining the same location in the body from one treatment fraction to the next. In reality, however, many tumors tend to move around. For example, lung tumors can shift positions with every breath, while tumors of the colon can change position from day-to-day due to migrations of bowel gas. In both cases, this can make “aiming” the radiation treatment very uncertain.
A solution to this problem was developed about 15 years ago and is called Image Guided Radiotherapy, or IGRT. CT scans of the patient are taken by specialized CT scanners that produce movie-like scans recording, for example, the respiratory movements of lung tumors. Devices placed on the patient’s chest are tracked by video cameras to monitor respiratory movement while the CT scan is in progress. The same devices are used during actual treatment, and a correlation is established between the instantaneous position of the tumor and the phase of the respiration cycle, which enables suitably equipped therapy machines to irradiate the tumor only when it is in a predetermined position.
On Board Imaging
A related development is CT-like imaging devices that are integrated into the treatment machines themselves. Known as on board imaging, or OBI, these devices use the normal rotation around the patient of modern linear accelerators to collect x-ray transmission measurements through the patient that are immediately reconstructed into CT-like images. OBI can detect, for example, shifts in anatomical tumor position from day-to-day as well as during a treatment. With this additional information, tumor location corrections can be applied from one treatment to the next as well as more recently, using specially adapted treatment machines, during the course of each individual treatment. Not all radiotherapy centers yet have OBI capabilities, but many larger facilities and academic centers do.
A completely different approach to radiotherapy, available for many years but in very few treatment centers, is called Proton Therapy. Protons are nuclear particles that can be accelerated to very high energies by machines called cyclotrons or synchrotrons. When a beam of such high-energy protons enters the patient’s body, it delivers its energy quite uniformly within a specifically chosen range of depths that is made to vary as the proton beam rotates around the patient. Unlike beams of x-rays that pass all the way through the body and deliver unwanted dose to healthy tissues beyond the depth of the tumor, proton beams stop abruptly at specified depths; this means that no dose at all is delivered beyond the deepest location of the tumor and, therefore, downstream healthy tissues and organs are protected. Consequently, for many applications in radiotherapy, proton beams are able to deliver radiation with more precise conformation to tumor volume and with less injury to healthy tissues and organs than x-rays. For this reason, protons are an excellent choice for a specific class of tumors, where unwanted radiation outside the tumor volume can lead to especially serious side effects.
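The “stops abruptly at a specified depth” behavior can be illustrated with the empirical Bragg-Kleeman rule for proton range in water, R ≈ 0.0022 × E^1.77 (R in cm, E in MeV). This is an approximation, good to a few percent over therapeutic energies, not a full transport calculation:

```python
# Empirical Bragg-Kleeman rule: approximate range of a proton beam in
# water as a function of its energy. Beyond this depth the beam stops,
# so tissue downstream of the tumor receives essentially no dose.
def proton_range_cm(energy_MeV):
    return 0.0022 * energy_MeV ** 1.77

for e in (100, 150, 200, 250):
    print(f"{e} MeV -> ~{proton_range_cm(e):.1f} cm in water")
```

By choosing the beam energy (and sweeping it during treatment), the therapist places the end-of-range, and hence the dose, at the depth of the tumor.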
Since proton therapy involves very expensive facilities (costing typically $100,000,000 to $150,000,000), insurance companies only cover this treatment for a limited number of cancers where it has been clearly demonstrated that protons are more effective—or at least produce fewer side effects than x-rays. However, with time the cost of proton facilities will no doubt decrease, which should expand the range of applications for which insurance coverage is available. Examples of cancers where protons have been found to be superior to x-rays are tumors of the spinal cord, childhood brain tumors, and prostate tumors.
Today, there are still not many proton treatment facilities in existence: approximately 14 centers in the U.S., with 12 more in the planning or construction stage, and about 49 centers elsewhere, mainly in Europe and Japan.
Using principles similar to proton therapy, one of the latest approaches to particle therapy uses carbon ions in place of protons. Carbon ions, approximately 12 times heavier than protons, require much higher energies and even more costly accelerators than protons, but in return produce even more conformal dose distributions and possess radiobiological characteristics that make them especially suited to treating certain types of highly resistant cancers. Typically, the accuracy of heavy-ion beam delivery is better than 1 mm. Carbon ions also produce biological damage that is virtually unrepairable, so tumors and normal tissues recover much less from this kind of radiation than from x-rays or protons. Because the criteria for beam delivery are even more demanding with heavy ions than with protons (due to the potentially devastating harm heavy ions can cause if misdirected), it is considered almost mandatory to mount the beam delivery systems on isocentrically mounted rotating gantries, so that the patient does not move after setup or during changes in beam direction, which would greatly reduce the accuracy of beam delivery. However, because the accelerating energies for heavy-ion beams are very much higher than for protons, the physical size of the gantries needed is enormous. At the present time there is only one clinically operational heavy-ion 360-degree rotating gantry facility in the world, in Heidelberg, Germany. The gantry in this facility weighs 670 tons and the accuracy of beam delivery is stated to be sub-millimeter. The picture at the top of this article shows the massive dimensions of the Heidelberg heavy-ion gantry.
An obvious question arises as to whether even heavier ions could be used which would provide still better radiobiological characteristics and further improved beam delivery accuracy. However, for ions heavier than carbon, these two characteristics in fact rapidly deteriorate, because heavier ions break up into lighter ones as they penetrate through tissue, thereby reducing beam delivery accuracy and the quality of the beam’s radiobiological characteristics.
So far, approximately 13,000 patients have been treated with carbon-ion beams in Japan, Germany, Italy, and China. But the U.S., despite having pioneered heavy-ion treatment at the Lawrence Berkeley National Laboratory in clinical trials that ran from 1975 through 1992, still lacks a clinical treatment capability.
The enormously high cost of heavy-ion therapy, many times that of proton therapy, casts some doubt on whether heavy-ion facilities could ever survive on patient revenue alone, without the large government subsidies they presently enjoy.
However, from a financial viewpoint, treatment with heavy ions has one mitigating advantage over treatments with x-rays or even protons. Because of the propensity of heavy-ion beams to cause essentially unrepairable damage, there is less benefit to fractionation than with other forms of radiation. In fact, for some cancers, heavy-ion treatments are delivered in 4-6 fractions rather than the 30-40 fractions that would be used for treating the same cancers with x-rays or protons. This proportionally reduces the cost of the heavy-ion treatments, by a factor of approximately 4.
Staff at the National Cancer Institute have estimated that a total of 60 heavy-ion treatment facilities would meet the clinical need for heavy-ion treatment worldwide for the types of cancers for which heavy-ions have been shown to be beneficial.
Fast Neutron Therapy
Fast neutron therapy uses high-energy neutrons in place of x-rays for treating certain cancers in which poor oxygenation of the tumor makes local disease control with x-rays difficult.
Therapeutic fast neutrons have energies of 50-70 MeV. At these energies, fast neutron beams have penetration in tissue similar to that of intermediate-energy (6-10 MV) x-ray beams. Fast neutrons interact with tissue primarily by causing recoil of the tissue elements hydrogen, carbon, oxygen, and nitrogen. The radiobiological characteristics of these recoiling ions (with the possible exception of hydrogen) make their cell-killing effect far less dependent on the presence of oxygen than that of x-rays, which is what suits them to treating poorly oxygenated, x-ray-resistant cancers.
The parameter that defines how much less dose needs to be delivered with fast neutrons vs. x-rays to produce the same effect on tumor is called the relative biological effectiveness, or RBE. But, as fast neutron dose decreases, RBE increases. This leads to potential complications at normal-tissue/tumor boundaries, where a sharp dose gradient exists and one would expect normal tissues to receive significantly less dose than tumor. However, since the physical doses to normal tissues around this boundary are significantly lower, the RBE is significantly higher; which led to unexpected normal tissue complications in the early fast neutron trials in the U.S. Similarly, as the number of fast neutron fractions for a treatment increases, dose-per-fraction decreases and, once again, fast neutron RBE increases.
This latter phenomenon led to the earliest fast neutron trials in the U.S., at the Lawrence Berkeley Laboratory in the 1940s, being an unmitigated disaster: fast neutron RBE had been determined in in vitro cell cultures, and no allowance for the increased RBE at low doses-per-fraction was made when fractionated treatments were delivered to the first fast neutron patients.
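The inverse relationship between dose and RBE described above can be illustrated with the standard linear-quadratic cell-survival model, E = αD + βD². The parameter values below are purely illustrative (high-LET radiation is represented by a much larger α than x-rays); they are not clinical data:

```python
# Illustration of RBE rising as dose per fraction falls, using the
# linear-quadratic model: effect E = alpha*D + beta*D^2.
# Parameter values are illustrative assumptions, not measured data.
import math

ax, bx = 0.2, 0.02    # x-rays: alpha (per Gy), beta (per Gy^2) -- assumed
an, bn = 1.0, 0.02    # fast neutrons: much larger alpha -- assumed

def rbe(neutron_dose):
    effect = an * neutron_dose + bn * neutron_dose ** 2
    # solve bx*D^2 + ax*D - effect = 0 for the iso-effective x-ray dose
    d_x = (-ax + math.sqrt(ax ** 2 + 4 * bx * effect)) / (2 * bx)
    return d_x / neutron_dose

for d in (1.0, 2.0, 4.0):
    print(f"{d} Gy neutron dose -> RBE ~ {rbe(d):.2f}")
```

With these (assumed) parameters, halving the neutron dose visibly raises the RBE, which is exactly the trap the early fractionated trials fell into.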
Fast neutron therapy has been administered to about 30,000 patients in Germany, Russia, South Africa, and the United States. In the U.S., treatment centers exist in Seattle, Washington; Detroit, Michigan; and Batavia, Illinois, although not all are currently treating patients.
The efficacy of fast neutron treatments has been convincingly demonstrated for several forms of cancer; in particular, certain head-and-neck cancers, which are often poorly oxygenated and therefore resistant to conventional x-rays, have been excellent candidates for fast neutron therapy.
So why has fast neutron therapy not caught on? The reasons for this are related to technical and financial issues. From a financial viewpoint, as with current x-ray and proton therapy treatments, the fast neutron treatment heads—at least in the U.S. facilities--are mounted on isocentrically rotating gantries. Since the production of fast neutrons for therapy requires the acceleration of either protons or deuterons to high energies, the cost of isocentrically mounted fast neutron treatment heads is equal to the cost of isocentrically mounted proton treatment heads plus the additional hardware necessary to create and shape the fast neutron beam. The resulting cost of an isocentrically mounted fast neutron treatment facility is significantly higher than the cost of a proton treatment facility.
From a technical viewpoint, there have been many enhancements to x-ray therapy over the past 20 years—as described in the earlier sections of this article; for example, IMRT, IGRT, on-board imaging, sophisticated treatment planning software, etc. Because fast neutron treatment facilities are still experimental and highly subsidized, many of the above developments in sophistication of treatment have not percolated down to the fast neutron facilities. Therefore, fast neutron facilities, other than enjoying rotating gantries, have not been able to take advantage of many of the technological treatment enhancements that have helped x-ray therapy.
A new type of x-ray radiotherapy was developed about 15 years ago called Tomotherapy. Tomotherapy machines look a bit like CT or MRI scanners. The patient enters a tunnel on a moveable couch and a built-in CT scanner ensures accurate positioning. The patient then moves slowly through the tunnel while many pencil-sized beams of x-rays are rapidly turned on and off in a carefully programmed sequence while the x-ray source rotates around the patient. For many types of cancers, Tomotherapy can provide a higher quality treatment than IMRT, and at times can almost match the precision of proton therapy.
An interesting innovation in radiotherapy has been the Cyberknife, a robotic radiotherapy system programmed to aim a moving pencil-like x-ray beam at the patient’s body from various directions (see 1st picture above). Like proton therapy, Cyberknife is most useful for treating small primary or metastatic tumors located within particularly sensitive normal tissues. The Cyberknife is usually integrated with an IGRT enabled CT scanner, so its beam can also be programmed to “follow” the movement of a tumor as a treatment progresses.
For small brain tumors, brain metastases (tumors from elsewhere in the body that have spread to the brain), and arteriovenous malformations (AVMs), the Gammaknife has proven itself a very effective form of radiotherapy. First introduced decades ago, the Gammaknife uses 201 individual sources of the radioisotope cobalt-60 that send individual narrow pencil beams of gamma rays (equivalent to x-rays for this discussion) to cross over at a “focal point”. The patient’s head is then oriented and automatically moved so that this focal point is swept throughout the volume of the tumor to be treated, producing a very conformal dose distribution around one or more tumors. The Gammaknife procedure requires that the patient have a stereotactic frame placed on his/her head, which involves four screws that penetrate through the scalp and partially into the skull, so that the MRI images initially taken for treatment planning can be spatially correlated with the Gammaknife’s internal coordinate system. For this reason, the Gammaknife is designed to deliver single treatments rather than the 20-40 treatment fractions comprising most radiotherapy regimens. In addition to being an effective treatment for intracranial tumors and metastases, the Gammaknife is frequently used to treat trigeminal neuralgia, a nerve disorder that causes severe and disabling facial pain. By strategically placing a high-dose lesion on the trigeminal nerve where it exits the brainstem, the pain signals can be blocked. The Gammaknife provides the high level of precision necessary to treat trigeminal nerve disorders effectively, and this procedure is increasingly being used as the first treatment for trigeminal neuralgia when medications fail to provide adequate pain relief.
Neutron Capture Therapy
The efficacy of radiotherapy in the future will markedly improve when techniques are developed to treat metastatic disease as effectively as local disease. One such emerging technology, still in its early clinical trials, is called neutron capture therapy (NCT), once dubbed “cellular surgery” because of its potential ability to irradiate individual tumor cells rather than visible macroscopic tumor volumes, as all other radiotherapy technologies discussed here are limited to doing. NCT is a very complex form of radiotherapy, combining the principles of chemotherapy (in that individual tumor cells are initially targeted by a chemical compound) and heavy-particle radiotherapy (with the radiobiological advantages brought by that form of therapy).
The patient is initially infused with a chemical compound that has been designed to selectively concentrate in tumor cells—both those within the primary tumor volume as well as the individual islands of cells near the peripheries of primary tumor volumes. Each molecule of the compound is labeled with a number of atoms of boron-10 (typically a “cage” of 12 boron-10 atoms). Boron-10 has an unusually high proclivity for absorbing low energy neutrons, immediately after which it splits into two charged particles: a lithium-7 ion and an alpha particle. These particles have a range in tissue of 4-10 µm, roughly equivalent to a typical tumor cell diameter. After the chemical compound has been given sufficient time to reach an optimal concentration ratio between tumor and normal cells, the local region of the body is irradiated with a specially designed “epithermal” neutron beam from a specially modified research nuclear reactor (in the U.S., research reactors at the Brookhaven National Laboratory and at the Massachusetts Institute of Technology were converted to deliver NCT therapy to patients, but are not active any more).
The epithermal neutrons entering the tissue (after losing most of their energy through collisions) are captured by the boron-10 nuclei, resulting in a cellular level charged particle distribution that mimics the distribution of the boron labeled compound. Consequently, much higher radiation doses can be delivered to tumor cells, both within the primary tumor volume as well as isolated tumor cells near the tumor’s periphery, than to neighboring normal cells.
There have been a number of clinical trials of NCT at some of the 10 or so sites in the world where such technologies exist. NCT has been shown to sometimes provide better therapy for certain tumors than any other radiotherapy available; for example, certain very difficult-to-treat head and neck cancers respond very well to NCT, probably because the high-LET radiation produced by NCT remains effective even in oxygen-poor (hypoxic) tumor regions that resist conventional radiotherapy. However, the complexity and very high cost of delivering NCT severely limit its continuing development and its future.
Important advances in radiotherapy and cancer imaging over the past 20 years or so have consistently increased the “therapeutic ratio”, i.e., the ratio of dose to tumor divided by dose to normal tissue; this, in turn, enables more dose to be delivered to tumor for the same level of complications in normal tissues and results in better local tumor control. This, combined with a deeper understanding of the biology of tumors, has further resulted in very significant improvements in local tumor control without concomitant increases in normal tissue complications.
We have discussed a number of advances in radiotherapy technology, including intensity modulated radiotherapy, image guided radiotherapy, on-board fluoroscopic imaging, proton therapy, heavy-ion therapy, fast neutron therapy, tomotherapy, cyberknife, gammaknife, and neutron capture therapy.
The next quantum jump in the efficacy of radiotherapy will most likely be the ability to treat metastatic disease as effectively as we can now treat local disease. Neutron capture therapy promises to achieve this to a certain extent, but its cost will greatly inhibit its further development.
With many of the technologies discussed, the extremely high cost poses a dilemma for society: how much public funding is it reasonable to expend on treating a subset of cancers that respond particularly well to certain very costly new therapies? It is time, indeed, to bring in permanent federal and private sector support for ultra-expensive cancer treatments.
An asteroid designated 1462-Zamenhof is named after Ludwig Lazarus Zamenhof, the inventor of the artificial language Esperanto ("hope"), who happens to be the great-uncle of the author. This picture has absolutely nothing to do with dark matter, but is just an opening for the author to brag a little about his ancestor!
Astrophysicists have hypothesized the presence of so-called dark matter in the universe because of discrepancies between the calculated mass of large astronomical objects (galaxies, stars, etc.), as determined from their gravitational effects on the rotation of neighboring astronomical objects, vs. their mass calculated from the “luminous matter” they contain (primarily gas and dust).
One somewhat exotic theory suggests the existence of a hidden valley, a parallel world made of dark matter, having very little in common with matter we know, that can only interact with our visible universe through gravity! Many experiments to detect proposed dark matter particles through non-gravitational means are under way.
The Discovery of Dark Matter
The first scientist to postulate the presence of dark matter based upon reliable scientific evidence was Vera Rubin, in the 1960s and 1970s. Rubin calculated the mass of galaxies in two ways: from their observed rotations, and from the luminous matter they contained. There were significant inconsistencies between the masses calculated by these two approaches, which led Rubin to hypothesize a yet-unidentified additional mass in the universe that would bring the results of the two calculation approaches into agreement. This "excess" mass was termed dark matter and is now believed to constitute approximately 85% of the mass of the universe.
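Rubin's reasoning can be sketched with simple Newtonian dynamics: a star on a circular orbit of speed v at radius r implies an enclosed mass M = v²r/G. The snippet below is an illustration with made-up (but representative) numbers, not Rubin's actual data; a rotation curve that stays flat out to large radii implies enclosed mass growing linearly with radius, far exceeding what the visible disk can supply.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # one solar mass, kg
KPC = 3.086e19         # one kiloparsec, m

def enclosed_mass(v_km_s, r_kpc):
    """Mass (in solar masses) enclosed within radius r, inferred from a
    circular orbital speed v via Newtonian dynamics: M = v^2 * r / G."""
    v = v_km_s * 1e3
    r = r_kpc * KPC
    return v**2 * r / G / M_SUN

# Illustrative flat rotation curve: ~220 km/s from 10 kpc out to 50 kpc.
m_inner = enclosed_mass(220, 10)
m_outer = enclosed_mass(220, 50)
print(f"{m_inner:.2e} solar masses within 10 kpc")
print(f"{m_outer:.2e} solar masses within 50 kpc")   # 5x more mass at 5x radius
```

Because the implied mass keeps growing with radius while the starlight does not, the discrepancy points to unseen mass, which is exactly the dark matter inference described above.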
Uncertainties about Dark Matter Predictions
As with any theory, there are some observations that do not agree with the predictions of dark matter as described above. For example, astrophysicists examining the volume of space around our own sun have been unable to detect the excess gravitational effect that the presence of dark matter should produce. Although the existence of dark matter is now generally accepted by the mainstream scientific community, some alternative theories of gravity have been proposed which try to account for the anomalous observations without requiring additional matter. However, these theories cannot account for the gravitational observations in galaxy clusters.
Related Concepts – Dark Energy
There is also a still-unverified form of energy, present throughout the universe, called dark energy. Dark energy is believed to be the unseen influence that is causing the expansion of the universe to accelerate with time. Without the postulation of dark energy, astrophysical calculations suggest that the rate at which the universe expands should be decreasing with time; astrophysical observations, however, all indicate an accelerating expansion, so dark energy is the additional energy that must be invoked in the model of the universe to reconcile calculation with observation. Adding the mass-equivalent of the predicted dark energy to the calculated mass of dark matter, it is estimated that over 95% of the mass of the universe is a combination of dark matter and the mass-equivalent of dark energy.
The Composition of Dark Matter
Dark matter is widely hypothesized to be composed of weakly interacting massive particles (WIMPs); one leading candidate is the neutralino, a particle that would interact only through gravity and the weak nuclear force. Alternative explanations have been proposed, but there is not yet sufficient experimental evidence to determine whether any of them is correct.
Dark matter is a form of invisible mass that pervades the universe. It may be composed of neutralinos, as-yet-undetected elementary particles predicted by supersymmetry theory. Approximately 85% of the mass of the universe is believed to be dark matter, an inference drawn from the discrepancy between gravitational observations and the gravitational effects calculated on the basis of visible matter alone. There is also a predicted but still undetected form of energy present throughout the universe called dark energy, postulated to resolve the disagreement between astrophysical calculations, which suggest that the expansion of the universe should be slowing with time, and physical observations, which show the expansion to be accelerating. Adding the mass-equivalent of this predicted dark energy to the mass of dark matter in the universe, it is estimated that over 95% of the mass-equivalent of the universe is due to the sum of dark matter and dark energy (see the second picture above).
Dark matter plays a central role in modeling of cosmic structure formation and galaxy formation and evolution and has measurable effects on the anisotropies observed in our cosmic microwave background. All these lines of evidence suggest that galaxies, clusters of galaxies, and the universe as a whole contain far more matter than is inferred through “visible” means. Further knowledge of dark matter and dark energy would add greatly to the understanding of the creation of elementary particles in the universe within a second or less following the big bang.
Hmm, seems like a giant astrophysics error that physicists are now trying to reconcile. Good thing the humanities are not that uncertain ...
Diagrammatic depiction of the principles behind carbon dating. Carbon-14 (radioactive, with half-life of 5,730 years) is created in the atmosphere by nuclear interaction of cosmic rays and atmospheric nitrogen. Plants absorb the carbon-14 via carbon dioxide and people and animals absorb carbon-14 by consuming the plants. An equilibrium between carbon-14 in the atmosphere and in living organisms makes the carbon-14 / carbon-12 (non-radioactive) ratio remain constant. But when a living organism dies, no more carbon-14 is absorbed, so as the carbon-14 decays, the carbon-14 / carbon-12 ratio decreases with time. Measuring this ratio precisely provides the age of the organism at death
The general technique of radiometric dating was first published in 1907 by Bertram Boltwood, and is now the principal source of information about the absolute age of rocks and other geological entities. It can be used to date a wide range of natural and man-made materials including the age of the Earth itself.
Carbon Dating of Organic Materials
A specific subgroup of radiometric dating is called Carbon Dating, and is generally used to date plant and animal remains. Carbon dating was invented by Willard Libby in the late 1940s, and soon after became a standard tool for archaeologists and anthropologists.
The most prevalent radionuclide found in the tissues of living plants and animals is carbon-14 (*C-14) (the asterisk is there as a reminder that this carbon isotope is radioactive). *C-14 is produced by the interaction of neutrons (generated by cosmic rays) with nitrogen in our atmosphere. The *C-14 then combines with oxygen in the atmosphere to produce *C-14 carbon dioxide, which is absorbed by living plants through photosynthesis. From there, the *C-14 enters the food chain and the bodies of living animals. The *C-14 in living plants and animals gradually decreases in concentration with time due to radioactive decay, but it is continually replenished by the intake of additional *C-14 through photosynthesis and the food chain. Eventually, a constant equilibrium concentration of *C-14 develops in living plants and animals, in parallel with a corresponding equilibrium concentration of the non-radioactive isotope of carbon, C-12. Both C-12 and *C-14 reach equilibrium through the balance of their absorption (as carbon dioxide or food) and their subsequent loss (in animals, for example, carbon is excreted as carbon dioxide in breath). The difference is that C-12 does not radioactively decay while *C-14 does, so *C-14 equilibrates at a lower concentration than C-12. The net result of these processes is that the ratio [*C-14 / C-12] in living plants and animals also equilibrates to a constant value.
When the plant or animal dies, the absorption of C-12 and *C-14 abruptly ceases. Consequently, while the C-12 that was present at the time of death remains at a fixed concentration, the *C-14 that was present at the time of death gradually decreases in concentration through radioactive decay with a half-life of 5,730 years. Therefore, the ratio [*C-14 / C-12] also decreases with time after the death of the plant or animal. After about 5 half-lives of *C-14 (i.e., about 28,650 years), this ratio has fallen to roughly 3% of its initial value (5 half-lives is a rule-of-thumb for estimating the time required for a radionuclide to decay to a few percent of its initial activity). By accurately measuring the ratio [*C-14 / C-12] using sensitive nuclear analytic instruments such as mass spectrometers, the time since death of a living organism can be determined quite accurately over a time scale of approximately 60,000 years.
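The arithmetic behind this is the standard exponential-decay law. The short sketch below turns a measured [*C-14 / C-12] ratio (expressed as a fraction of the living-tissue equilibrium value) into an age; the 25% example is hypothetical, chosen because it corresponds to exactly two half-lives.

```python
import math

HALF_LIFE_C14 = 5730.0                      # years
DECAY_CONST = math.log(2) / HALF_LIFE_C14   # lambda, per year

def age_from_ratio(fraction_of_living_ratio):
    """Years since death, given the measured [C-14 / C-12] ratio as a
    fraction of its equilibrium value in living tissue:
    N(t) = N0 * exp(-lambda * t)  =>  t = -ln(fraction) / lambda."""
    return -math.log(fraction_of_living_ratio) / DECAY_CONST

# After 5 half-lives the remaining fraction is (1/2)^5, about 3%:
print(f"fraction after 5 half-lives: {0.5**5:.3f}")
# A sample retaining 25% of the living-tissue ratio died 2 half-lives ago:
print(f"age at 25% ratio: {age_from_ratio(0.25):.0f} years")   # 11460
```

In practice the measured ratio carries uncertainty, and calibration curves correct for historical variations in atmospheric *C-14, but the core calculation is just this logarithm.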
The Influence of Nuclear Weapons Testing on Carbon Dating
Carbon dating was more accurate prior to the 1950s than it is today. After the 1950s, above-ground testing of nuclear weapons by China, the U.S., and the Soviet Union altered the previously constant [*C-14 / C-12] ratio in our atmosphere, undermining a key assumption behind carbon dating determinations. Although approximate corrections can be made, the ultimate accuracy of carbon dating became significantly lower after the 1950s.
Radiometric Dating of Inanimate Materials Using Other Radionuclides
Other forms of radiometric dating, relying on radionuclides other than *C-14, provide the ability to date materials over much longer time scales than is possible with carbon dating; they are used to date the inorganic mineral component of animal bones as well as minerals and rocks.
The principle of these other radiometric dating techniques is to measure, in the material of interest, the accumulation of a stable decay product relative to the amount remaining of its parent radionuclide, which has a much longer half-life than *C-14. Potassium-argon and uranium-lead dating are the most common examples of this method of radiometric dating.
Potassium–argon (abbreviated K–Ar) dating is used most frequently in geochronology and archaeology. This method is based on a very precise measurement of the conversion by radioactive decay of a radioactive isotope of potassium, *K-40, into the stable gas argon-40. Potassium contains a trace quantity of the naturally occurring radionuclide *K-40 and is a common element found in many materials such as micas, clays, and other minerals. Argon-40, the gaseous stable decay product of *K-40, is able to escape these materials while they are in a molten or uncrystallized state, but is trapped after they have solidified or recrystallized, and therefore starts to accumulate. The time since a rock sample solidified or recrystallized is obtained by accurately measuring the ratio of the Ar-40 accumulated to the amount of *K-40 remaining in the rock. The extremely long half-life of *K-40 (1.26 billion years) allows this method of radiometric dating to be used to calculate the absolute age of rock samples from a few thousand years to a few billion years old, as well as the age of the earth itself.
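The age calculation can be sketched as follows. This is a simplified illustration that assumes every *K-40 decay yields Ar-40; in reality only about 11% of decays do (the rest produce calcium-40), and geochronologists correct for that branching. The sample values are hypothetical.

```python
import math

HALF_LIFE_K40 = 1.248e9                    # years
LAMBDA = math.log(2) / HALF_LIFE_K40       # decay constant, per year

def kar_age(ar40_to_k40_ratio):
    """Simplified K-Ar age, assuming all K-40 decays produce Ar-40.
    Accumulated Ar = K0 - K = K * (e^(lambda*t) - 1), so
    t = ln(1 + Ar/K) / lambda."""
    return math.log(1 + ar40_to_k40_ratio) / LAMBDA

# Hypothetical sample with equal amounts of accumulated Ar-40 and
# remaining K-40 (ratio = 1): exactly one half-life has elapsed.
print(f"{kar_age(1.0):.3e} years")   # ~1.25 billion years
```

The logarithmic form is why the method spans such an enormous age range: small Ar/K ratios date young rocks, large ratios date rocks billions of years old.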
Other radionuclide pairs used in this form of radiometric dating are uranium-lead, rubidium-strontium, and uranium-thorium. The choice of radionuclides depends on the chemical form of the materials to be dated, on the age-range anticipated, and on the accuracy required.
Radiometric dating has contributed immeasurably to our understanding of geological history and has contributed greatly to anthropological research. However, the technique requires exquisitely sensitive and accurate nuclear measurement equipment which usually only dedicated laboratories possess.
Carbon dating is often used when the age of organic matter needs to be measured over a timescale of about 60,000 years. The accuracy of carbon dating was compromised after the 1950s when the C-14/C-12 ratio in our atmosphere was changed due to nuclear weapon testing by the U.S., Russia, and China.
For measuring longer intervals of time in inorganic materials (rocks, minerals, or the earth itself), the accumulation of a stable decay product is measured relative to the amount remaining of a parent radionuclide with a much longer half-life. Potassium-argon and uranium-lead dating are the most common examples of this method of dating; the former is used most frequently in geochronology and archaeology.
In 2009, the journal Archives of Internal Medicine carried a report that caused quite a stir in the media and among the medical community. The main conclusion of the paper was that the 57 million CT (computed tomography) scans done in the U.S. in 2007 were expected to produce, on a statistically predictive basis, 14,500 future cancer deaths in the patient population scanned. How reliable are the data presented in this report, and how logical are its conclusions?
Radiation Epidemiology of CT Scanning
The statement that 14,500 future cancer deaths may result from 57 million CT scans performed in one year in the U.S. at first seems terribly frightening, unless you recognize that diagnostic uses of radiation aren’t the only cause of cancer. There is a “baseline” cancer rate, due to various other factors, such as environmental carcinogens, man-made carcinogens (food, drugs, etc.), and natural background radiation.
This average baseline fatal cancer rate in the U.S. and Europe is approximately 20%; i.e., 20% of the population will eventually die of cancer. The fatal cancers supposedly caused by CT scans, using the Archives of Internal Medicine data, constitute an individual risk of about 0.025%; i.e., if exposed to such a CT scan, one's average likelihood of contracting fatal cancer rises from 20% to 20.025%. Not quite so frightening any more.
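The arithmetic behind that comparison is worth making explicit, using the two numbers from the report plus the approximate baseline rate quoted above:

```python
scans = 57_000_000                  # CT scans in the U.S. in 2007 (per the report)
projected_deaths = 14_500           # projected future cancer deaths (per the report)
baseline_rate = 0.20                # ~20% baseline lifetime fatal-cancer risk

added_risk = projected_deaths / scans
print(f"added individual risk per scan: {added_risk:.4%}")         # ~0.0254%
print(f"lifetime risk: {baseline_rate:.1%} -> {baseline_rate + added_risk:.3%}")
```

Dividing the projected deaths across all scanned patients is what turns a scary-sounding aggregate (14,500 deaths) into a per-person increment of roughly one part in four thousand.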
Another way of looking at the predicted risk of death from cancer due to CT scans is to compare it with actuarial risks of death from other common human activities. For example: the risk of death from one typical CT scan (if we accept, for now, the Archives of Internal Medicine paper's conclusions and its estimate of 1 rem (10 mSv) effective dose per CT scan) is actuarially equivalent to the risk of death from smoking 220 cigarettes, drinking 360 bottles of wine, being exposed to air pollution by living in New York or Boston for 4 years, living for 14 years in Denver, or traveling 40,000 miles by car. From that perspective, the risk of fatal cancer contracted from a typical CT scan, which is usually associated with a benefit in medical outcome, again doesn't appear quite as threatening.
So, why the overplayed response by the media and efforts at damage control by the medical and medical physics communities? An important point of concern to radiologists and physicists—both experts on radiation effects associated with medical procedures-–is the statistical interpretation of the cancer deaths ostensibly caused by CT.
Most of the data linking radiation dose to fatal cancer come from observations of the effects of the Hiroshima and Nagasaki nuclear detonations in 1945, with additional data coming from studies of cancer in nuclear power plant workers in the U.K., patients exposed to x-ray fluoroscopy in U.S. and Canadian tuberculosis sanatoria between 1925 and 1954, and x-ray treatments of patients with ankylosing spondylitis (an inflammatory disease that causes fusion of the spine) in the United Kingdom. Typical radiation doses produced by CT scanning, however, are roughly 20-500 times lower than the lowest of the dose levels in these historical data. Extrapolating from historical data produced at substantially higher dose levels down to CT's much lower dose levels requires assuming that the dose-effect relationship is linear over this very wide range of doses, so only shaky statistical estimates can be inferred for the cancer deaths associated with typical CT dose levels.
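The back-extrapolation in question is the linear no-threshold (LNT) assumption: risk is taken to be strictly proportional to dose, all the way down to doses far below where any effect has been measured. The sketch below uses a risk coefficient of roughly 5% per sievert, an often-quoted population-average value (an assumption for illustration, not a figure from the paper), applied to a typical 10 mSv CT dose:

```python
# Linear no-threshold (LNT) extrapolation sketch.
RISK_PER_SV = 0.05        # ~5% fatal-cancer risk per sievert (assumed
                          # illustrative coefficient, not from the paper)

def lnt_risk(dose_sv):
    """Extrapolated fatal-cancer risk under the LNT assumption:
    risk is strictly proportional to dose."""
    return RISK_PER_SV * dose_sv

ct_dose_sv = 0.010        # ~10 mSv effective dose for a typical CT scan
print(f"extrapolated risk per scan: {lnt_risk(ct_dose_sv):.3%}")
```

The calculation itself is trivial; the controversy is entirely in whether the straight line drawn through high-dose data remains valid at doses 20-500 times lower, which is precisely the point the BEIR-VII Committee flags below.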
The BEIR-VII Committee, the dominant U.S. scientific body dealing with human effects of radiation, recently stated that “at typical CT dose levels, statistical limitations make it difficult to evaluate cancer risk in humans”. This is committee-speak for “any estimates of cancer deaths caused by CT must be considered very shaky!”
To be fair, the Archives of Internal Medicine paper did include some discussion of the inherent uncertainties in the data presented, but it failed to emphasize strongly enough the statistically shaky nature of those back-extrapolated data: exactly the point made by the BEIR-VII Committee. The media, in reporting the paper's conclusions, omitted any mention of the BEIR-VII report's low confidence in the putative cancer deaths caused at the relatively low dose levels produced by CT. Finally, the paper's authors underplayed, and the media reports failed to emphasize, the far larger number of patients who medically benefit from CT scans compared with those statistically expected to die from cancer ostensibly caused by them; that is, the very high benefit-to-risk ratio of CT was largely ignored in favor of the admittedly newsworthy projected cancer deaths.
So what is the takeaway message? Do not be frightened by the reported ostensible risks of CT, but make sure that you have CT scans done in a facility certified by the American College of Radiology, and one that follows its official and rigorous recommendations for minimizing CT dose. In addition, make sure your doctor explains to you clearly the potential benefit of a CT scan, and that a medically useful outcome will result.
According to a report in the Archives of Internal Medicine, the 57 million CT scans done in the U.S. in 2007 are expected to produce 14,500 future cancer deaths in the patient population scanned. The epidemiology of radiation effects is complicated and subject to frequent misrepresentation. Although the projected number of deaths from these CT scans seems very frightening, it should be put into proper perspective. Given that the baseline fatal cancer rate in the U.S. and Europe is roughly 20%, the additional fatal cancer risk due to CT scans (if the report's conclusions are to be believed) raises this number from 20% to 20.025%. Under the conventional epidemiological risk model, the risk from one typical CT scan is actuarially equivalent to the risk of death from the air pollution incurred by living in Boston or New York for 4 years. Finally, epidemiologists agree that at the relatively low doses typical of CT, the corresponding risk estimates are fraught with extremely large errors. In fact, the BEIR-VII Committee, the dominant U.S. scientific body dealing with human effects of radiation, recently stated that "at typical CT dose levels, statistical limitations make it difficult to evaluate cancer risk in humans".
In conclusion, the report of 14,500 projected cancer deaths due to CT scans done in the U.S. in one year is based on extremely unreliable data and should be interpreted accordingly. Even if the reported cancer death rate from CT were true, for a single individual it represents an increase in lifetime risk from 20% to 20.025%, an increase that would most likely be buried in statistical "noise" and have very little meaning.
So do not be concerned by the reported statistical fatal cancer risks of CT, but make sure that you have CT scans done in a facility certified by the American College of Radiology, and one that procedurally follows the ACR’s official and rigorous recommendations for minimizing CT dose to patients. In addition, make sure your doctor clearly explains to you the potential benefit of the CT scan you are about to have, and what medically useful outcome might result.
The paper referred to in this article can be found in: Arch Intern Med. 2009 Dec 14;169(22):2071-7
About the Author