Robert G ZAMENHOF, PhD
1. THE TRUTH BEHIND RED MERCURY
2. GLOBAL POSITIONING TECHNOLOGY: HISTORY & SCIENTIFIC
3. THE JOINT COMPREHENSIVE PLAN OF ACTION NUCLEAR
AGREEMENT WITH IRAN: WHAT IRAN AND THE U.S. HAVE AGREED
UPON & THE SCIENCE BEHIND IT (as of December 2015)
4. NATURAL RADIATION: WHERE DOES IT COME FROM & WHAT DOES
IT DO TO US?
5. PRINCIPLES OF OPERATION & POTENTIAL HEALTH RISKS OF
AIRPORT X-RAY SCANNERS
6. HOW DANGEROUS IS RADIATION TO HUMANS—OR IS IT?
7. COLD FUSION: A PROMISE OF GLOBAL SALVATION OR A HUGE
8. DEPLETED URANIUM: HEALTH RISKS IN MILITARY AND CIVILIAN
9. MODERN TECHNOLOGICAL DEVELOPMENTS IN RADIATION
10. DARK MATTER & DARK ENERGY: SERIOUS OR STAR TREK?
11. RADIOMETRIC DATING
12. DO CT SCANS REALLY KILL PATIENTS?
There are totally unsubstantiated reports that shortly before the Soviet Union's demise, the Soviet government decided to rid itself of all its supplies (if such supplies ever existed) of Red Mercury, to protect the nation from terrorist elements that might use Red Mercury to manufacture ultra-compact ("pocket") fusion weapons. The Red Mercury was reputedly disposed of by packaging small quantities of it inside small household appliances, such as toasters and sewing machines, which were then exported out of the Soviet Union. The contorted logic of such a decision by a world power defies common sense; but not, apparently, for the ISIS buyers who were ordered to purchase such household items from the Soviets in order to obtain the Red Mercury hidden inside them, the "doomsday" material that could potentially change the face of terrorism throughout the world. This picture shows some believers contemplating two sewing machines that ostensibly contain Red Mercury. Such Red Mercury-containing appliances are still being sold today for hundreds of thousands of dollars to the credulous on the international terrorism markets.
C.J. Chivers, writing in the New York Times, gave an excellent, scientifically lay-level introduction to ISIS's present interest in obtaining supplies of the "doomsday material of dreams" called Red Mercury. I would like to reproduce his introduction to this topic before delving more deeply into the science.
The hunt for the ultimate weapon began in January 2014, when Abu Omar, a smuggler who fills shopping lists for the Islamic State, met a jihadist commander in Tal Abyad, a Syrian town near the Turkish border. The Islamic State had raised its black flag over Tal Abyad several days before, and the commander, a former cigarette vendor known as Timsah, Arabic for ‘‘crocodile,’’ was the area’s new security chief. The Crocodile had an order to place, which he said he had received from his bosses in Mosul, a city in northwestern Iraq that the Islamic State would later overrun.
Abu Omar, a Syrian whose wispy beard hinted at his jihadist sympathies, was young, wiry and adaptive. Since war erupted in Syria in 2011, he had taken many noms de guerre — including Abu Omar — and found a niche for himself as a freelance informant and trader for hire in the extremist underground. By the time he met the Crocodile, he said, he had become a valuable link in the Islamic State’s local supply chain. Working from Sanliurfa, a Turkish city north of the group’s operational hub in Raqqa, Syria, he purchased and delivered many of the common items the martial statelet required: flak jackets, walkie-talkies, mobile phones, medical instruments, satellite antennas, SIM cards and the like. Once, he said, he rounded up 1,500 silver rings with flat faces upon which the world’s most prominent terrorist organization could stamp its logo. Another time, a French jihadist hired him to find a Turkish domestic cat; Syrian cats, it seemed, were not the friendly sort.
War materiel or fancy; business was business. The Islamic State had needs, it paid to have them met and moving goods across the border was not especially risky. The smugglers used the same well-established routes by which they had helped foreign fighters reach Syria for at least three years. Turkish border authorities did not have to be eluded, Abu Omar said. They had been co-opted. ‘‘It is easy,’’ he boasted. ‘‘We bought the soldiers.’’ This time, however, the Crocodile had an unusual request: The Islamic State, he said, was shopping for red mercury.
Abu Omar knew what this meant. Red mercury — reputedly precious and rare, exceptionally dangerous and exorbitantly expensive, its properties unmatched by any compound known to science — was the stuff of doomsday daydreams. According to well-traveled tales of its potency, when detonated in combination with conventional high explosives, red mercury could create the city-flattening blast of a nuclear bomb. In another application, a famous nuclear scientist once suggested it could be used as a component in a neutron bomb small enough to fit in a sandwich-size paper bag.
Abu Omar understood the implications. The Islamic State was seeking a weapon that could do more than strike fear in its enemies. It sought a weapon that could kill its enemies wholesale, instantly changing the character of the war. Imagine a mushroom cloud rising over the fronts of Syria and Iraq. Imagine the jihadists’ foes scattered and ruined, the caliphate expanding and secure.
Imagine the price the Islamic State would pay.
Abu Omar thought he might have a lead. He had a cousin in Syria who told him about red mercury that other jihadists had seized from a corrupt rebel group. Maybe he could arrange a sale. And so soon Abu Omar set out, off for the front lines outside Latakia, a Syrian government stronghold, in pursuit of the gullible man’s shortcut to a nuclear bomb.
[C.J. Chivers, New York Times]
The Science Behind Red Mercury's "Awesome Powers"
There is nothing "magic" or "doomsday" about Red Mercury. The red-orange appearance of this material is simply due to its compounding with iodine as mercury iodide. None of the isotopes of the element mercury is fissile. This means that they cannot be "split" by absorbing neutrons the way uranium-235 or plutonium-239 can. It is the splitting of a heavy isotope into two lighter isotopes that releases the extraordinary yield of energy that is the basis of the fission nuclear reaction, exemplified by the two nuclear bombs dropped on Japan in August 1945: Little Boy on Hiroshima and Fat Man on Nagasaki. Nor can Red Mercury have any role in nuclear fusion, since nuclear fusion requires the nuclear coalescence, under very high temperature and pressure, of two very light isotopes, such as those of hydrogen (deuterium and tritium) or helium, which are some 50 to 200 times lighter than mercury.
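The scale of the energy released by fission can be checked with a little arithmetic. A minimal Python sketch, using the standard textbook figure of roughly 200 MeV released per uranium-235 fission:

```python
# Energy released by complete fission of 1 kg of uranium-235.
# The 200 MeV per fission is an approximate, standard textbook figure.
AVOGADRO = 6.022e23              # atoms per mole
MOLAR_MASS_U235 = 235.04         # grams per mole
MEV_PER_FISSION = 200.0          # approximate energy per U-235 fission
JOULES_PER_MEV = 1.602e-13
JOULES_PER_KILOTON_TNT = 4.184e12

atoms_per_kg = 1000.0 / MOLAR_MASS_U235 * AVOGADRO
energy_joules = atoms_per_kg * MEV_PER_FISSION * JOULES_PER_MEV

print(f"Energy from 1 kg of U-235: {energy_joules:.2e} J")
print(f"Equivalent: {energy_joules / JOULES_PER_KILOTON_TNT:.0f} kilotons of TNT")
```

The result, on the order of 10^14 joules per kilogram of fully fissioned uranium-235, is roughly the yield of a 20-kiloton bomb, which is why fission dwarfs any chemical reaction.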
A brief explanation of the operating principle of thermonuclear (fusion) weapons is useful to explain the contorted pseudoscience behind Red Mercury.
In the cold-war days, when the U.S. believed that there was a high likelihood that its thermonuclear weapons arsenal would in fact be used, the light isotopes used as the fusion fuel in its thermonuclear bombs and warheads were deuterium (hydrogen with one additional neutron) and tritium (hydrogen with two additional neutrons). A primary fission stage (uranium-235 or plutonium-239) of a thermonuclear weapon, when triggered, would create the enormously high temperatures and pressures required to set off the secondary stage of the weapon, the fusion stage. Unfortunately, tritium is a radioisotope with a half-life of about 12.3 years, so every 12 or so years the potential explosive power of the U.S. thermonuclear weapons arsenal was diminished by about 50%, and the weapons had to be recharged with fresh tritium. This was not a simple procedure, considering the thousands of nuclear bombs and warheads spread out over the U.S. that required this attention, as well as the slow production rate of fresh tritium in the Savannah River Site reactors.
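The dwindling of a tritium charge is simple exponential decay. A short Python sketch, using the 12.3-year half-life:

```python
# Exponential decay of tritium (half-life ~= 12.3 years): the fraction of
# an original tritium charge remaining after a given number of years.
TRITIUM_HALF_LIFE_YEARS = 12.3

def fraction_remaining(years: float) -> float:
    """Fraction of the initial tritium still present after `years`."""
    return 0.5 ** (years / TRITIUM_HALF_LIFE_YEARS)

for t in (5.0, 12.3, 25.0):
    print(f"after {t:5.1f} years: {fraction_remaining(t) * 100:.1f}% remains")
```

After a single half-life exactly 50% remains, which is why warheads had to be topped up with fresh tritium on a roughly 12-year cycle.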
Towards the end of the cold war, both the Soviet Union and the U.S. developed a solution to this problem: instead of deploying deuterium and tritium as the fusion isotopes in thermonuclear bombs and warheads, the compound lithium-6-deuteride was used. Lithium-6 has the property of avidly absorbing neutrons generated by the primary fission stage of a thermonuclear weapon. After doing so, lithium-6 disintegrates, producing the isotope tritium. Therefore, the supply of deuterium and tritium for the subsequent fusion reaction became available immediately, but only when needed, at the moment the initial fission stage of the weapon was triggered. This was a big step forward in the design of thermonuclear weapons, and there is some credible evidence that in the Soviet Union's defense establishment the code name for lithium-6-deuteride was Red Mercury.
Pseudoscientific theories have abounded as to the utility of Red Mercury for weapons of mass destruction. One totally baseless theory asserts that with the catalytic presence of Red Mercury, the enrichment of natural uranium to weapons-grade uranium-235 can occur at a much faster rate than with the more conventional centrifuge technology. Another theory claims that if Red Mercury is subjected to very high pressures, it releases extraordinarily large amounts of heat, and could, therefore, replace the initial fission stage of a thermonuclear weapon by triggering the secondary fusion stage of the thermonuclear reaction. Yet another theory claims that if Red Mercury is irradiated for extensive periods of time in a nuclear reactor, radiation is absorbed (according to some unknown scientific principle) by the Red Mercury molecules. Then, if subjected to mechanical shockwaves, the absorbed energy in the Red Mercury is released and can either function as an independent weapon, or can replace the initial fission stage of a thermonuclear weapon, resulting in a physically much more compact and safe device prior to its detonation. However, there is absolutely no scientific support for Red Mercury having any of the properties described above. On a slightly humorous point, many centuries ago, mercury was the most common element used by alchemists to create "new" elements.
The only evidence of world powers being involved in any way with Red Mercury is that, possibly toward the end of the cold war, the Soviet Union coined "Red Mercury" as a code word for lithium-6-deuteride, the fusion fuel of modern thermonuclear weapons.
There is absolutely no scientific support for Red Mercury having any role either in the accelerated enrichment of uranium or in the construction of thermonuclear weapons. Consequently, Red Mercury is probably one of the more audacious international hoaxes perpetrated to date. ISIS, nevertheless, has believed this hoax and has attempted to scare the rest of the world into believing that it has developed, or is in the process of developing, a "doomsday weapon of mass destruction" based on the totally fabricated properties of Red Mercury. Perhaps the idea of Red Mercury and its ostensibly "awesome" properties is a modern version of alchemy! But in a sense the world should be grateful that, owing to its continuing naive belief in the Red Mercury hoax, ISIS has squandered enormous amounts of its financial resources buying up supplies of Red Mercury from international con men.
Parenthetically, one wonders where ISIS has dumped all the toasters, sewing machines, and other domestic appliances it spent millions of dollars purchasing, once the Red Mercury that supposedly was "hidden" inside them had been removed. Well, see below...
U.S. GPS satellite, orbiting earth at an altitude of 12,550 miles (about 5% of the distance to the moon). Solar panels provide power. The satellite also contains rocket motors to enable fine repositioning, as well as nuclear detonation detection "listening" devices so the U.S. military can immediately and precisely locate a nuclear detonation anywhere on earth, useful for test-ban treaty compliance.
Trajectories of the 31 active U.S. GPS satellites, each circling the earth twice every 24 hours. At least 6 satellites are visible at any time from any position on the earth's surface; at the head of the arrow, for example, 9 satellites are visible. Only 3 visible satellites are theoretically required to determine location (4 in practice, since the receiver's clock must also be corrected), so the additional visible satellites create strong redundancy and improved accuracy.
Global Positioning Systems (GPS) have been a familiar consumer technology for the past decade, but the ingenuity and great complexity of GPS are often hidden behind its intuitively simple principles. This article describes how GPS in the U.S. evolved from the early OMEGA Navigation System of the 1970s into the highly sophisticated 24-satellite GPS constellation, whose first satellite was launched in 1989. Current international GPS functionality is based on a combination of U.S. and Russian Federation satellites, resulting in a potential localization accuracy of approximately 6 meters (about 20 feet). For military applications such as missile guidance, far greater accuracy is obtainable. Although the most visible use of GPS today is in the consumer market, GPS began as a military technology and remains in extensive use by the U.S. military.
In the 1970s, the Omega Navigation System developed by the U.S. became the progenitor of today's GPS technology. Omega was based on receivers comparing the times of arrival of signal transmissions from pairs of ground-based transmitting stations and calculating the position of the receiver (aboard an aircraft, for example) by the triangulation principle. Omega became the first worldwide radio navigation system; but, as its technology evolved into three dimensions using space-based satellites, access became restricted to military use.
The first U.S. GPS satellite was launched in 1989, and the 24th, and last, in 1994. Initially, the civilian sector did have limited access to 3D GPS technology, but the signals available were intentionally degraded by the Defense Department (a policy known as Selective Availability) to prevent unfriendly powers from using the fully accurate GPS system for weapons guidance and other military applications.
In 1996, recognizing the importance of GPS for civilian air navigation, President Clinton issued a policy directive declaring GPS to be a “dual-use system” and established an Interagency GPS Executive Board to manage it as a national asset.
With the removal of the intentional degradation of the GPS signals, the accuracy of civilian-accessible GPS improved from about 300 feet to about 60 feet. In 1998, with aviation civil navigation in mind, Vice President Gore announced plans to upgrade the GPS system with two additional civilian channels for enhanced accuracy and reliability, and in 2000 the U.S. Congress authorized and funded the effort.
Current Status of GPS
The upgraded U.S. GPS system, consisting of 24 orbiting satellites, is divided into six orbital planes with four satellites in each plane. An orbital plane is like a circular disk, where the edge of the disk defines the path of the satellite. The centers of all these orbital planes coincide with the center of the Earth (as shown in the second picture above, at a moment when 9 satellites are visible at the point on the earth's surface marked by the head of the arrow).

The orbits were designed so that at least six satellites are always visible within line-of-sight from anywhere on the earth's surface. Since only three satellites are theoretically needed to establish a receiver's three-dimensional location (four in practice, to correct the receiver's clock), this provides considerable redundancy to compensate for transmissions blocked by buildings, malfunctioning satellites, etc. Orbiting at an altitude of 12,550 miles (about 5% of the distance from the earth to the moon), each satellite makes two complete orbits every 24 hours, covering the same projected ground track every day, and is above the horizon at any given location for several hours each day. This makes positioning a much faster process.

As of March 2008, there have in fact been 33 GPS satellites in orbit: 31 active and two retired satellites maintained as spares. In this new arrangement, about ten rather than six satellites are visible from any location on the ground at any moment, providing additional redundancy and increased accuracy of the GPS system.
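The "two orbits per day" figure follows directly from Kepler's third law and the 12,550-mile altitude quoted above. A quick Python check:

```python
import math

# Kepler's third law: orbital period T = 2*pi*sqrt(a^3 / mu) for a circular
# orbit of radius a about the Earth. Altitude of 12,550 miles ~= 20,200 km.
MU_EARTH = 3.986e14          # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_M = 6.371e6     # mean Earth radius, m
ALTITUDE_M = 20_200e3        # GPS altitude above the surface, m

semi_major_axis = EARTH_RADIUS_M + ALTITUDE_M
period_s = 2 * math.pi * math.sqrt(semi_major_axis**3 / MU_EARTH)

print(f"Orbital period: {period_s / 3600:.2f} hours")
```

The computed period is just under 12 hours (11 hours 58 minutes, half a sidereal day), so each satellite indeed completes two orbits per day and repeats its ground track daily.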
The flight-paths of GPS satellites are tracked by dedicated U.S. Air Force monitoring stations in Hawaii, Kwajalein, Ascension Island, Diego Garcia, Colorado Springs, Colorado, and Cape Canaveral, along with shared monitoring stations operated in England, Argentina, Ecuador, Bahrain, Australia and Washington DC.
Each satellite is contacted at regular intervals with orbital and software updates using ground antennas. These updates synchronize atomic clocks on board the satellites and correct the satellites’ orbits when necessary. Atomic clocks, the most accurate timing technology in existence, provide the very high timing accuracy needed to synchronize the transmission of GPS radio signals from different satellites and to account for the so-called “Doppler shift,” a discrepancy in timing that occurs when a satellite transmits its signal while moving toward or away from the receiver (the Doppler shift is the familiar change in pitch of ambulance sirens as they move toward and away from a listener). Additionally, changes in atmospheric conditions and air humidity affect the velocity of the GPS signals as they pass through the earth’s atmosphere. Differences in receiver altitude introduce further discrepancies in timing due to the signals passing through less thickness of atmosphere at higher receiver elevations, but this effect is more relevant to aircraft navigation systems than to land-based navigation systems.
GPS localization accuracy can also be affected when the radio signals reflect off surrounding buildings, canyon walls, hard ground, etc., owing to reflective phase shifts in the radio-frequency signals. Finally, the GPS satellites' atomic clocks also suffer errors predicted by Einstein's special and general theories of relativity: a satellite clock runs slightly slow because of the satellite's orbital speed, and slightly fast because gravity is weaker at the satellite's altitude than on the ground, so satellites at different speeds and altitudes show slightly different clock rates. With all the above corrections carefully accounted for, the accuracy of civilian GPS systems today is about 6 meters (about 20 ft.), although for strictly military uses such as missile guidance the accuracy is substantially better.
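The size of these relativistic effects can be estimated from first principles. The following Python sketch combines the special-relativistic slowing due to orbital speed with the general-relativistic speeding-up due to weaker gravity at altitude; the well-known result is a net drift of roughly +38 microseconds per day which, left uncorrected, would translate into kilometers of ranging error daily:

```python
import math

# Relativistic drift of a GPS satellite's clock relative to a ground clock.
MU = 3.986e14                 # GM of Earth, m^3/s^2
C = 2.998e8                   # speed of light, m/s
R_EARTH = 6.371e6             # mean Earth radius, m
A_ORBIT = R_EARTH + 20_200e3  # GPS orbital radius, m
SECONDS_PER_DAY = 86_400

v = math.sqrt(MU / A_ORBIT)                        # circular orbital speed
special = -(v**2) / (2 * C**2)                     # speed: clock runs slow
general = MU * (1/R_EARTH - 1/A_ORBIT) / C**2      # altitude: clock runs fast

net_us_per_day = (special + general) * SECONDS_PER_DAY * 1e6
range_error_km = abs(net_us_per_day) * 1e-6 * C / 1000

print(f"Net clock drift: {net_us_per_day:+.1f} microseconds per day")
print(f"Uncorrected ranging error: {range_error_km:.1f} km per day")
```

The gravitational effect (about +45 microseconds per day) outweighs the speed effect (about -7), so GPS clocks are deliberately tuned to run slow before launch.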
Inclusion of WiFi Transmissions in GPS Location
More recently, in addition to receiving GPS signals from orbiting satellites, some GPS receivers can also detect WiFi signals originating in their close vicinity. The locations of these WiFi transmitters are looked up in a publicly accessible WiFi database, and, by the process of triangulation, the WiFi-determined position of the GPS receiver is found. This approach works better when the receiver happens to be indoors, in a tunnel, or among very tall buildings, since GPS satellite signals degrade when passing through masonry or concrete.
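The triangulation (more precisely, trilateration) used by both GPS and WiFi positioning reduces to simple algebra in two dimensions: the receiver lies at the intersection of circles centered on the transmitters. A minimal Python illustration, with made-up coordinates:

```python
import math

# 2-D trilateration: locate a receiver from its distances to three
# transmitters at known positions. Subtracting the circle equations
# (x - xi)^2 + (y - yi)^2 = di^2 pairwise yields two linear equations.
def trilaterate(p1, p2, p3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver secretly at (3, 4); distances measured to three transmitters.
stations = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist((3, 4), s) for s in stations]
print(trilaterate(*stations, *dists))   # approximately (3.0, 4.0)
```

Real GPS does the same thing in three dimensions and with a fourth satellite to solve for the receiver's clock error, but the geometric idea is identical.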
Russian Federation’s GLONASS GPS System
In 1995, the Russian Federation’s satellite navigation system became operational. The system has the acronym GLONASS, which stands for Globalnaya Navigatsionnaya Sputnikovaya Sistema, loosely translated into English as Global Navigation Satellite System. GLONASS was initially operated by the Russian Federal Space Agency. As with the declassification of the U.S. GPS system, President Vladimir Putin in 2007 ordered all military restrictions removed from the GLONASS system so that it could be used by the civilian sector as well as by the military. At present, the accuracy of the Russian Federation’s GLONASS system is about 7-8 meters (about 23-26 feet), similar to the current fully unrestricted civilian accuracy of the U.S. GPS system.
Sweden’s SWEPOS GPS System
In 2011, Sweden’s SWEPOS network of satellite reference stations, which provides data for real-time positioning, became the first known foreign network to use the Russian Federation’s GLONASS system. In Sweden, the accuracy of GLONASS as integrated into the SWEPOS system is about 1 meter (about 3 feet).
An obvious question is how closely GLONASS and the U.S. GPS agree with each other. Interestingly enough, they don’t! This is because, as the absolute origin of its coordinate system, GLONASS uses the North Pole’s global position as measured in 1990 by satellite interferometry (the PZ-90 datum), while the U.S. system uses the North Pole’s global position as measured in 1984 (the WGS 84 datum). These two measurements differ by approximately 40 cm (about 1.3 feet).
It is interesting to note that, as part of the “Maps” application, Apple’s iPhones and iPads use both the U.S. GPS and the GLONASS satellites for location.
Military Uses of GPS
The list below shows the many applications of GPS by the U.S. military.
Navigation: Soldiers use GPS to find objectives, even in the dark or in unfamiliar territory, and to coordinate troop and supply movement. Commander ranks use the Commanders Digital Assistant, while lower ranks use the Soldier Digital Assistant.
Target tracking: Various weapons used by the U.S. military use GPS to track possible ground and air targets before flagging them as hostile. These weapon systems pass target coordinates to precision-guided munitions to allow them to engage targets accurately. Military aircraft, particularly in air-to-ground roles, also use GPS to locate targets.
Missile and projectile guidance: GPS allows accurate targeting of various military weapons including ICBMs, cruise missiles, precision-guided munitions and artillery projectiles. Embedded GPS receivers, able to withstand accelerations of up to 12,000g have been developed for use in 155-millimeter howitzers.
Reconnaissance: Patrol movement can be managed more closely using GPS.
Nuclear detonation location: U.S. deployed GPS satellites each carry a set of nuclear detonation detectors, consisting of an optical sensor, an x-ray sensor, a dosimeter, and an electromagnetic pulse sensor, that form a major portion of the United States Nuclear Detonation Detection System.
As mentioned earlier, the U.S. military's implementation of GPS technology, although it utilizes the same satellites as civilian GPS, processes the satellites' signals differently, resulting in roughly 10-50 times greater precision.
Global Positioning Systems have been a familiar consumer technology for the past decade or more. The U.S. GPS system evolved from the early OMEGA Navigation System of the 1970s into the highly sophisticated 24-satellite GPS constellation, whose first satellite was launched in 1989. Following the declassification of the U.S. and Russian Federation's GPS systems by President Clinton and President Putin, and the increased technological collaboration between the two countries, current international civilian GPS functionality delivers a potential localization accuracy of approximately 6 meters (about 20 feet). The civilian Swedish SWEPOS system of ground-based reference stations, integrated with the Russian Federation's GLONASS system, produces an impressive localization accuracy of about 1 meter (about 3 feet). Military uses of GPS date back to the system's inception and include guidance of smart bombs, howitzer shells, and missiles, target tracking, and navigation by soldiers in unfamiliar territory or at night. Military GPS receivers process the GPS satellite signals differently than civilian receivers and improve the accuracy of GPS navigation by factors of 10-50.
THE JOINT COMPREHENSIVE PLAN OF ACTION (JCPOA) NUCLEAR AGREEMENT WITH IRAN: WHAT IRAN AND THE U.S. HAVE AGREED UPON AND THE SCIENCE BEHIND IT (as it was in February 2015)
Gas centrifuge "farm" for separating uranium-238 from uranium-235. The vertical tubes are the centrifuges, having internal elements that spin very fast along the vertical axis. Uranium Hexafluoride gas (with natural uranium) is passed into the centrifuges. Uranium-235 and uranium-238 are then separated by virtue of the slightly different centripetal forces acting on the two slightly different nuclear masses. Uranium-238 becomes concentrated at a larger radius within the centrifuge than uranium-235. The two uranium isotopes are then separately extracted from the centrifuges.
Iran has always claimed that it is enriching uranium as a necessary step toward providing various civilian services, such as radioisotopes for nuclear medicine, a civilian nuclear power program, and a civilian nuclear research program. However, this claim has clashed with the widespread international belief that it is simply a cover for a much more nefarious goal: joining, uninvited, the Nuclear Club of nations possessing nuclear weapons and, more specifically, "wiping Israel off the face of the earth".
The recently agreed upon Joint Comprehensive Plan of Action (JCPOA) between Iran and the U.S., together with five of the other principal nuclear nations, has caused a kerfuffle in the U.S. Congress and strong condemnation by Israel. Israel's Prime Minister, Bibi Netanyahu, is highly skeptical about the value of the JCPOA agreement and still insists that the U.N. “draw a red line” beyond which Iran’s nuclear development should not be tolerated by the international community and, if violated, might result in preemptive strikes by Israel against Iran's uranium enrichment and plutonium conversion facilities--as has already occurred on two previous occasions. Mr. Netanyahu has also warned that as early as this summer, despite the JCPOA agreement, Iran’s uranium enrichment and nuclear warhead fabrication facilities are expected to be moved to dispersed underground locations, making it far more difficult, if not impossible, to achieve successful verification, therefore rendering the JCPOA agreement largely ineffective.
Since uranium enrichment by Iran and its progress towards nuclear weapon acquisition has produced a substantial amount of public fear in past years, I will try to clarify some of the scientific facts behind these issues.
Summary of the JCPOA Agreement Between the U.S. and Allied Nations and Iran
* The primary uranium-235 enrichment site in Iran is Natanz (see map above). Under the JCPOA agreement, Natanz will be permitted to operate 5,060 uranium-enrichment centrifuges, about 25% of the roughly 20,000 centrifuges Iran currently operates; moreover, these will be older, much less efficient models that enrich uranium far more slowly than the current ones.
* Iran's stockpile of enriched uranium will be reduced by 98%, to 300 kg (660 lb), for 15 years, and will not be allowed to exceed an enrichment level of 3.67% (see below for further discussion of enrichment).
* The Arak reactor (see map above), which according to its original design would have been a source of fissile plutonium-239 for manufacturing at least one nuclear weapon per year, will be transformed to produce far less plutonium than before and of a poorer quality. Fundamentally, this would limit plutonium-239 production and make it virtually impossible to fabricate any plutonium-239 based nuclear weapons.
* All spent fuel from the Arak reactor that could potentially be reprocessed to recover plutonium-239 will be sent out of the country under a rigorous IAEA inspection protocol. In fact, Iran will ship out all spent fuel from all of its power and research reactors, preventing the accumulation of any spent fuel from which plutonium-239 could be extracted, and will not engage in any activity associated with the reprocessing of spent fuel to obtain plutonium-239, even for research purposes.
U.S. and the 5 Allied Nations' Obligations
* On the part of the U.S. and the five other allied nations in the JCPOA agreement, the commitment given to Iran in response to Iran's acceptance of the above conditions is to end the severe sanctions that were imposed on Iran in 2010. Quoting a memorandum from the U.S. State Department, "These sanctions were designed: (1) to block the transfer of weapons, components, technology, and dual-use items to Iran’s prohibited nuclear and missile programs; (2) to target select sectors of the Iranian economy relevant to its proliferation activities; and (3) to induce Iran to engage constructively, through discussions with the United States, China, France, Germany, the United Kingdom, and Russia in the “E3+3 process,” to fulfill its nonproliferation obligations".
Specific Technology of Uranium and Plutonium Processing for Nuclear Reactor Fuel and Nuclear Weapons Manufacture
Uranium mined from the ground contains various uranium isotopes, including uranium-235, the isotope needed for the manufacture of fission nuclear weapons and nuclear reactor fuel. The "raw" uranium is first converted, through a series of chemical steps involving hydrofluoric acid and fluorine gas, into uranium hexafluoride gas. The uranium hexafluoride gas is fed into centrifuges that spin the gas at extremely high speed (see the second picture above). The heavier isotope, uranium-238, is forced toward the outer wall of the centrifuge, where it is extracted, while uranium-235 concentrates near the central axis of the centrifuge, where it is separately extracted. However, this approach to uranium-235 separation is notoriously slow, so to offset this problem a very large number of advanced-design centrifuges operating simultaneously is required (see the third picture above). Limiting the number and efficiency of Iran's centrifuges is the first step the JCPOA agreement takes to prevent Iran from enriching uranium to the level where fission nuclear weapons could be manufactured. The next step is limiting the enrichment level of Iran's entire stockpile of uranium to prevent its further enrichment to weapons grade. The final step is to modify the cores of those nuclear reactors capable of rapidly producing high-quality plutonium-239, which can also be made into fission nuclear weapons, and to send all spent reactor fuel rods out of the country to prevent Iran from extracting plutonium-239 from the burned fuel.
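The reason so many centrifuges are needed is that each stage improves the uranium-235 abundance ratio only by a small "separation factor", so stages must be chained into a cascade. A simplified Python sketch of this idea; the per-stage factor alpha = 1.3 is an assumed, illustrative value, not a figure for any real machine:

```python
import math

# Idealized enrichment cascade: each stage multiplies the uranium-235
# abundance ratio R = x / (1 - x) by a separation factor alpha, so the
# number of stages needed grows with the logarithm of the target ratio.
def stages_needed(x_feed: float, x_product: float, alpha: float = 1.3) -> int:
    """Stages to enrich from fraction x_feed to x_product (idealized)."""
    r_feed = x_feed / (1 - x_feed)
    r_product = x_product / (1 - x_product)
    return math.ceil(math.log(r_product / r_feed) / math.log(alpha))

print("0.7% -> 3.67% (reactor limit):", stages_needed(0.007, 0.0367), "stages")
print("0.7% -> 90%  (weapons grade):", stages_needed(0.007, 0.90), "stages")
```

This toy model ignores the parallel stages needed for throughput and the "stripping" section of a real cascade, but it captures why enrichment plants chain thousands of machines together, and why a stockpile already enriched to a few percent is much closer to weapons grade, in cascade terms, than raw ore.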
Principles of Uranium Enrichment and Uranium-235 Fission Rates for Nuclear Reactors vs. Fission Weapons
"Raw" or “natural” uranium, as it comes out of the ground, consists of about 0.7% uranium-235 and about 99% uranium-238 (there are a few unimportant additional isotopes of uranium present that account for the remaining 0.3%). Although the uranium-235 and uranium-238 uranium isotopes are chemically identical, uranium-235 is "fissile", whereas uranium-238 is not. Uranium-235 is the only naturally occurring fissile isotope that is suited for the manufacture of nuclear weapons and nuclear reactor fuel.
A fissile isotope is one whose nucleus can be induced to break apart, or "fission", following bombardment by nuclear particles called neutrons. In the case of uranium-235, the nucleus absorbs a neutron that makes it unstable and causes it to break apart. In doing so, it emits 2-3 outgoing neutrons, accompanied by a tremendous amount of energy, mainly as heat and light, but also as ionizing radiation (the kind of radiation that is potentially harmful to humans). Each of the 2-3 outgoing neutrons can then strike another uranium-235 nucleus, producing another fission that once again emits 2-3 neutrons and more energy. Therefore, the uranium-235 fission rate grows "exponentially" with time and, if not controlled, continues until all the uranium-235 has been used up.
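The exponential growth of an uncontrolled chain reaction is easy to illustrate: if each fission leads on average to k further fissions, the fission rate multiplies by k every neutron "generation". A short Python sketch:

```python
# Growth of the fission rate per neutron "generation", where k is the
# average number of neutrons from each fission that go on to cause
# another fission (k = 1: steady state; k = 2: uncontrolled growth).
def population(k: float, generations: int, n0: float = 1.0) -> float:
    """Fissions in the final generation, starting from n0 initial neutrons."""
    return n0 * k ** generations

for k in (1.0, 2.0):
    print(f"k = {k}: after 80 generations, "
          f"{population(k, 80):.3g} fissions per initial neutron")
```

With k = 2, eighty generations already amount to about 10^24 fissions, on the order of the number of atoms in a kilogram of uranium-235; and since each generation in a weapon lasts only a tiny fraction of a microsecond, essentially the entire energy release happens almost instantaneously.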
Uranium-235 for Nuclear Reactor Fuel
Since each uranium-235 fission requires one neutron to be absorbed, i.e., removed from the neutron population, each fission event increases the neutron population by 1-2 neutrons (the 2-3 neutrons produced in each fission, minus the one neutron absorbed to initiate it). If each fission event on average absorbed one neutron and emitted only one neutron that went on to initiate the next fission, the neutron population would remain roughly constant over time; this is the safe, "steady-state" condition under which nuclear reactors typically operate and produce energy. But to ensure that steady-state condition, on average all but one of the outgoing neutrons in each fission event must be "blocked" from causing further fissions. This blocking occurs to some extent naturally, due to the presence of uranium-238, which absorbs neutrons but does not undergo fission. But it is also achieved in a more controlled way using "control rods", which can be inserted into the uranium fuel to various depths. Control rods contain the isotope boron-10, which very strongly absorbs neutrons. The fission rate in the uranium fuel is therefore "tuned" by the precise positioning of the control rods in the core to maintain a safe and constant fission rate. If, on the other hand, the fission rate were allowed to increase in a nuclear reactor's core, dangerous overheating could result, with the possibility of more serious consequences such as loss of core coolant followed by core meltdown. (The meltdowns at the Japanese Fukushima Daiichi reactors in 2011 were of this general kind, although there the cause was loss of cooling after the reactors had already shut down, when residual decay heat could no longer be removed.)
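The precision required of the control rods can be appreciated from how sensitively the fission rate depends on the average number k of neutrons per fission that survive to cause another fission. A small Python illustration:

```python
# Sensitivity of a reactor core to the multiplication factor k: even a
# small departure from k = 1 compounds over many neutron generations,
# which is why control-rod position must hold k extremely close to 1.
for k in (0.99, 1.00, 1.01):
    print(f"k = {k:.2f}: relative fission rate after 1000 generations "
          f"= {k ** 1000:.3g}")
```

A 1% deficit makes the reaction die away by a factor of more than 10,000 over a thousand generations, while a 1% excess grows by the same factor, so a reactor's control system continuously nudges the rods to hold the fission rate steady.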
Uranium-235 for Nuclear Fission Weapons
In contrast, to manufacture uranium-235 fission weapons, we want the maximum amount of fission energy produced in the shortest possible time, which means we need to let the fission rate increase unchecked until all the uranium-235 has been used up. This means that following "triggering" of the fission process, the uranium-235 fission rate increases at a higher and higher rate, i.e., "exponentially". A nuclear fission weapon naturally contains no control rods, but their absence alone is insufficient to ensure that the growth in fission rate is as rapid as possible. Because, as already mentioned, natural uranium is mainly composed of neutron-absorbing--but non-fissile--uranium-238, problems are encountered in trying to utilize uranium fission for nuclear weapons. The large amount of uranium-238 significantly "eats up" the fission neutrons produced by the uranium-235, down-regulating the uranium-235 fission rate just as control rods do in a nuclear reactor core. So the presence of uranium-238 is highly undesirable if the uranium is to be used for a nuclear weapon, where the fission rate must increase as rapidly as possible. Therefore, for the manufacture of nuclear weapons, the uranium-235 enrichment level is increased to at least 20%, but more commonly to 90% or higher, to minimize the deleterious (sometimes referred to as "poisoning") effect of uranium-238.
[For similar reasons, although raw or natural uranium (with a uranium-235 fraction of about 0.7%) could, in principle, be used as a cheap and very safe fuel for nuclear reactors, much better fuel-to-energy conversion efficiency is obtained with slightly higher enrichment levels of 0.9-2%.]
The Politics of Uranium Enrichment
From the perspective of the recent JCPOA agreement, if Iran were permitted to maintain its 660 lb stockpile of uranium at an enrichment level of 20% or lower--as it was under a previous deal proposed by Russia and initially accepted by Iran about 10 years ago--there would remain a theoretical possibility that Iran could still manufacture nuclear fission weapons; more realistically, it could use its 5,060 permitted (but outdated) centrifuges to moderately elevate the enrichment of the stockpiled uranium into a range where nuclear fission weapons could more easily be manufactured. To prevent this from occurring, under the JCPOA agreement Iran's stockpile of uranium is not permitted to exceed an enrichment level of 3.67%, from which, as already mentioned, it would be impossible to manufacture nuclear fission weapons.
However, for the operation of nuclear reactors that can produce plutonium-239 from uranium-238 (plutonium-239 is an alternative fissile isotope suitable for nuclear fission weapon manufacture), a 3-4% uranium enrichment level is ideal, maximizing the efficiency and, therefore, the speed of uranium-to-plutonium conversion. That is why one component of the JCPOA agreement consists of disabling Iran's nuclear reactors so that they cannot produce plutonium-239.
Iran's uranium enrichment issue, together with the "hair trigger" of Israel's possible decision to preempt an Iranian nuclear attack by destroying all of Iran's nuclear capabilities, is probably the most dangerous military crisis the U.S. has faced since the Cuban missile crisis. We can only hope that the recent diplomatic progress between the U.S. and Iran will lead to a nuclear stand-down between Iran and the Western powers and Israel, giving the world another decade or two of nuclear peace.
Uranium-235 can be enriched to higher percentages than the 0.7% level in naturally occurring uranium. Up to about 20% enrichment, the uranium can only be used as fuel for nuclear fission reactors, but at enrichment levels greater than 20%, the possibility exists that the uranium can be used to manufacture nuclear fission weapons. Much lower uranium enrichment levels of 3-4%--permitted under the current JCPOA agreement--though precluding the direct manufacture of nuclear fission weapons, could, if used as nuclear reactor fuel, in principle lead to the rapid production of plutonium-239, a fissile isotope of plutonium from which nuclear fission weapons can be manufactured just as they can from uranium-235. Under the JCPOA agreement, the "export" of used nuclear fuel rods out of Iran should, in principle, prevent this from happening.
The concepts of 1) a steady-state fission rate, maintained through the use of control rods in nuclear reactors, and 2) an exponentially increasing fission rate, desirable in nuclear fission weapons, are both determined by the fate of the additional neutrons produced in each uranium-235 fission, over and above the one outgoing neutron that is necessary to sustain a steady-state fission rate. If those "extra" neutrons are absorbed either by control rods or by the uranium-238 present in the uranium, a steady-state fission rate is achieved, which is the operating mode of nuclear reactors. However, if the extra neutrons are not removed and become available to cause further uranium-235 fissions, an exponentially increasing fission rate results which, if allowed to proceed unhindered, produces a nuclear fission explosion.
The 3.67% enrichment level of Iran's stockpile of uranium, permitted under the JCPOA agreement, although too low to enable nuclear fission weapon manufacture (and precluded from being further enriched by the restrictions on the number and type of centrifuges Iran is permitted to retain), can nevertheless be manufactured into very efficient nuclear reactor fuel. Such efficiently fueled nuclear reactors can potentially produce plutonium-239 rapidly, from which nuclear fission weapons can also be manufactured. However, one component of the JCPOA agreement consists of disabling the nuclear reactors that are capable of the rapid production of high-quality plutonium-239 and, in addition, requiring the export of all burnt-up fuel to prevent the covert extraction of plutonium-239 from the uranium fuel residue.
Many countries, in particular Israel, consider the JCPOA agreement much too favorable to Iran, with too many loopholes permitting Iran's continued development of nuclear weapons; but, as President Obama has stated, "It was either this deal or no deal". The world can only hope and pray (if that is our disposition) that this agreement will be honored by both sides, which would greatly alleviate the international nuclear tensions that presently exist.
Sources of natural (background) radiation include cosmic rays from space, gamma rays from radioactive soil and rocks, alpha particles produced by radon gas, and the radioactivity within our own bodies. Background radiation depends tremendously on geography, altitude, and habitat architecture. At such low levels of background radiation, the biological effects are so small that no detrimental effects in humans have ever been observed.
Radioactive decay chain of natural radioisotopes within the earth. The chain starts with uranium-238, with a half-life of 4.5 giga-years (4,500,000,000 years--about the same as the age of the earth itself), which has been present since the earth was formed. The decay chain passes through radium-226 and radon-222 (shown in yellow), which are the two radioisotopes now responsible for terrestrial background radiation and radon gas exposure.
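The half-life quoted in the caption translates into a simple rule: after each half-life, half of the remaining atoms survive. A quick sketch, using the 4.5-billion-year figures given above:

```python
def fraction_remaining(elapsed_years, half_life_years):
    """Fraction of a radioisotope's atoms that have not yet decayed."""
    return 0.5 ** (elapsed_years / half_life_years)

# Uranium-238: half-life ~4.5 billion years, roughly the age of the earth,
# so about half of the earth's original uranium-238 is still with us.
print(fraction_remaining(4.5e9, 4.5e9))  # -> 0.5
```

The same function, applied to radon-222's half-life of a few days, shows why radon must be continuously replenished from radium in the soil rather than surviving from the earth's formation.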
The world of radiation, radioactivity, and radiation risks is pretty complex and is generally left to the domain of radiological physicists (such as myself) to explain the intricacies of this field; but unfortunately, radiation topics are also frequently discussed in the press and elsewhere by those who do not understand the science, leading to a lot of misunderstandings. We will attempt to rectify this situation!
In this article we will talk about the origins of the various components of "natural" radiation--sometimes referred to as "background" radiation. Natural radiation is composed of different kinds of radiation: cosmic radiation from space, influenced by our elevation and our proximity to polar regions; natural radiation from the ground, resulting from the decay products of primordial radioisotopes that were formed long before the earth itself; a radioactive gas that oozes out of the earth and building materials and irradiates the insides of our lungs when we breathe it; and naturally occurring radioisotopes trapped in our own bodies. The last of these includes two radioisotopes, incorporated into every organ and tissue in our bodies, which irradiate us from within. We will also talk about the relative risks of natural radiation vs. other common everyday activities, such as drinking wine, flying, and just living in New York City. And finally, we will talk about consumer products that contribute a very small fraction of natural radiation; with the exception of one product--radioactivity from cigarettes--which can contribute radiation to the linings of our lungs at a level about four times that of the remaining natural radiation!
What kinds of radiation are we talking about?
During everyday life we are exposed to a variety of different types of “natural” radiations; that is in contrast to our “obligatory” exposure to man-made radiations, such as radioactive fallout from nuclear weapons testing (which largely ended in the 1960s) and exposure to medical sources of radiation, such as diagnostic radiology, nuclear medicine, and radiotherapy.
Sources of natural radiation
Terrestrial radiation emanates from inside the earth--literally under our feet--as well as from soil, sand, and stone-based building materials that surround us. It consists of primarily two types of radiation: gamma rays and alpha particles.
Gamma rays mostly originate from primordial uranium and thorium (ultra-heavy radioactive elements) that were created long before the earth formed and became trapped in our planet at the time of its creation. Subsequently, the uranium and thorium have decayed into many other radioactive elements, but in terms of the sources of terrestrial radiation, the main ones are radium-226 and the radon-222 that radium-226 decays into (see the second picture above, which shows the "decay chain" of uranium-238 into non-radioactive lead-206; the chain passes through radium-226 and radon-222). It is the subsequent radioactive decay of radium-226 in the soil and its derivative building materials that exposes us to gamma rays. Radon-222 is a gas that oozes (good scientific term!) out of anywhere that radium-226 is present, which we are then forced to breathe and which exposes us internally to its decay radiation, consisting of alpha particles. Because radon-222 constitutes such a major percentage of our annual natural exposure (55%), we will spend some additional time talking about it and treat it separately from radium-226.
Radon gas inhalation
Radon-222 is formed as one intermediate step in the normal radioactive decay chains through which the primordial isotopes thorium and uranium slowly decay into lead. The picture at the top of this article of one of the primordial uranium decay chains shows where along the decay chain radium-226 and radon-222 are situated. Radon-222 is particularly hazardous as a component of natural radiation because it decays by emitting alpha particles. Radiation dose delivered by alpha particles is deemed to be 10 times more harmful than radiation dose delivered by x-rays or gamma rays. The reason for this is explained below.
Whereas x-rays and gamma rays are termed “sparsely ionizing” radiation, alpha particles are termed “densely ionizing” radiation. X-rays and gamma rays mostly cause DNA single-strand breaks, which are randomly distributed along the DNA molecule and separated by quite long distances, relatively speaking. Since the DNA molecule is composed of two tightly bound spiral strands, it is unlikely that single-strand breaks will occur exactly opposite each other, thereby causing the DNA to break apart. Individual single-strand breaks can be quite easily repaired by specialized enzymes within time periods of about 2-4 hours.
In contrast, alpha particles also cause single-strand breaks in the DNA, but at much shorter intervals, so it is therefore much more likely that two such single-strand breaks occur opposite each other, causing the DNA molecule to break apart. These are called “double-strand breaks” and are much more difficult, if not impossible, to repair.
In addition to this, alpha particles have a very much shorter penetration depth in tissue than x-rays or gamma rays (in fact about 1,000 times shorter). Therefore, all the energy of an alpha particle is delivered to only a few layers of cells, whereas the same energy delivered by x-rays or gamma-rays is diluted over a much larger volume of many millions of cells.
A useful analogy is a watering hose used to water flowers. When the nozzle is set to “jet”, the water applies much more pressure as it hits the flowers than when it is set to “spray”—even though in both cases the volume of water being delivered (equivalent to the energies of the x-rays, gamma rays, and alpha particles) may be the same. This occurs because the water is concentrated into a small area in the case of the “jet” and a large area in the case of the “spray”.
These two differences between sparsely and densely ionizing radiations accounts for most of the increased damage for the same dose that alpha particles cause to tissues and organs relative to x-rays and gamma rays. The factor used to account for this difference in damage is called the quality factor of the radiation. X-rays and gamma rays are defined as having a quality factor of 1. Relative to this, alpha particles are defined as having a quality factor of 10.
That is why radon gas is especially hazardous when inhaled. When the surfaces of our body are exposed to radon gas, there is no risk, because the range of the alpha particles is so short that it cannot even penetrate the surface layers of our skin. Numerous studies have shown an association between radon gas exposure and lung cancer, which is why radon gas is of such concern as a component of natural radiation. The pie-chart in fig. 1. shows that radon gas constitutes, on average, about 55% of the entire natural annual effective dose we receive. Effective dose, explained a little farther on, incorporates the quality factor of the radiation.
As in the case of gamma ray exposure from radium, which varies greatly depending on the local geological conditions, exposure to alpha particles from inhaled radon gas varies correspondingly with how much radium is present in the soil and building materials, and also on how readily radon gas can diffuse into buildings and how effectively ventilation can remove it.
Typically, houses with basements are a much greater source of radon gas than houses without basements. Houses built of brick are a greater source of radon gas than houses built of wood. And more modern "energy efficient" houses, with very insulating and weather-tight windows, also tend to retain more radon gas.
Cosmic radiation originates in our sun and in sources beyond our solar system, and partially penetrates our atmosphere, exposing us mainly to gamma rays and charged particles, largely protons. Cosmic radiation varies greatly with elevation: the higher the elevation we live at, the higher the cosmic radiation component, because there is less atmosphere to absorb the cosmic rays before they reach us. Pilots and cabin crew receive substantially more cosmic radiation exposure than the rest of us due to the hundreds of hours a year they spend at very high altitudes. In fact, among so-called radiation workers, pilots and cabin crew of long-haul aircraft receive close to the highest occupational radiation doses of any profession; the highest occupational doses are actually received by interventional radiologists working with specialized x-ray machines in hospitals.
As already mentioned, natural radiation varies depending on geographical location and elevation. For example, if we combine annual natural terrestrial and cosmic radiation doses, we find 48 mrem in New York City, 140 mrem in Denver, 300 mrem in Kerala (India), and 500 mrem in parts of the Brazilian coast.
Intuitively, one would expect to observe elevated cancer rates, for example, for the populations in Kerala vs. New York City. In fact, such an association is generally not observed, leading to the supposition that for protracted, low-dose radiation exposures, cancer induction may not be elevated, but may in fact be decreased.
Such an inverse relationship between protracted low-dose radiation exposure and cancer induction is also observed in other situations. Neighboring provinces in Mainland China that are culturally identical, but whose natural radiation levels differ by almost a factor of three due to geological factors, are observed to have a higher cancer rate in the province with the lower natural radiation level. In the U.K., radiation workers experienced a significantly reduced cancer rate compared to workers in other industries after statistical confounding factors had been accounted for.
Data such as these point to the possible existence of radiation hormesis, the phenomenon whereby increased radiation exposure is associated with a reduction in observed cancer rates. Radiation hormesis is a topic in radiation biology that has gained some traction after "coming out of the scientific closet" about 15 years or so ago. Although the mechanisms that result in radiation hormesis are still not entirely understood, they are gradually being elucidated.
Internal radiation from naturally occurring radioisotopes originates inside our bodies and is due mainly to naturally occurring radioactive potassium-40 and carbon-14. Potassium and carbon constitute, respectively, an important component of intracellular composition and a large fraction of virtually every tissue and chemical structure in our body. The potassium-40 exposes our bodies to gamma rays, while the carbon-14 exposes our bodies to electrons, often called beta rays. Potassium-40 and carbon-14 are a component of natural radiation that is totally unavoidable.
An interesting aside: on average, men have about 20% more muscle mass than women, so their exposure from their own internal radiation is roughly 20% higher than that of women. Furthermore, since the potassium-40 decaying in our bodies produces gamma rays, which are quite penetrating and in fact to some degree escape from our bodies, if we routinely sleep with a partner our natural radiation exposure from potassium-40 is a few percent higher than if we remain celibate, because the escaping gamma rays from the internal potassium-40 in one partner slightly increase the radiation dose that the other partner receives.
Medical radiation exposes our bodies to radiation, mainly from diagnostic x-rays and nuclear medicine procedures. Medical radiation doses dropped significantly in the 1940s with the introduction of screen-film technology, but have gradually risen again due to the increasing dependence on and utilization of high-technology x-ray diagnostic systems, in particular CT scanners. Obviously, some individuals never get x-ray exams, while others may get many, so the medical component of radiation exposure actually varies tremendously and can only be considered in average terms.
Another article in my Dr. Simple Science blog, titled “Do CT scans kill patients?” goes into this topic a little more deeply.
Summary of natural radiation sources
The pie-chart in fig. 1. below summarizes the sources of natural radiations (including medical radiation) that we are exposed to, showing an average breakdown by annual effective dose and by percentage. The pie chart also shows the contribution of natural radiation from “consumer products”, which will be discussed in more detail later on.
Fig. 1. Pie chart of annual natural and medical effective doses received by the average individual in the U.S.
The doses shown in the pie-chart in fig. 1. are expressed as average “effective doses” . The concept of effective dose will be explained shortly.
To summarize, natural radiation is composed of five main categories, as listed in table 2 below:
Table 2. The 5 main categories contributing to total annual natural radiation exposure in units of annual mrem (divide by 100 to convert to mSieverts) of "effective dose"
CATEGORY ANNUAL EFFECTIVE DOSE (mrem) PERCENT OF EFFECTIVE DOSE
TERRESTRIAL 28 8%
COSMIC 27 7%
INTERNAL 39 11%
MEDICAL 53 15%
INHALED RADON GAS 200 55%
REMAINDER 11 3%
TOTAL: 360 mrem (3.6 mSv)/year
But what is effective dose? – permit me a slight digression into physics geekdom (in blue text).
The concept of effective dose is used extensively when assessing the risks of radiation exposures to individuals and populations.
As an example of how the effective dose concept is useful in the case of individuals, assume you have a chest x-ray. The x-rays will expose many organs in your body to varying doses: skin, lungs, heart, bone, bone marrow, etc. Furthermore, these exposed organs in your body will have different sensitivities for developing cancer from those x-ray doses (for example, the same radiation dose delivered to your arm as opposed to your bone marrow would be far less likely to produce a radiation-induced cancer).
These varying sensitivities to radiation are expressed mathematically by the concept of tissue weighting factors in the effective dose calculation method. Similarly, if the same radiation dose were delivered to only half of the bone marrow, it is assumed it would be 50% less likely to cause a radiation-induced cancer than if that same radiation dose were delivered to the whole bone marrow.
Finally, the type of radiation is included in the effective dose calculation as a quality factor. X-rays and gamma-rays are defined as having a quality factor of 1. Other radiations, such as neutrons or alpha particles, have quality factors higher than one, reflecting the greater amount of biological damage they produce for the same dose.
So to calculate effective dose, we determine, 1) which organs receive what doses of radiation; 2) what fractions of each of these organs is exposed; 3) the sensitivity each exposed organ has for developing radiation-induced cancer, i.e., the corresponding published tissue weighting factors; and 4) the quality factor of the radiation. We then sum up all the weighted doses and the result gives us the total effective dose.
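The bookkeeping in steps 1-4 can be sketched as a weighted sum. All the organ doses, exposed fractions, and weighting factors below are made-up illustrative numbers, not the actual published ICRP tables:

```python
quality_factor = 1.0  # x-rays and gamma rays (alpha particles would be 10)

# (organ, organ dose in mrem, fraction of organ exposed, tissue weight w_T)
# All numbers are hypothetical, chosen only to illustrate the summation.
exposed_organs = [
    ("lung",         10.0, 1.0, 0.12),
    ("breast",        8.0, 1.0, 0.12),
    ("bone marrow",   5.0, 0.5, 0.12),
    ("skin",         25.0, 0.3, 0.01),
]

# Effective dose = sum over organs of (quality factor x organ dose
#                  x exposed fraction x tissue weighting factor)
effective_dose_mrem = sum(
    quality_factor * dose * fraction * w_t
    for _, dose, fraction, w_t in exposed_organs
)
print(round(effective_dose_mrem, 3))  # -> 2.535
```

Note how the skin's relatively high dose (25 mrem) contributes very little to the total, because its weighting factor and exposed fraction are small; this is the same effect that makes a chest x-ray's effective dose several times lower than its skin dose.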
The beauty of the effective dose approach is that no matter how a particular radiation exposes an individual, i.e., from which direction, and with what width beam and shape, etc., the effective dose calculation takes into account all of these factors. The total effective dose can then be plugged into an effective dose vs. cancer risk model (such as the commonly used linear-no-threshold (LNT) model) to yield the final probability that an individual will develop a radiation-induced cancer from that diagnostic exam.
For the chest x-ray we postulated in this example, the total effective dose would numerically be approximately 5 times lower than the actual skin dose delivered within the boundaries of the x-ray beam. Correspondingly, the risk for an individual developing cancer from this chest x-ray would theoretically be 5 times lower than if the old-fashioned skin dose assessment were employed. The simple skin dose approach was utilized for assessing radiation risks from diagnostic x-ray procedures until about 15 years ago, but proved notoriously inaccurate in predicting cancer risk.
Here is an example of how misunderstood effective dose is by those who really should understand it. The TSA at U.S. airports is in charge of the operation of threat detection devices such as x-ray backscatter body scanners. Following a request by a congressional committee, the TSA recently released the effective doses delivered to subjects undergoing x-ray backscatter body scans. However, following this disclosure the TSA was vehemently criticized by certain (lay) watchdog groups because, they claimed, when stating the doses delivered by these devices the TSA only gave the effective dose, whereas the dose to the subject's skin was far higher. This is where the misunderstanding arises: yes, the dose to the skin is indeed far higher numerically than the effective dose, but what was not understood by the watchdog groups, or the congressional committee, was that the higher skin dose is completely accounted for in the formalism for calculating effective dose, as we demonstrated in the example above of an individual receiving a chest x-ray. The TSA was completely correct in reporting the cancer risk based on the effective dose and not on the skin dose.
How are data needed to perform effective dose calculations obtained?
In order to calculate effective dose, we first have to use a theoretical calculation method called Monte Carlo Simulation. This calculation gives us the distribution of the dose from a specific radiation exposure of the subject on an organ-by-organ basis. The calculation uses realistic mathematical human models of varying sizes and shapes. The Monte Carlo calculation provides the values of all full and partial organ doses, from which the effective dose can be calculated as described above.
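A real Monte Carlo dose calculation tracks millions of particles through detailed anatomical models; the toy sketch below keeps only the core idea of sampling random interaction depths and tallying where energy is deposited. The mean free path value is an arbitrary assumption for illustration, not a real tissue property:

```python
import random

random.seed(0)
mean_free_path_cm = 5.0    # hypothetical mean distance between interactions
layer_thickness_cm = 1.0   # tally "dose" in 1 cm tissue layers
n_layers, n_photons = 20, 100_000

energy_deposited = [0] * n_layers
for _ in range(n_photons):
    # Distance to a photon's first interaction is exponentially distributed
    depth_cm = random.expovariate(1.0 / mean_free_path_cm)
    layer = int(depth_cm // layer_thickness_cm)
    if layer < n_layers:
        energy_deposited[layer] += 1  # one unit of energy per photon

# The tallied dose falls off roughly exponentially with depth, which is
# why shallow organs receive more dose than deep ones from the same beam.
```

Production codes do the same thing, but with realistic geometry, many interaction types, and full particle transport; the output is the organ-by-organ dose table from which effective dose is computed.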
We have reviewed the kinds of natural radiations that every one of us is exposed to, whether we want to be or not. But what is the absolute cancer risk that the effective doses we calculate—say, for a chest x-ray—actually represent? The explanation of how we take this next step is beyond the scope of this article. Another article in my Dr. Simple Science blog, titled "How Dangerous is Radiation to Humans—or is it?" explains how effective doses are converted to absolute cancer risk, as well as the many pitfalls of this process. So let us assume that these absolute cancer risks from radiation have been determined, and talk about equivalent risks from various non-radiation-related human activities.
Risks of non-radiation-related human activities compared to risks from radiation
Without explaining how the following information was obtained (which is discussed in another of my Dr. Simple Science articles, titled "How Dangerous is Radiation to Humans—or is it?"), let us assume that the probability of a single individual getting a fatal radiation-induced cancer due to an effective dose of 1 rem (roughly equivalent, for example, to a CT scan of the pelvis) is 0.05%.
The corresponding non-radiation-related activities that carry the same actuarially determined risk of death are listed in table 3 below.
Table 3. Non-radiation-related activities that carry the same actuarially determined risk of death as a single CT scan delivering an effective dose of 1 rem (10 mSv)
BEHAVIOR / SITUATION CAUSE OF DEATH
1 rem (10 mSv) effective dose Induced cancer from radiation
(assuming the LNT hypothesis)
Smoking 28 packs of cigarettes Cancer, heart disease
Drinking 200 liters of wine Cirrhosis of the liver
Spending 400 hours in a coal mine Black lung disease
Spending 1,200 hours in a coal mine Accident
Living 2 years in New York or Boston Air pollution
Traveling 40 hours by canoe Accident
Traveling 70 hours by bicycle Accident
Traveling 20,000 miles by car Accident
Flying 400,000 miles by jet Accident
Flying 600,000 miles by jet Cancer from cosmic radiation
Living 7 years in Denver Cancer from cosmic radiation
Living 17 years in a stone/brick building Cancer from natural radiation
500 chest x-rays or 20 mammograms Cancer from radiation
Living 33 years with a cigarette smoker Cancer and heart disease
Eating 1,600 tablespoons of peanut butter Liver cancer from aflatoxin-B
Living 500 years at the boundary of a nuclear power plant Cancer from radiation
Drinking Miami water for 400 years Cancer from chloroform exposure
Eating 40,000 charcoal broiled steaks Cancer from benzopyrene exposure
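The 0.05%-per-rem figure that underlies this table lends itself to simple expected-case arithmetic under the linear-no-threshold assumption:

```python
# LNT sketch using the article's figure: 0.05% fatal-cancer risk per rem.
risk_per_rem = 0.05 / 100
effective_dose_rem = 1.0  # e.g., roughly one pelvic CT scan

# LNT assumption: individual risk scales linearly with effective dose
individual_risk = risk_per_rem * effective_dose_rem

# Applied to a population of one million people each receiving 1 rem,
# the LNT model predicts about 500 radiation-induced fatal cancers.
expected_fatal_cancers = round(1_000_000 * individual_risk)
print(expected_fatal_cancers)  # -> 500
```

This population-level arithmetic is exactly how the "equivalent risk" entries in the table are constructed: each listed activity carries the same actuarial probability of death as the 0.0005 computed here.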
Included in the pie-chart in fig. 1. are slices of pie that are not part of naturally occurring radiation exposure. For example, there is a slice that corresponds to radiation from consumer products. Although, as already said, the latter is not part of natural radiation exposure (after all, we are not forced to use such products), it is quite interesting to consider which consumer products in fact do produce radiation exposure—however large or small.
Radiation exposure from consumer products
In addition to being subject to natural radiation as is the population at large, cigarette smokers receive, on average, an annual effective dose of about 1,300 mrem. No, that is not a typographical error. 1,300 mrem (13 mSv)! This is about 4 times the true natural effective dose of 360 mrem (3.6 mSv) per year. The reason for this perhaps puzzling fact is that the tobacco plant contains two naturally occurring radioisotopes, polonium-210 and lead-210. These radioisotopes actually originate in the fertilizer that is used in the growing of the tobacco plant. Subsequently, these two radioisotopes become trapped in tobacco smoke particles that are inhaled by smokers. Although the absolute concentration of the polonium-210 and lead-210 in tobacco smoke is very low, the tar in cigarette smoke traps the smoke particles in small passages in the lungs called bronchioles so that the polonium-210 and lead-210 remain in contact with the walls of the bronchioles for extended periods of time causing substantial radiation dose to be delivered to the cells of the bronchial walls. Furthermore, both polonium-210 and lead-210 are alpha particle emitters, and we have already explained why alpha emitting radioisotopes have a relative effectiveness for producing cancer that is 10 times higher than that of x-rays or gamma-rays.
The average annual actual dose to the lining of the bronchioles of the average cigarette smoker (in contrast to the 1,300 mrem effective dose) is in fact about 10,000--11,000 mrem (100--110 mSv)! Since it is impossible to remove the polonium-210 and lead-210 from tobacco, it is not clear what proportion of the greatly elevated cancer rate observed in cigarette smokers is due to the chemicals present in tobacco and what proportion is due to the associated high radiation dose.
Most residential smoke detectors contain americium-241 radioisotope sources. Americium-241 is an alpha particle emitting radioisotope. With proper use, no significant radiation is measurable outside the unit, but if the smoke detector is trashed, the americium-241 source can fracture and surrounding objects may be contaminated, leading to the possibility of human contamination.
Luminous watches and clocks
Modern watches and clocks sometimes use small quantities of the radioisotope hydrogen-3 (called tritium) or promethium-147 as a source of luminosity. Some older watches and clocks, right up to the 1960s, used radium-226 as a source of luminosity. As mentioned earlier, radium-226 radioactively decays by producing radon gas, which could be inhaled. Furthermore, if these watches or clocks are trashed, the radium-226 can contaminate surrounding objects and anyone handling those objects. In the early part of the 20th century, workers who painted the numerals on the dials of luminous watches and clocks were known to lick the ends of their paint brushes to make a nice sharp point; but in doing so, they ingested dangerous amounts of radium-226, and many of them developed cancers as a result.
Ceramic materials, such as tiles and pottery, and in particular orange-colored Fiesta-Ware, often contain elevated levels of naturally occurring radioactive uranium, thorium-232, and/or potassium-40. In most cases, the radioactivity is concentrated in the glaze. Although it is less common than it once was, some brands of lantern mantles incorporate thorium-232; in fact, it is the heating of the thorium by the burning gas or liquid that is responsible for the emission of light. Such mantles are sufficiently radioactive that they are often used as check sources for radiation survey meters.
Glassware, especially antique glassware with a yellow or greenish color, can contain detectable quantities of uranium. Such uranium-containing glass is often referred to as canary or vaseline glass. Collectors are also attracted to uranium glass for the attractive glow that is produced when the glass is exposed to black (ultraviolet) light. Even ordinary glass can contain high enough levels of potassium-40 or thorium-232 to be detectable. Older camera lenses (1950s-1970s) often had coatings of thorium-232 to alter the index of refraction.
Antique radioactive “curative” devices
In the past, primarily 1920s through the 1950s, a wide range of radioactive products were sold as curative devices. For example, radium-containing pills, pads, solutions, and devices designed to add radon to drinking water. Most such devices were relatively harmless (as well as being useless), but occasionally one can be encountered that contains potentially hazardous levels of radium-226.
Commercial fertilizers are designed to provide varying levels of potassium, phosphorous, and nitrogen. Such fertilizers can be measurably radioactive for two reasons: potassium is naturally radioactive due to its radioisotope potassium-40 (as explained earlier in the section on internal natural radiation), while the phosphorous can be derived from phosphate ore that contains elevated levels of uranium-238 and radium-226. The radioactivity of fertilizers is most important due to the polonium-210 and lead-210 that is transferred to plants, in particular the tobacco plant, which (as explained earlier) can be highly hazardous to smokers.
Food contains a variety of different naturally occurring radioactive materials. Although the relatively small quantities of food in a home at any one time contain too little radioactivity to be readily detectable or hazardous, bulk shipments of food have been known to set off the sensitive radiation monitors at border crossings. One exception would be low-sodium salt substitutes that often contain enough potassium-40 to double the level of natural radioactivity.
Beautiful granite countertops are radioactive to a small extent. The granite continually releases radon-222 gas into the air due to the presence of radium-226 in the granite. Although the amount released can vary considerably from one type of granite to another, the radon concentrations in most kitchens tested are much less than the EPA's "safe" limit of 4 picoCi/liter. While the radioactive material in the granite can produce a reading on a sensitive radiation detection instrument, the levels of radiation are well below the level that would result in any harm to humans; so don't replace your granite countertops on account of the minuscule quantities of radon produced, but develop a geeky party patter to tell others about it!
Long-haul airline flights
Long-haul airline flights cause the passengers and cabin staff to incur higher cosmic radiation doses than the rest of us who remain on terra firma. This occurs for two reasons. First, at cruising altitudes of 30,000-40,000 ft, there is very little atmosphere left to shield the traveller from cosmic rays. Second, much of the cosmic radiation is deflected by the earth’s magnetic field before it reaches ground level, but near the poles the magnetic field is unfavorably oriented for deflecting cosmic rays; since long-haul flights often cross over the poles to exploit the shorter distances of great-circle routes, the pilots, crew, and passengers get double-indemnity with regard to increased radiation levels.
The lowest dose-rates measured during a long-haul flight are approximately 0.3 mrem/hour (3 µSv/hour) during a Paris-Buenos Aires flight totaling approximately 13 hours, resulting in a round-trip additional effective dose of 8 mrem (80 µSv).
The highest dose-rates measured during long-haul flights are approximately 0.66 mrem/hour (6.6 µSv/hour) on Paris-Tokyo flights totaling approximately 12 hours, and 0.97 mrem/hour (9.7 µSv/hour) on the same route in the Concorde, totaling approximately 4 hours. For the Paris-Tokyo flight, this would result in a round-trip additional effective dose of 16 mrem (160 µSv); or 8 mrem (80 µSv) in the Concorde.
For long-haul pilots, who typically fly 700-1,000 hours a year, these additional cosmic ray exposures could add roughly 400-600 mrem/year (4-6 mSv/year) to their “ground-based” natural exposure average of 360 mrem/year (3.6 mSv/year).
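For readers who like to experiment, the flight-dose arithmetic above can be sketched in a few lines of Python. The function and figures are illustrative only; real in-flight dose rates vary with altitude, latitude, route, and solar activity.

```python
def flight_dose_mrem(dose_rate_mrem_per_hr, hours, round_trip=True):
    """Additional effective dose from cosmic rays for one flight, in mrem.
    Illustrative sketch, not an operational dosimetry calculation."""
    legs = 2 if round_trip else 1
    return dose_rate_mrem_per_hr * hours * legs

# Figures quoted in the text:
paris_buenos_aires = flight_dose_mrem(0.3, 13)   # ~8 mrem round trip
paris_tokyo = flight_dose_mrem(0.66, 12)         # ~16 mrem round trip
pilot_annual = 0.66 * 1000                       # ~660 mrem/year at 1,000 flight hours
```

Plugging in the quoted dose rates reproduces the round-trip doses given above, and shows why a pilot's annual increment can exceed the entire average ground-level background of 360 mrem/year.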
[Adapted from reference: https://www.hps.org/publicinformation/ate/faqs/commercialflights.html]
In this article we have discussed the sources of natural radiation to which we are all exposed, whether we choose to be or not: terrestrial, cosmic, internal, medical, and radon inhalation. Percentage-wise, these constitute, respectively, 8% (terrestrial), 7% (cosmic), 11% (internal), 15% (medical), and 55% (radon inhalation), for a total annual effective dose of 360 mrem (3.6 mSv).
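These percentages can be turned back into annual doses with a trivial calculation (note that the quoted percentages total 96% rather than 100%, presumably due to rounding and minor sources not listed):

```python
# Percentage contributions quoted above, applied to the 360 mrem annual total.
ANNUAL_TOTAL_MREM = 360
fractions = {"terrestrial": 0.08, "cosmic": 0.07, "internal": 0.11,
             "medical": 0.15, "radon inhalation": 0.55}

doses_mrem = {source: frac * ANNUAL_TOTAL_MREM for source, frac in fractions.items()}
# Radon inhalation alone contributes 0.55 * 360, or about 198 mrem/year.
```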
Expressing doses as effective doses does away with any need to make corrections for radiation type or exposure geometry and other exposure conditions; effective doses can then be directly plugged into radiation dose vs. cancer risk models such as the frequently used linear-no-threshold (LNT) model.
The substantial annual radon inhalation component (55%) in the natural effective dose is due to a number of factors, but most importantly to the radiobiological properties of radon-222 as an alpha particle emitter and the associated high quality factor of 10.
Long-haul pilots, crew, and passengers are typically exposed to additional effective doses of approximately 0.5 mrem/hr (5µSv/hr), due mainly to the reduced protection against cosmic rays from the decreased atmospheric protection at high cruising altitudes and reduced protection from the earth’s magnetic field on great-circle routes over the poles.
Many consumer products are manufactured with radioisotopes of various kinds as necessary components of the product. During normal use, these consumer products are completely safe, but following their disposal there is frequently the possibility of hazardous contamination.
Tobacco smokers, on average, incur an additional annual effective dose of 1,300 mrem (13 mSv), due to polonium-210 and lead-210, both alpha-particle emitting radioisotopes, that are absorbed from fertilizer by the tobacco plant and become trapped in bronchioles by the tar in the cigarette smoke. This maintains these radionuclides in intimate contact with the lining cells of the bronchioles for extended periods of time, and with the high quality factor of 10 for alpha particles, produces actual doses as high as 10,000-11,000 mrem (100-110 mSv). But due to the inability to remove the polonium-210 and lead-210 from tobacco, it is impossible to conclude whether the high incidence of cardiovascular disease and lung cancer observed in smokers is due to the presence of polonium-210 and lead-210 or to other factors connected with smoke inhalation.
Some radiobiological studies have shown decreased incidence of cancer with increasing effective dose in the range of 0 - 10 rem (0 - 0.1 Sv). This inverse relationship is called radiation hormesis. The scientific facts explaining radiation hormesis are being gradually elucidated, and it appears that the existence of a radiation hormetic effect in the range of typical diagnostic x-ray doses is real.
An implementation of the x-ray backscatter technique that is a modification of the one described in this blog has been developed by American Science and Engineering. Rather than being designed to detect threat objects on airline travelers, which is the focus of this narrative, this new implementation is scaled up to examine the contents of cargo containers; for example, shipping containers and trucks. The backscatter system is scaled up by using much higher energy x-rays, plus other modifications, to enable the x-rays to penetrate much larger objects. Instead of conventional "diagnostic" x-ray machines, this device uses linear accelerators, the same technology that is used for radiation treatment of cancer, which produce x-rays of approximately 10-20 times higher energy than the airport threat detection devices described in this blog. The picture above shows a truck that attempted to cross the border into the United States. A simple physical examination suggested that the truck contained bananas. However, the center of the cargo area contained a compartment within which 20 or so illegal aliens were attempting to cross the border undetected. The x-ray backscatter image showed the human cargo very clearly.
As Northwest Flight 253 made its final approach to Detroit airport on Christmas Day 2009, a terrorist carrying an unusual form of plastic explosive almost succeeded in killing its 300 passengers and crew.
Because of that incident, U.S. and worldwide airport security efforts were rapidly ramped up. One approach was the installation of so-called body scanners at airports. What are these devices, how do they work, are they effective, and are they safe?
Principles of Operation of Airport Scanners
The body scanners deployed in airports in the U.S. and Europe generally use either x-ray transmission imaging or x-ray backscatter imaging. Over 1,500 of them are now deployed in U.S. airports, with the number rapidly growing.
X-Ray Transmission Imaging
X-ray transmission imaging is probably the most familiar form of x-ray imaging—widely used in medicine as well as for many security applications. It is the type of imaging that produces the familiar chest x-ray. X-ray transmission imaging is effective for detecting guns, bombs, and other threat objects that are made of dense metallic materials. However, plastic explosives, such as Semtex and C4 (frequently used by terrorists), or drugs, are poorly depicted by transmission imaging since they have similar x-ray properties to biological tissue and consequently are poorly visualized against the background image of the body.
Backscatter imaging involves sending a narrow x-ray beam into the body and detecting only those x-rays that scatter in the backward or sideways directions from tissues and threat objects that reside within the superficial 1-2 inches of the body. The x-ray beam is rapidly scanned, and the position of the beam on the body at any moment in time is accurately known. The total scattered x-ray signal from the detectors at that same moment in time correlates with the backscattering property of the tissues and/or other objects over an area equivalent to the diameter of the x-ray beam, i.e., approximately 1-2 mm, and a depth up to about 2 inches.
Backscatter imaging has a number of advantages for security applications. Because backscattered x-rays need to pass through only a few inches of the body (an inch or two on their way in, then an inch or two on their way out), fewer x-rays are needed than in transmission imaging, where the x-rays have to penetrate through the entire thickness of the body before they can be detected. Consequently, the radiation dose to the body in backscatter imaging is more than 100x lower than in transmission imaging. In addition, because backscatter detectors can be made very much larger in capture area than transmission detectors (perhaps 1,000x larger), the necessary radiation dose to the body is further reduced by approximately 1,000x.
Advantages of Backscatter Imaging for Threat Detection
For equal densities, plastics, plastic explosives, drugs, and soft tissues of the body produce more backscattered x-ray signal than metals, so the former are more clearly depicted in backscatter imaging; in x-ray transmission imaging the reverse is true, with dense metals depicted far more clearly than plastics. For example: certain weapons, such as some models of the German Glock handgun, are manufactured with a large amount of plastic material to reduce weight. Many models of Glock handguns have plastic hand grips, which are very difficult to see in transmission images but are clearly depicted in backscatter images. Similarly, plastic explosives, even when located among a complicated background of metallic objects, can be clearly seen in backscatter images but are essentially invisible in transmission images.
Possible Health Risks
What are the health risks from x-ray backscatter imaging body scanners? The effective radiation dose from one x-ray backscatter body scan is equal to about 11 nano-Sievert (0.0011 mrem in old-fashioned units). This is equivalent to about 3-4 minutes of natural background dose; in other words, a traveller standing in line for a backscatter body scan would probably receive more radiation dose from natural background radiation than from the scan itself! Therefore, it would require a traveler to have approximately 455,000 body scans in one year to reach the 5 milli-Sievert (500 mrem) annual radiation dose limit set by U.S. Federal government and State regulations for the general public.
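The 455,000-scan figure follows directly from dividing the annual limit by the per-scan dose; here is the back-of-the-envelope check, using the values quoted above:

```python
SCAN_DOSE_SV = 11e-9       # ~11 nSv effective dose per backscatter scan (quoted above)
ANNUAL_LIMIT_SV = 5e-3     # 5 mSv annual public dose limit cited in the text

scans_to_reach_limit = ANNUAL_LIMIT_SV / SCAN_DOSE_SV
# ~455,000 scans in one year to reach the annual limit
```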
Most ionizing radiation generating technologies are designed with the 5 milli-Sievert (500 mrem) annual dose limit to the general public in mind. For example: patients passing through hospital corridors that happen to be adjacent to x-ray rooms, people living near the boundaries of nuclear power plants, strangers being in proximity to patients being treated with radioiodine for thyroid disease, etc., receive radiation doses that are limited under federal and state regulations to a maximum of 0.02 milli-Sievert (2 milli-rem) in any one hour. This number is mathematically linked to the maximum annual dose limits for the general public referred to above.
Put another way, based on the linear-no-threshold (LNT) model used widely for radiation risk assessment, the risk of getting a fatal cancer from one backscatter scan is approximately that of eventually dying from pollution by living in New York City for 1 minute, traveling 100 ft by car, or traveling 1 mile by jet. It would require about 90 backscatter scans to be equivalent in effective dose to one chest x-ray.
A frequent criticism of TSA’s characterization of the radiation doses delivered by x-ray backscatter scanners is that the skin receives much larger doses than the quoted values for total effective dose, since most of the radiation dose is concentrated in the skin. However, TSA’s characterization of the dose from an x-ray backscatter scan is based on the concept of effective dose, a construct used very frequently in epidemiological radiation studies. Under that concept, the risk is calculated from the partial doses received by all organs (including skin), and the corresponding “effective dose” in a form suited to LNT dose calculations is finally calculated. The definition of effective dose, therefore, already takes into account the variable doses delivered to different tissues and organs, taking into account their varying radiation sensitivities.
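The idea behind effective dose can be illustrated with a toy calculation. The organ doses and the tissue list below are hypothetical (the real ICRP scheme weights roughly fifteen tissues), but the small skin weighting factor of 0.01 is in the right ballpark and shows why a skin-concentrated exposure still translates into a tiny effective dose:

```python
def effective_dose(organ_doses_sv, tissue_weights):
    """Effective dose: sum of organ doses weighted by tissue weighting factors w_T.
    Sketch only; weights and doses here are illustrative, not regulatory values."""
    return sum(tissue_weights[t] * dose for t, dose in organ_doses_sv.items())

# Hypothetical doses for a skin-concentrated exposure such as a backscatter scan:
doses = {"skin": 1e-6, "lung": 1e-8, "remainder": 1e-8}     # Sv
weights = {"skin": 0.01, "lung": 0.12, "remainder": 0.87}   # illustrative w_T values

e = effective_dose(doses, weights)
# The skin dose is 100x larger than the others, yet its small weighting factor
# keeps the overall effective dose small (about 2e-8 Sv here).
```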
Non-X-Ray Threat Detection Devices: T-Wave Scanners
An alternate form of threat detection that has recently been developed for use in airports is called “millimeter wave” or “T-wave” body scanning. Instead of using x-rays, this technology uses extremely high frequency radio waves in the “terahertz” range—sometimes called T-waves—that are beamed into the traveler’s body and are then differentially reflected by any additional materials that may be concealed in or on it. Although T-wave scanners produce no radiation exposure whatsoever, there are studies showing that T-wave scanners are substantially less accurate in detecting threat objects than x-ray backscatter scanners. But from a public perception perspective, T-wave scanners are an important advance in threat detection technology because of the lack of radiation dose to the traveler.
What about the issue of privacy related to body scanning for threat detection? Indeed, in addition to producing clear images of threat objects, x-ray backscatter scanning provides clear images of the surface of the traveler’s body, together with quite clear depiction of his or her “private parts”. TSA claims that there are various solutions that can “depersonalize” such images. For example, either the private parts or the facial features of the traveler can be automatically blurred prior to display, and TSA staff who examine these images are located in a separate room so that they see only the images and not the individuals being scanned. There are also software applications that automatically search for and flag threat objects in an x-ray backscatter image and only display the full image to an operator if a threat object is identified. Despite these privacy maneuvers, TSA has been sued for violation of the 4th amendment, which has resulted in many backscatter scanners being removed from airports and replaced with T-wave scanners. This is unfortunate, since T-wave scanning provides much less protection against terrorist threats.
Backscatter x-ray imaging is a new technology that is aimed at detecting threat objects that would mostly not be visible using the more conventional x-ray transmission imaging approach, such as plastic explosives, drugs, and the non-metallic components of certain models of handguns.
The dose from x-ray backscatter scanning is extremely low; in fact it is virtually negligible. Based on the linear-no-threshold (LNT) model, used widely for radiation risk assessment, the risk of getting a fatal cancer from one backscatter scan is approximately that of eventually dying from pollution by living in New York City for 1 minute, traveling 100 ft by car, or traveling 1 mile by jet. It would require about 90 backscatter scans to be equivalent in effective dose to one chest x-ray. Even if someone travels frequently and receives backscatter scans during every security check, when such very low radiation doses are spread out over time their effect on the body is not linearly cumulative because the body quickly repairs minor X-ray damage when it is protracted in time. Backscatter scanning in tandem with x-ray transmission scanning (often combined in the same apparatus), appears to be the most significant development for reducing the terrorist threat at airports. However, privacy concerns and lawsuits based on 4th amendment issues have resulted in the deactivation of many backscatter scanners in the U.S. and Europe.
T-wave body scanners are an alternative technology recently developed for threat detection. T-wave scanners do not use ionizing radiation (such as x-rays), and produce no radiation dose to the subject. However, at the present time, although embraced by the public because of their total safety and lack of privacy violation (due to the poor quality of the images), T-wave scanners do not appear to have the necessary accuracy or sensitivity for adequate threat detection.
TSA has implemented various solutions to depersonalize x-ray backscatter images, which, in addition to depicting threat objects, clearly depict the “private parts” of a subject.
Kerala beach, India, which has one of the highest terrestrial background dose rates in the world due to the abundance of thorium-containing monazite sand. Dose rates are 7 mSv/year (700 mrem/year) compared to average terrestrial dose rates in the U.S. of 0.3 mSv/year (30 mrem/year). Despite the Kerala terrestrial dose rate being approximately 20x higher than in other areas of India, epidemiological studies have not detected elevated cancer rates in residents of Kerala compared to the rest of India.
It has been estimated that about 10% of genetic mutations that occurred during the evolution of human life have been due to the influence of radiation. We are all exposed to natural background radiation on a daily basis: cosmic rays from outer space that interact with our atmosphere and shower us with various secondary radiations; gamma-rays produced by radioisotopes naturally present in the earth; radioactive radon gas that oozes out of the ground and enters our lungs; and two or three radioisotopes that reside naturally in our bodies. Since evolution is driven by genetic mutations, natural background radiation would not appear to be that bad for the development of the human race. However, genetic mutations are a two-edged sword: they help drive evolution according to the processes of natural selection by enhancing the selective survival of “desirable” genes, but they can also cause illnesses such as cancer. This article will consider the latter of these effects of radiation, i.e., those that are potentially detrimental to human health even though they may sometimes be unavoidable.
Radiations we are exposed to
Let’s review in more detail the physical nature of the radiations we are exposed to. These consist of two major categories: electromagnetic radiations, and particulate radiations. Electromagnetic radiations include x-rays and gamma-rays, while particulate radiations include alpha-particles, protons, neutrons, and electrons.
In addition to natural background radiation, referred to earlier, we are also exposed to man-made diagnostic radiations and therapeutic radiations, used widely in the medical area.
Radiations used for diagnosis of disease
Diagnostic x-ray machines take two-dimensional x-ray images of your chest and of many other body parts, whereas highly complex and sophisticated x-ray machines such as CT scanners produce x-ray images of your body in the form of thin (1-4 mm) slices, eliminating the problems of tissue overlap that inhibit accurate diagnosis in the plain-and-simple two-dimensional type of x-ray imaging.
Radiations used for therapeutic treatment of disease
Linear accelerators, descendants of radiation-generating equipment used for many decades in physics research, produce very high-energy x-ray and electron beams that are used to treat cancer. Electron beams lack the high “aiming” accuracy possessed by x-rays and gamma-rays, but they weaken and disappear very rapidly at depths beyond the boundaries of relatively shallow tumors, thereby protecting radio-sensitive tissues or organs that may be located downstream of the tumor.
Gamma-rays, produced by radioisotopes encapsulated in rice-sized metallic “seeds” that are inserted into some types of tumor, are often used to treat cancers such as prostate or breast cancer from inside the body, where tumors may be surrounded by particularly radiation-sensitive tissues or organs. A machine called the Gamma-knife uses gamma rays to treat primary brain tumors from outside the body, as well as tumors that have spread (metastasized) to the brain from cancers at other anatomical sites.
The use of proton beams is a relatively new development in radiation therapy. There are currently about 35 sites in the U.S. that offer this form of radiation therapy, largely due to the extraordinarily high cost of such facilities: typically $100m-$150m (although more recently developed “single-room” proton facilities are less costly). At the present time, proton beams are most commonly used for treating prostate cancer in adults and brain and spinal cord tumors in children, although other types of cancer are treated as well. Some cancers are difficult to treat with x-rays or gamma-rays because tumors may be surrounded by especially radiosensitive tissues or organs, limiting the amount of radiation that can be delivered to the tumors themselves. To a large extent, proton beams sidestep this problem. Unlike x-rays and gamma-rays, which penetrate the entire body and in doing so deliver potentially damaging radiation to healthy tissues downstream of the tumor location, when proton beams reach their target they abruptly stop, completely avoiding the downstream radiation exposure problem. Electron beams also stop after reaching a specific depth—although not nearly as abruptly as proton beams—but they also spread out laterally, whereas proton beams have exquisite aiming accuracy, both in depth and in lack of lateral spread, enormously reducing radiation exposure of healthy tissues.
About two-thirds of the naturally occurring background radiation dose we are exposed to on a daily basis consists of alpha particles. Naturally occurring alpha particles are produced by radon gas (the decay product or “daughter” of radium, which resides in the superficial layers of our planet) that we absorb into our lungs with every breath we take. Radon gas, which is highly radioactive, oozes out of the ground and building materials, mixing with the surrounding air, so breathing it into our lungs is unavoidable. Alpha particles have a very short range in tissue—just a few cell diameters—so when inside the lungs they dump all their energy in the very thin layer of epithelial cells lining the lungs. The large mass, high electrical charge, exceptional ability to produce irreparable biological damage, and short range of alpha particles produce more “bang for the buck” in terms of radiation damage than any other type of radiation.
How Can Radiation Cause Cancer?
Now that we’ve reviewed the nature of the radiations that we are exposed to, let’s think about how these radiations could cause cancer. Cancer, as far as we know, is the result of genetic mis-programming caused by mutations in our DNA. Such harmful mutations can occur naturally due to random processes, or they can be caused by external environmental factors such as chemicals or radiation.
Epidemiological Evidence for Radiation Risk to Humans
The evidence that we have on the relationship between human radiation exposure and cancer comes from man-made radiation sources to which humans have been inadvertently exposed. These include the Hiroshima and Nagasaki atomic bombs, irradiation of the spine that many decades ago was a standard treatment for a congenital disease called ankylosing spondylitis, and irradiation of the female breasts in tuberculosis sanatoria, mainly in Massachusetts and Canada, where a standard therapeutic approach was to deflate and re-inflate the lungs under x-ray fluoroscopic guidance. The theory at the time was that this maneuver would deprive the tuberculosis-causing microorganism of oxygen; but today we know that the responsible microorganism is anaerobic, i.e., does not require oxygen to survive and multiply. The x-ray fluoroscopic equipment in those early days produced hundreds of times more radiation dose to patients than modern fluoroscopic equipment, and because these patients underwent lung deflation and re-inflation under fluoroscopic guidance on a monthly basis, massive amounts of radiation dose were delivered to the breasts, which in turn resulted in a measurably increased rate of breast cancer.
Derivation of Radiation Risk Models from Historical Radiation Effects Data
Statisticians working for U.S. and European organizations such as the NCRP, ICRP, ICRU, and the ABCC got hold of the above-mentioned data and drew a straight line, originating at zero radiation (where, presumably, zero additional cancers were caused) and passing through the average of the very scattered data points relating radiation dose to the incidence of cancer. This straight line is referred to as the “linear no-threshold radiation effects model”, or the “LNT model”. Since there are no actual data points at the relatively low radiation doses that are characteristic of natural background radiation and diagnostic x-ray doses, the LNT model is only a theoretical predictor of what additional cancer cases might be expected at these lower radiation levels. The gradient of this LNT line provides the only relationship we have linking radiation dose to cancer incidence and cancer death. For example, the number on the right in the table below shows the slope of the LNT line relating additional (excess) cancer deaths to radiation dose, in units of lifetime excess cancer deaths per 10,000 members of the general public exposed one time to 1 rem (0.01 Sievert in modern units) of radiation; we will not differentiate between the units of rem and rad, or Gray and Sievert, for the purposes of this discussion.
*Exposure received only after age 18 years. Data are weighted averages; i.e., the older you are at the time of the single exposure, the lower your risk, since you have fewer years left for the effect to express itself. The actual risk values change a little from report to report but are basically as shown.
This means that if 10,000 members of the general public were exposed to 1 rem of radiation dose, then within the remaining lifetimes of these individuals, 5 would contract fatal cancers statistically caused by that 1 rem of radiation dose. Now one can argue that those 5 cases of fatal cancer would have been equally likely to have been caused by natural background radiation or by non-radiation carcinogens such as chemicals. This is a totally logical assumption, and is the reason why it is very difficult to establish in tort law that a certain radiation dose caused a specific individual to die of cancer.
However, one can advance an epidemiological argument that each of the people exposed to 1 rem of radiation would have a probability of contracting a fatal cancer from that 1 rem of radiation dose that is (1 x 5 / 10,000) x 100, or 0.05%. Now the natural fatal cancer rate among the human population in the U.S. and Europe is approximately 20%, which means that 1 rem of additional radiation dose raises that probability to 20 + 0.05, or 20.05%. Expressed from that perspective, 1 rem hardly seems like a dose to be enormously concerned about.
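The arithmetic of the last two paragraphs can be written out explicitly. This is a sketch using the LNT slope of 5 excess fatal cancers per 10,000 people per rem quoted above:

```python
RISK_SLOPE = 5 / 10_000    # excess fatal cancers per person per rem (LNT slope above)
BASELINE = 0.20            # ~20% baseline lifetime fatal cancer risk

def lnt_excess_risk(dose_rem):
    """Excess lifetime fatal-cancer probability under the LNT model (sketch)."""
    return RISK_SLOPE * dose_rem

one_rem = lnt_excess_risk(1.0)    # 0.0005, i.e. 0.05%
total = BASELINE + one_rem        # 0.2005, i.e. 20.05%
```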
Computation of Risk Estimates Using the Standard LNT Radiation Risk Model
The final piece of the puzzle we need to consider is the actual radiation doses produced by various sources of radiation that we can plug into the LNT equation and assess the corresponding risk.
Consider two illustrative cases:
1) The doses from diagnostic radiology procedures that use x-rays range very roughly from 1 milli-rem (for a standard chest x-ray) to 1,000 milli-rem (for a typical CT scan); or, in modern units, 10 micro-Sievert to 10 milli-Sievert. Using the LNT gradient parameter “5.0” from the table above, for 10,000 exposed people, this dose range corresponds to between (5 x 0.001) and (5 x 1), or 0.005 to 5 additional fatal cancers per each x-ray procedure per 10,000 exposed people. Expressed as a percentage probability for a single individual, this corresponds to [0.005 x 100] / [10,000] to [5 x 100] / [10,000] = 0.00005% to 0.05%.
2) If the radiation dose received by 10,000 members of the general public were only due to one year’s worth of natural background radiation (which averages around the U.S. to roughly 300 mrem/year; or 3 milli-Sieverts/year), the number of additional fatal cancers caused in that population of 10,000 people would be 5 x 0.300, or 1.5 additional cancers/year for each year that each person was exposed to background radiation, or equivalently [5 X 0.3 X 100 / 10,000] = 0.015%. So since we are continuously exposed to background radiation, a 50 year-old individual exposed to natural background radiation from birth would have a [50 x 0.015%] = 0.75% likelihood of contracting a fatal cancer from natural background radiation.
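Both illustrative cases reduce to the same one-line multiplication; here they are worked through in Python, using the rounded values from the text:

```python
RISK_PER_REM = 5 / 10_000   # LNT slope used in the text

# Case 1: single diagnostic procedures, from a chest x-ray (0.001 rem)
# to a typical CT scan (1 rem), expressed as percentage probabilities.
chest_xray_pct = RISK_PER_REM * 0.001 * 100    # 0.00005%
ct_scan_pct = RISK_PER_REM * 1.0 * 100         # 0.05%

# Case 2: 50 years of natural background radiation at ~0.3 rem/year.
background_50yr_pct = RISK_PER_REM * 0.3 * 50 * 100    # 0.75%
```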
One can conclude, therefore, that for an individual person the risk of fatal cancer induction due to radiation, either from natural background or from diagnostic x-ray procedures, is still very small compared to the baseline fatal cancer incidence rate of 20%.
Some Limitations of the LNT Cancer Risk Model
Despite what has been said, a number of mitigating factors need to be stressed.
1) The calculations presented here are based on the LNT cancer risk model. Even using the most extensive human radiation effects data we have at the present time, there are very large statistical uncertainties associated with such calculations—often larger than ±100%; i.e., the probability of 0.05% cancer incidence due to a single CT scan, when expressed as a statistical range, would be zero to 0.1%.
2) The body has biological repair mechanisms that come into play when small-to-moderate levels of harmful DNA damage are produced. Therefore, the LNT risk model overestimates the fatal cancer probability at these low dose levels by quite a large amount, and this becomes larger and larger as the radiation dose level decreases and the repair mechanisms can do a more effective job. In fact, there is experimental evidence that at the radiation dose levels we are discussing here (i.e., 0 - 10 rem, or 0 – 0.1 Sievert), radiation and other carcinogens can sometimes cause a slight decrease in the fatal cancer incidence. This mechanism, which is not yet fully understood, is called “radiation hormesis”. Radiation hormesis is still in the closet among many practitioners of radiobiology, but more and more evidence to support radiation hormesis is gradually emerging, to the extent that sometime in the near future I believe the LNT model will be abandoned for the typical diagnostic radiology dose range of 0 - 10 rem (0 - 0.1 Sievert) in favor of the “hormetic dual-probability model”, which predicts a negative fatal cancer risk within this dose range. It has been a standard medical practice in Europe for centuries to expose patients to high concentrations of radon gas present in certain geological regions with the intent of strengthening their immune systems and hence making them better able to combat various diseases they may have.
3) The use of the LNT risk model has further limitations. A straight line fitted through a very scattered array of data points often belies large statistical uncertainties in the slope of that line. Such uncertainties may hide fine structure in the radiation dose-fatal cancer incidence relationship that is not clearly evident. One such departure from the LNT relationship is the existence of a threshold for fatal cancer risk. In this modified model, a certain amount of radiation exposure can be tolerated without any detectable rise in the fatal cancer rate. Only after the radiation dose exceeds this so-called dose threshold does the fatal cancer rate start to climb in the more conventional straight-line fashion. This dose/effect threshold has been studied extensively, and despite the scattered nature of the experimental data, some cancers have been found to conform better to the threshold version of the LNT model than to the non-threshold linear version.
It is sometimes instructive to compare the lifetime risk of fatal cancer resulting from radiation exposure with the risks of death due to non-radiation-related factors. The table below shows such a comparison.
Approximate Lifetime Risk of Death Due to Receiving 1 Typical CT Scan vs. Various Non-Radiation Risk Factors.
1) The majority of the radiation dose received by the general population is due to diagnostic x-ray procedures and naturally occurring background radiation.
2) At the upper end of the dose range of single diagnostic radiology procedures (for example, a single abdominal or pelvic CT scan), the probability of induction of a fatal cancer is very roughly 0.05%.
3) With exposure only to natural background radiation, a 50-year-old individual (having accumulated approximately 300 mrem, or 3 mSv, of effective dose during each year of life) would have very roughly a 0.7% probability of contracting a fatal radiation-induced cancer.
4) The baseline lifetime fatal cancer risk in the population due to all causes is approximately 20%.
5) Our simple calculations are based on the LNT cancer risk model. Although this model utilizes the best radiation effects data we have at the present time, it has very large associated statistical uncertainties, often as large as ±100%. Additionally, our calculations ignore the possible presence of a radiation effect threshold and the protective effects of biological DNA repair mechanisms. The collective impact of these omissions in the basic LNT radiation risk model probably results in a large overestimate of the fatal cancer risk, especially at low diagnostic x-ray dose levels.
6) There is significant experimental evidence that at the radiation dose levels we are discussing here (i.e., below 10 rem, or 100 mSv), radiation can in fact cause a slight decrease in the cancer rate. This is termed radiation hormesis, and the associated dual-probability hormetic model that uses this concept will most likely soon replace the "conservative" LNT model on which estimates of cancer-inducing radiation effects in humans are traditionally based.
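The simple LNT arithmetic behind points 2) through 4) above can be sketched in a few lines of Python. The 5%-per-sievert risk coefficient and the 10 mSv effective dose for an abdominal CT scan are assumed round numbers typical of such estimates, not figures taken from this article:

```python
# Hedged sketch of the LNT risk arithmetic summarized above.
# Assumptions: LNT risk coefficient of 5% fatal cancer per sievert,
# ~10 mSv effective dose for one abdominal CT, ~3 mSv/year background.
LNT_RISK_PER_SV = 0.05

def lnt_fatal_cancer_risk(dose_sv):
    """Lifetime fatal cancer risk under the linear no-threshold model."""
    return LNT_RISK_PER_SV * dose_sv

ct_risk = lnt_fatal_cancer_risk(0.010)               # single CT scan
background_risk = lnt_fatal_cancer_risk(0.003 * 50)  # 50 years of background

print(f"single CT scan:         {ct_risk:.2%}")          # 0.05%
print(f"50 years of background: {background_risk:.2%}")  # 0.75%
```

These figures line up with the 0.05% and roughly 0.7% quoted above; set against the approximately 20% baseline cancer mortality, both are small perturbations.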
Pons and Fleischmann's simple fusion cell experiment for observing cold fusion. The palladium electrode absorbs the heavy hydrogen (deuterium) nuclei from the water bath and concentrates them so that their packing density in the palladium is enormously increased. This, together with the well-known "tunneling" effect, was believed to have produced fusion of the deuterium nuclei and to have raised the temperature of the water bath in a manner consistent with heavy-hydrogen fusion reactions.
In 1989, Dr. Stanley Pons (on the left in the photo above) and Dr. Martin Fleischmann (right), both at the University of Utah, shook the scientific world with the announcement that they had achieved cold fusion in their laboratory. At the time, Dr. Fleischmann was one of the world’s leading electrochemists. Cold fusion (if it in fact happens) occurs at “room” temperatures, as opposed to more classic, human-induced nuclear fusion that requires plasma temperatures of hundreds of millions of degrees Celsius, and presently is limited to occurring in special fusion devices such as TOKAMAKS or within fusion-based nuclear weapons.
Nuclear fusion is the name given to a nuclear process in which light nuclei, such as deuterium (hydrogen nuclei with twice the mass of normal hydrogen, consisting of a neutron and a proton), are forced together to create a single nucleus of a heavier element. In this case the heavier element is helium, consisting of two protons and two neutrons. Such fusion of light elements (below a mass number of about 56, which is iron-56) causes a spontaneous release of net binding energy: specifically, the binding energy of the single, heavier post-fusion nucleus minus the sum of the binding energies of the two pre-fusion nuclei.
Fusion, whether cold (i.e., at room temperature) or, more conventionally, hot (i.e., at temperatures of hundreds of millions of degrees C), is only useful in terms of energy generation if it occurs between very light elements, i.e., of mass numbers generally less than 10. As the mass number of elements increases from 1 (hydrogen) to 56 (iron), fusion reactions that release energy can take place, but above mass numbers of around 10 their energy yield is greatly decreased. Fusion of two nuclei, each above a mass number of 56, absorbs rather than releases energy, so it is not useful for either military or civilian applications. Heavy nuclei (most commonly uranium-235 or plutonium-239) can only produce energy when they are broken apart, or fissioned.
Nuclear Fusion: Military and Civilian Uses
Normally, to initiate a nuclear fusion reaction, two light nuclei must be forced together under conditions of extremely high mechanical pressure or extremely high temperature, since in the absence of these external factors, the mutual repulsion of nuclei due to their positively charged constituent protons prevents them from approaching each other closely enough to fuse.
Military Uses of Fusion: Thermonuclear Bomb
In the thermonuclear, or hydrogen, bomb, the binding energy of the resulting helium-4 nucleus minus the sum of the binding energies of the fusing deuterium and tritium nuclei (tritium being hydrogen with one proton and two neutrons) is released as 17.6 MeV of energy per fusion, carried mostly by an energetic neutron (about 14.1 MeV) and by the helium-4 nucleus itself (about 3.5 MeV).
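As a sanity check, the 17.6 MeV figure can be recovered directly from the binding energies of the nuclei involved. The values below are standard nuclear data table values, assumed here rather than taken from this article:

```python
# Binding energies in MeV (standard nuclear data table values).
be_deuterium = 2.224   # H-2
be_tritium   = 8.482   # H-3
be_helium4   = 28.296  # He-4

# Energy released = binding energy of the product
# minus the sum of the binding energies of the reactants.
q_value = be_helium4 - (be_deuterium + be_tritium)
print(f"D-T fusion energy release: {q_value:.1f} MeV")  # 17.6 MeV
```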
Why are fusion bombs so much preferred to uranium or plutonium fission bombs? Expressed on the basis of bang-per-unit-mass-of-explosive-material, a fusion reaction produces roughly four times more "bang" than a fission reaction.
Civilian Uses of Fusion: Fusion Reactor
In a fusion reactor (a not-yet-functioning technology), deuterium gas is initially electrically heated to hundreds of millions of degrees C, becoming a "plasma": a mixture of bare nuclei and completely detached electrons. A powerful magnetic field is applied to the plasma to confine it within a restricted volume, where the thermal motion of the deuterium nuclei, due to their high temperature, supplies sufficient energy for some of them to overcome their mutual electrical repulsion and fuse together. After the initial plasma has been created, or "ignited", some of the energy released by the fusion reactions is used to sustain the high temperature of the deuterium plasma and to power the magnetic field. The remainder of the released energy is used to generate heat and, ultimately, steam and electricity.
Nuclear Fusion Reactions
Deuterium, sometimes called heavy hydrogen, is symbolized as H2 or D. Each nucleus consists of one neutron (n) and one proton (p). He4* is the symbol for Helium-4 (sometimes referred to as an alpha particle) and the asterisk indicates when it is in an energetically excited state.
Deuterium-deuterium (D-D) fusion is a two‐step process and can be summarized as follows:
Step 1: D-D Fusion event occurs
D + D > He4* He4* is created in a highly excited state
Step 2: He4* immediately de-excites via three possible pathways
1) He4* > n + He3 + 3.27 MeV released as kinetic energy of n and He3 [50% likely]
2) He4* > p + H3 + 4.02 MeV released as kinetic energy of p and H3 [50% likely]
3) He4* > He4 + gamma ray, 23.85 MeV released almost entirely as a gamma ray [0.0001% likely]
Note: no neutrons or tritium are produced in pathway 3 (see following comments)
So, the average energy released per D-D fusion is [50% x 3.27] + [50% x 4.02] + [0.0001% x 23.85] ≈ 3.65 MeV
But only pathways 1) and 2) make significant contribution to D-D fusion energy release, since pathway 3) occurs only 1 in 1,000,000 times.
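The branching arithmetic above can be checked in a few lines; the 23.85 MeV Q-value used for the gamma pathway is a standard table value assumed here:

```python
# Average energy release per D-D fusion, weighted by pathway likelihood.
pathways = [
    (0.50, 3.27),    # pathway 1: n + He-3
    (0.50, 4.02),    # pathway 2: p + H-3
    (1e-6, 23.85),   # pathway 3: He-4 + gamma (negligible branch)
]
avg_mev = sum(frac * q for frac, q in pathways)
print(f"average energy per D-D fusion: {avg_mev:.2f} MeV")  # ~3.65 MeV
```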
Controversies Around Pons & Fleischmann’s Cold Fusion Experiment
Pons and Fleischmann claimed that they had achieved cold fusion because of an observed rise in temperature of the heavy water bath used in their experiment (see next section for a brief description of the experimental setup). However, skeptics of these experiments have pointed out that if cold fusion had occurred, in addition to a rise in temperature of the heavy water bath, two other observations were necessary:
First, through pathway 1) described above, the fusion reaction produces energetic neutrons. After the neutrons have been moderated (i.e., slowed down) by the small number of light hydrogen nuclei remaining in the heavy water bath, they are strongly absorbed by those same light hydrogen nuclei in nuclear reactions that produce 2.2 MeV gamma-rays. But nowhere close to the required number of 2.2 MeV gamma-rays has ever been measured in any cold fusion experiment.
Second, through pathway 2) described above, the fusion reaction produces H3, or tritium nuclei. Now tritium is radioactive so its presence should be very easily measurable using fairly basic nuclear instrumentation. But once again, no tritium has ever been measured in the heavy water bath in any cold fusion experiment.
Supporters of cold fusion, however, have defensive counter-arguments up their sleeve. Note, they point out, that if pathway 3) were the dominant one, significant amounts of neutrons or tritium would not be produced. But, say the skeptics, the likelihood of pathway 3) following a D-D fusion event is only 1 in 1,000,000; the other two pathways are each a million times more likely to occur. Aha, retort the cold fusion supporters, but what if, by some not yet understood mechanism unique to room-temperature fusion conditions, that third pathway did in fact become the dominant one? That would explain why very little tritium and very few 2.2 MeV gamma rays were observed.
The logic is sound, except there is no evidence to support pathway 3) being more likely in cold vs. hot fusion. As a peripheral issue, if the temperature rise of the heavy water bath in Pons and Fleischmann’s experiment is used to compute the number of neutrons that should have been produced if cold fusion were responsible for their production, Pons and Fleischmann would be “ex-Pons and Fleischmann”, since the neutrons produced would have killed them.
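That last point can be made quantitative with a back-of-envelope estimate. Assuming, purely for illustration, 1 watt of excess heat attributed entirely to D-D fusion, with an average of about 3.65 MeV per fusion and half of all fusions taking the neutron pathway:

```python
# Neutron rate implied by a given excess heat, if D-D fusion were the cause.
# All input numbers here are illustrative assumptions, not measured values.
MEV_TO_JOULE = 1.602e-13

excess_heat_w = 1.0           # assumed excess power
avg_mev_per_fusion = 3.65     # weighted average over the D-D pathways
neutron_branch = 0.5          # fraction of fusions producing a neutron

fusions_per_s = excess_heat_w / (avg_mev_per_fusion * MEV_TO_JOULE)
neutrons_per_s = neutron_branch * fusions_per_s
print(f"implied neutron emission: {neutrons_per_s:.1e} neutrons/s")
```

Nearly 10^12 neutrons per second emitted at arm's length from the experimenters would indeed have been promptly lethal, which is the point of the "ex-Pons and Fleischmann" quip.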
Experimental Setup of a Cold Fusion Experiment
The experimental setup for observing cold fusion is remarkably straightforward. One starts with a bath filled with heavy water. One places two metallic electrodes in the bath: a palladium electrode (the cathode) and a platinum electrode (the anode). The two electrodes are connected to a DC voltage source, with the negative side of the circuit connected to the palladium electrode. Finally, one inserts a thermometer into the water bath to measure any rise in temperature. The experimental setup is shown in the second picture above.
Palladium has the property of absorbing hydrogen atoms from water. When the hydrogen atoms have been absorbed by the palladium, they become much more densely packed than they would be in water or other substances. The likelihood that two particles combine is determined not only by their mutual proximity (enhanced by the palladium), but also by the "tunneling effect". Tunneling occurs in situations where two particles are classically prohibited from combining, as is the case, for example, for two positively charged hydrogen nuclei repelled by their electrical charges. But, due to the quantum-mechanical nature of nuclear reactions, there is some finite likelihood, albeit exceedingly small, that the particles will in fact combine. This effect is referred to as tunneling. Tunneling is also the nuclear mechanism by which alpha particles are emitted from some very heavy nuclei, such as uranium-235 or radon-222, despite such emission being classically forbidden by the balance of nuclear and electrical forces.
Within the palladium electrode, even if the tunneling likelihood for D-D fusion is extremely small, the sheer number of deuterium nuclei present means that the total number of D-D reactions might, in absolute terms, be substantial and could theoretically cause a measurable rise in the temperature of the water bath. That is the explanation provided by many supporters of cold fusion of how D-D fusion can occur at room temperature. And, if under those novel and poorly understood experimental conditions pathway 3) became the dominant one, only very small amounts of 2.2 MeV gamma rays or tritium would be produced.
If a rise in the temperature of the water bath in the cold fusion experiment were agreed as being the only marker of successful cold fusion, even that phenomenon has not been reliably demonstrated.
Following the cold fusion announcement, Dr. Pons became the chairman of the chemistry department at the University of Utah. Dr. Fleischmann passed away in 2012.
Candace Gilet has suggested some rules and principles for the pursuit of honest science, which are listed below:
1) The scientific community is responsible for checking the work of community members. Through the scrutiny of this community, science corrects itself.
2) Scientists actively seek evidence to test their ideas. They strive to describe and perform the tests that would prove their ideas wrong and/or allow others to do so.
3) Scientists take into account all the available evidence when deciding whether to accept an idea or not, even if that means giving up a cherished hypothesis.
4) Science relies on a balance between skepticism and openness to new ideas.
5) Scientists often verify surprising results by attempting to replicate the experiment.
6) In science, discoveries and ideas must be verified with multiple lines of evidence.
7) Data require analysis and interpretation. Different scientists may interpret the same data in different ways.
Unfortunately, Pons and Fleischmann eschewed many of the above before announcing their discovery of cold fusion.
Despite everything said, there is a renewed research effort at many top universities, and a number of positive results are being reported. The current experimental data on cold fusion is, however, still "in the closet", a condition shared by a few other scientific research areas, such as radiation hormesis, where at low radiation dose levels, typically like those in diagnostic x-ray imaging, it has been demonstrated in a number of well designed experiments that cancer incidence in fact decreases.
It would indeed be refreshing if the doors of all scientific "closets" could be thrown open, and the controversial data that emerge be analyzed objectively by respected scientists. Science would progress more rapidly to the benefit of us all.
If during such a 21st century scientific enlightenment, cold fusion was concluded to be a reality, the implications for mankind would be staggering. As mentioned in the introduction, using the top 6 inches of water in Lake Superior as hydrogen fuel for cold fusion, the energy needs of our entire planet could be met for 100 years, and this energy would be much cleaner than what is currently available with coal, natural gas, or nuclear energy generation.
The physicist Carl Sagan said it well: "Extraordinary claims require extraordinary evidence." So far, no compelling evidence of the existence of cold fusion has emerged, but we should all keep open minds.
Some time ago, a number of stories surfaced in the media about the U.S. military’s use of depleted uranium as a component of defensive armor plating and armor piercing tank shells, and the potential health risks to civilian and military populations from the use of this technology.
What is Depleted Uranium?
Depleted uranium is natural uranium, as it is dug out of the ground, but with most of the more radioactive uranium-235 isotope removed. The production process is, in fact, exactly the opposite of enriching natural uranium for use as fuel in nuclear reactors or as weapons-grade material for military applications. Natural uranium is about 99.3% uranium-238 and 0.7% uranium-235; there are a few other isotopes of uranium present in very small quantities, but these are not relevant to the present discussion. The depletion process further reduces the small amount of uranium-235 present, down to a level of around 0.2%. The resulting material is most commonly called depleted uranium, but is also referred to as DU, Q-metal, or D-38.
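A minimal sketch of the enrichment mass balance that produces DU as "tails" may help. The 0.711% natural assay and the 4% reactor-grade product assay below are assumed standard values, not figures from this article:

```python
# Uranium mass balance: feed = product + tails, with U-235 conserved.
def feed_and_tails(product_kg, x_product, x_feed=0.00711, x_tails=0.002):
    """Mass of natural uranium feed and DU tails per mass of enriched product."""
    feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
    return feed_kg, feed_kg - product_kg

feed, tails = feed_and_tails(1.0, 0.04)  # 1 kg of 4%-enriched reactor fuel
print(f"{feed:.1f} kg natural uranium -> 1 kg fuel + {tails:.1f} kg DU")
```

Every kilogram of reactor fuel thus leaves behind several kilograms of 0.2% DU, which is why such large stockpiles of the material exist.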
Uses of Depleted Uranium
DU is used in a number of civilian applications. Alloyed with other metals, such as tungsten or molybdenum, it has an extremely high physical density, nearly double that of lead, as well as having much greater hardness characteristics, almost complete lack of “flow” if installed without external support, and exceptional x- and gamma-ray shielding properties. Civilian uses of DU include counterweights in aircraft and ships’ keels, internal radiation shielding for radiation therapy accelerators, shielding for high x-ray energy industrial radiography equipment, and shielding of containers used to transport radioactive materials.
Military uses of DU alloys, often called Staballoys, include armor-piercing projectiles and defensive armor plating. DU has the property of being mechanically self-sharpening and chemically self-incendiary when experiencing high mechanical impact pressures, making it a very useful component of shell and missile design. Some late models of the U.S. Abrams tank built after 1998 have DU reinforcement as part of the armor plating in the front of the hull and the front of the turret. DU is also used in some thermonuclear weapons to enhance their destructive effect with a hail of small, extremely heavy, and hard projectiles--somewhat similar in concept to conventional explosive cluster bombs.
Health Effects of Depleted Uranium
The use of DU for non-thermonuclear-weapon military applications is controversial because of concerns about its potential long-term health effects on civilian populations and military personnel. Kidney, brain, liver, heart, and numerous other organs can be damaged by internal exposure to DU aerosol, because in addition to being very slightly radioactive due to the very small residual amount of uranium-235 present, DU is chemically extremely toxic. DU aerosol (airborne powder or dust) produced following the impact and combustion of DU-enhanced munitions can contaminate wide areas around a target impact site and then be inhaled by civilians and military personnel. In 2003, during a three-week period of the war in Iraq, 1,000-2,000 metric tons of DU munitions were used by U.S. forces, mostly in urban environments. It should be emphasized that DU used for civilian applications poses no known health risks.
Controversies Surrounding Toxicity of Depleted Uranium
The toxicity of DU is, nevertheless, still a point of controversy. Studies using cultured cells and laboratory rodents indicate the possibility of increased rates of leukemia, as well as genetic, reproductive, and neurological disorders from chronic DU exposure. A 2005 epidemiology review concluded, “The human epidemiological evidence is consistent with increased risk of birth defects in offspring of persons exposed to DU”.
In total contrast, however, the World Health Organization has stated that, “No consistent risk of reproductive, developmental, or carcinogenic effects have been reported in humans [following exposure to DU]”. But many scientists have called into question the objectivity of the WHO report. It should be mentioned again that in civilian uses of DU, no known risks are present.
Possible Alternatives to Depleted Uranium for Military Applications
Even if a replacement could be found for DU in military applications, health concerns would not be alleviated, since possible replacement materials for DU, such as tungsten-cobalt or tungsten-nickel-cobalt alloys, possess extremely carcinogenic properties themselves; far in excess, in fact, of those claimed for DU. So there appears to be little practical alternative to the continued military use of DU.
The European Parliament has repeatedly passed resolutions requesting an immediate moratorium on the further use of DU munitions, but France and Britain, the only EU states that are permanent members of the UN Security Council, have consistently rejected calls for such a ban, maintaining that its use continues to be legal and that the putative health risks in humans are “still unsubstantiated”. Unfortunately, the use of DU seems to be a critically important component of U.S. military technology, and it is unlikely that its use will be curtailed in the foreseeable future. We can only hope that a movement toward global peace will eventually retire the use of depleted uranium in military applications.
Depleted Uranium (DU) is composed of 99.8% uranium-238 and 0.2% uranium-235. Civilian uses of DU include counterweights in aircraft and ships’ keels, internal radiation shielding for radiation therapy accelerators, shielding for high x-ray energy industrial radiography equipment, and radiation shielding of containers used to transport radioactive materials. Military uses of DU alloys, often called Staballoys, include defensive armor plating and armor-piercing projectiles. Regarding health risks of DU, the World Health Organization has stated that, “No consistent risk of reproductive, developmental, or carcinogenic effects have been reported in humans [following exposure to DU]”. But many scientists have called into question the objectivity of this report. The European Parliament has repeatedly passed resolutions requesting an immediate moratorium on the further use of DU munitions, but France and Britain, the only EU states that are permanent members of the UN Security Council, have consistently rejected calls for such a ban.
Since the discovery of x-rays in 1895, x-rays and gamma rays have been the cornerstones of diagnostic radiology and radiation oncology. Within the last decade, however, new approaches to radiation oncology with x-rays and gamma rays have been developed that go even further toward increasing the radiation dose delivered to the tumor while reducing the radiation dose delivered to surrounding healthy tissues and organs, which is the primary technical goal of radiotherapy.
Intensity Modulated Radiotherapy (IMRT)
An important development in radiation oncology about a decade ago is called Intensity Modulated Radiotherapy, or IMRT. With this technology, the x-ray beam during a treatment is not of constant intensity, as it was in older technologies, but varies based on the changing irradiation geometry during a treatment delivery. In conjunction with a more established technology called the variable leaf collimator, IMRT is often able to produce better dose distributions within the patient’s body than pre-IMRT technologies. Not all cancer treatments are improved in quality using IMRT, but most are.
Image Guided Radiotherapy (IGRT)
An original deficiency of IMRT was that it relied on the tumor remaining in the same anatomical position during a treatment, as well as maintaining the same location in the body from one treatment fraction to the next. In reality, however, many tumors tend to move around. For example, lung tumors can shift positions with every breath, while tumors of the colon can change position from day-to-day due to migrations of bowel gas. In both cases, this can make “aiming” the radiation treatment very uncertain.
A solution to this problem was developed about 15 years ago and is called Image Guided Radiotherapy, or IGRT. CT scans of the patient are taken by specialized CT scanners that produce movie-like scans recording, for example, the respiratory movements of lung tumors. Devices placed on the patient's chest monitor respiratory movement using video cameras while the CT scan is in progress. The same devices are used during the actual treatment, and a correlation is established between the instantaneous position of the tumor and the phase of the respiration cycle, which enables IMRT-enabled therapy machines to irradiate the tumor only when it is in a predetermined, known position.
On Board Imaging
A related development is CT-like imaging devices that are integrated into the treatment machines themselves. Known as on board imaging, or OBI, these devices use the normal rotation around the patient of modern linear accelerators to collect x-ray transmission measurements through the patient that are immediately reconstructed into CT-like images. OBI can detect, for example, shifts in anatomical tumor position from day-to-day as well as during a treatment. With this additional information, tumor location corrections can be applied from one treatment to the next as well as more recently, using specially adapted treatment machines, during the course of each individual treatment. Not all radiotherapy centers yet have OBI capabilities, but many larger facilities and academic centers do.
Proton Therapy
A completely different approach to radiotherapy, available for many years but in very few treatment centers, is called Proton Therapy. Protons are nuclear particles that can be accelerated to very high energies by machines called cyclotrons or synchrotrons. When a beam of such high-energy protons enters the patient’s body, it delivers its energy quite uniformly within a specifically chosen range of depths that is made to vary as the proton beam rotates around the patient. Unlike beams of x-rays that pass all the way through the body and deliver unwanted dose to healthy tissues beyond the depth of the tumor, proton beams stop abruptly at specified depths; this means that no dose at all is delivered beyond the deepest location of the tumor and, therefore, downstream healthy tissues and organs are protected. Consequently, for many applications in radiotherapy, proton beams are able to deliver radiation with more precise conformation to tumor volume and with less injury to healthy tissues and organs than x-rays. For this reason, protons are an excellent choice for a specific class of tumors, where unwanted radiation outside the tumor volume can lead to especially serious side effects.
Since proton therapy involves very expensive facilities (costing typically $100,000,000 to $150,000,000), insurance companies only cover this treatment for a limited number of cancers where it has been clearly demonstrated that protons are more effective—or at least produce fewer side effects than x-rays. However, with time the cost of proton facilities will no doubt decrease, which should expand the range of applications for which insurance coverage is available. Examples of cancers where protons have been found to be superior to x-rays are tumors of the spinal cord, childhood brain tumors, and prostate tumors.
Today, there are still relatively few proton treatment facilities in existence: approximately 14 centers in the U.S., with 12 more in the planning or construction stage, and about 49 centers elsewhere, mainly in Europe and Japan.
Carbon-Ion Therapy
Using similar principles to proton therapy, one of the latest approaches to particle therapy uses carbon ions in place of protons. Carbon ions, approximately 12 times heavier than protons, require much higher energies and even more costly accelerators than protons, but in return produce even more conformal dose distributions and possess radiobiological characteristics that make them especially suited to treating certain types of highly resistant cancers. Typically, the accuracy of heavy-ion beam delivery is better than 1 mm. Carbon ions also produce biological damage that is virtually unrepairable, so tumors and normal tissues are able to recover much less from this kind of radiation than from x-rays or protons.
Because the criteria for beam delivery are even more stringent with heavy ions than with protons (due to the potentially devastating harm the heavy ions can cause if misdirected), it is considered almost mandatory to mount the beam delivery systems on isocentrically mounted rotating gantries, so that the patient does not need to move after setup or during changes in beam direction, movements which would greatly reduce the accuracy of beam delivery. However, because the accelerating energies for heavy-ion beams are very much higher than for protons, the physical size of the gantries needed is enormous. At the present time there is only one clinically operational heavy-ion 360-degree rotating gantry facility in the world: in Heidelberg, Germany. The gantry in this facility weighs 670 tons, and the accuracy of beam delivery is stated to be sub-millimeter. The picture at the top of this article shows the massive dimensions of the Heidelberg heavy-ion gantry.
An obvious question arises as to whether even heavier ions could be used which would provide still better radiobiological characteristics and further improved beam delivery accuracy. However, for ions heavier than carbon, these two characteristics in fact rapidly deteriorate, because heavier ions break up into lighter ones as they penetrate through tissue, thereby reducing beam delivery accuracy and the quality of the beam’s radiobiological characteristics.
So far, approximately 13,000 patients have been treated with carbon-ion beams in Japan, Germany, Italy, and China. But the U.S., despite having pioneered heavy-ion treatment at the Lawrence Berkeley National Laboratory in clinical trials that ran from 1975 through 1992, still lacks a clinical treatment capability.
The enormously high cost of heavy ion therapy, many times that for proton therapy, sheds some doubt on whether heavy-ion facilities could ever exist on patient income alone without the large subsidies from government funding that they presently enjoy.
However, from a financial viewpoint, treatment with heavy-ions has one mitigating advantage over treatments with x-rays or even protons. Because of the propensity of heavy–ion beams to cause essentially unrepairable damage, there is less benefit to fractionation than with other forms of radiation. In fact, for some cancers, heavy-ion treatments are delivered in 4-6 fractions rather than the 30-40 fractions that would be used for treating the same cancers with x-rays or protons. This proportionally reduces the cost of the heavy-ion treatments by a factor of approximately 4.
Staff at the National Cancer Institute have estimated that a total of 60 heavy-ion treatment facilities would meet the clinical need for heavy-ion treatment worldwide for the types of cancers for which heavy-ions have been shown to be beneficial.
Fast Neutron Therapy
Fast neutron therapy uses high-energy neutrons in place of x-rays for treating certain cancers for which the dependence of tumor regrowth on the presence of oxygen is a problem for local disease control.
Therapeutic fast neutrons have energies of 50-70 MeV. At these energies, fast neutron beams have penetration in tissue similar to that of 6-10 MeV (intermediate-energy) x-ray beams. Fast neutrons interact with tissue primarily by causing recoil of the nuclei of the tissue elements hydrogen, carbon, oxygen, and nitrogen. The radiobiological characteristics of these recoiling ions (with the possible exception of hydrogen) are very well suited for treating certain cancers that depend on the presence of oxygen to survive and grow.
The parameter that defines how much less dose needs to be delivered with fast neutrons vs. x-rays to produce the same effect on a tumor is called the relative biological effectiveness, or RBE. But as fast neutron dose decreases, RBE increases. This leads to potential complications at normal-tissue/tumor boundaries, where a sharp dose gradient exists and one would expect normal tissues to receive significantly less dose than the tumor. However, since the physical doses to normal tissues around this boundary are significantly lower, the RBE there is significantly higher, which led to unexpected normal-tissue complications in the early fast neutron trials in the U.S. Similarly, as the number of fast neutron fractions for a treatment increases, the dose-per-fraction decreases and, once again, the fast neutron RBE increases.
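The dose-dependence of RBE described above can be illustrated with a deliberately toy model. The functional form and the numbers below are invented purely for illustration; they are not clinical data:

```python
# Toy illustration only: an RBE that rises as physical dose falls.
def toy_rbe(dose_gy, rbe_at_high_dose=3.0, k=1.0):
    """Hypothetical RBE curve: about rbe_at_high_dose at large doses,
    rising toward twice that value as the dose approaches zero."""
    return rbe_at_high_dose * (1.0 + k / (dose_gy + k))

# Across a sharp physical dose gradient, the biological-effect gradient
# is shallower, because the lower-dose side is weighted by a higher RBE.
for d in (0.5, 1.0, 2.0, 4.0):
    print(f"physical dose {d} Gy -> x-ray-equivalent {d * toy_rbe(d):.1f} Gy")
```

In this toy example an 8-fold spread in physical dose (0.5 to 4 Gy) maps to only about a 6-fold spread in x-ray-equivalent dose, which is why normal tissues just outside the target can receive more biological effect than the physical dose distribution suggests.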
This latter phenomenon made the earliest fast neutron trials in the U.S., at the Lawrence Berkeley Laboratory in the 1940s, an unmitigated disaster: fast neutron RBE had been determined in in vitro cell cultures, and no allowance for the increased RBE was made when fractionated treatments were delivered to the first fast neutron patients.
Fast neutron therapy has been administered to about 30,000 patients in Germany, Russia, South Africa, and the United States. In the U.S., treatment centers were built in Seattle, Washington; Detroit, Michigan; and Batavia, Illinois, although not all of them are still actively treating patients.
The efficacy of fast neutron treatments has been convincingly demonstrated for many forms of cancer; but specifically head-and-neck cancers, which often depend on the presence of oxygen to grow, have been excellent candidates for fast neutron therapy.
So why has fast neutron therapy not caught on? The reasons are both technical and financial. From a financial viewpoint, as with current x-ray and proton therapy treatments, the fast neutron treatment heads (at least in the U.S. facilities) are mounted on isocentrically rotating gantries. Since the production of fast neutrons for therapy requires the acceleration of either protons or deuterons to high energies, the cost of an isocentrically mounted fast neutron treatment head equals the cost of an isocentrically mounted proton treatment head plus the additional hardware necessary to create and shape the fast neutron beam. The resulting cost of an isocentrically mounted fast neutron treatment facility is therefore significantly higher than that of a proton treatment facility.
From a technical viewpoint, there have been many enhancements to x-ray therapy over the past 20 years—as described in the earlier sections of this article; for example, IMRT, IGRT, on-board imaging, sophisticated treatment planning software, etc. Because fast neutron treatment facilities are still experimental and highly subsidized, many of the above developments in sophistication of treatment have not percolated down to the fast neutron facilities. Therefore, fast neutron facilities, other than enjoying rotating gantries, have not been able to take advantage of many of the technological treatment enhancements that have helped x-ray therapy.
A new type of x-ray radiotherapy was developed about 15 years ago called Tomotherapy. Tomotherapy machines look a bit like CT or MRI scanners. The patient enters a tunnel on a moveable couch and a built-in CT scanner ensures accurate positioning. The patient then moves slowly through the tunnel while many pencil-sized beams of x-rays are rapidly turned on and off in a carefully programmed sequence while the x-ray source rotates around the patient. For many types of cancers, Tomotherapy can provide a higher quality treatment than IMRT, and at times can almost match the precision of proton therapy.
An interesting innovation in radiotherapy has been the Cyberknife, a robotic radiotherapy system programmed to aim a moving pencil-like x-ray beam at the patient’s body from various directions (see 1st picture above). Like proton therapy, Cyberknife is most useful for treating small primary or metastatic tumors located within particularly sensitive normal tissues. The Cyberknife is usually integrated with an IGRT enabled CT scanner, so its beam can also be programmed to “follow” the movement of a tumor as a treatment progresses.
For small brain tumors, brain metastases (tumors from elsewhere in the body that have spread to the brain), and arteriovenous malformations (AVMs), the Gammaknife has proven itself a very effective form of radiotherapy. Introduced about 20 years ago, the Gammaknife uses 201 individual sources of the radioisotope cobalt-60 that send individual narrow pencil beams of gamma rays (equivalent to x-rays for this discussion) to cross over at a “focal point”. The patient’s head is then oriented and automatically moved so that this focal point is swept throughout the volume of the tumor to be treated, producing a very conformal dose distribution around one or more tumors. The Gammaknife procedure requires that the patient have a stereotactic frame placed on his/her head, involving four screws that penetrate through the scalp and partially into the skull, so that the MRI images initially taken for treatment planning can be spatially correlated with the Gammaknife’s internal coordinate system. For this reason, the Gammaknife is designed to deliver single treatments rather than the 20-40 treatment fractions comprising most radiotherapy regimens. In addition to being an effective treatment for intracranial tumors and metastases, the Gammaknife is frequently used to treat trigeminal neuralgia, a nerve disorder that causes severe and disabling facial pain. By strategically placing a high-dose lesion on the trigeminal nerve where it exits the brainstem, the pain signals can be blocked. The Gammaknife provides the high level of precision necessary to treat trigeminal nerve disorders effectively, and the procedure is increasingly being used as the first treatment for trigeminal neuralgia when medications fail to provide adequate pain relief.
Neutron Capture Therapy
The efficacy of radiotherapy in the future will markedly improve when techniques are developed to treat metastatic disease as effectively as local disease. One such emerging technology, still in its early clinical trials, is called neutron capture therapy (NCT), once dubbed “cellular surgery” because of its potential ability to irradiate individual tumor cells rather than visible macroscopic tumor volumes, as all other radiotherapy technologies discussed here are limited to doing. NCT is a very complex form of radiotherapy, combining the principles of chemotherapy (in that individual tumor cells are initially targeted by a chemical compound) and heavy-particle radiotherapy (with the radiobiological advantages brought by that form of therapy).
The patient is initially infused with a chemical compound that has been designed to selectively concentrate in tumor cells—both those within the primary tumor volume as well as the individual islands of cells near the peripheries of primary tumor volumes. Each molecule of the compound is labeled with a number of atoms of boron-10 (typically a “cage” of 12 boron-10 atoms). Boron-10 has an unusually high proclivity for absorbing low energy neutrons, immediately after which it splits into two charged particles: a lithium-7 ion and an alpha particle. These particles have a range in tissue of 4-10 µm, roughly equivalent to a typical tumor cell diameter. After the chemical compound has been given sufficient time to reach an optimal concentration ratio between tumor and normal cells, the local region of the body is irradiated with a specially designed “epithermal” neutron beam from a specially modified research nuclear reactor (in the U.S., research reactors at the Brookhaven National Laboratory and at the Massachusetts Institute of Technology were converted to deliver NCT therapy to patients, but are not active any more).
The epithermal neutrons entering the tissue (after losing most of their energy through collisions) are captured by the boron-10 nuclei, resulting in a cellular level charged particle distribution that mimics the distribution of the boron labeled compound. Consequently, much higher radiation doses can be delivered to tumor cells, both within the primary tumor volume as well as isolated tumor cells near the tumor’s periphery, than to neighboring normal cells.
There have been a number of clinical trials of NCT at some of the 10 or so sites in the world where such technologies exist. NCT has been shown to sometimes provide better therapy for certain tumors than any other radiotherapy available; for example, certain very difficult-to-treat head and neck cancers respond very well to NCT, probably because the radiobiological properties of NCT do not permit the presence of oxygen in tumors to support their regrowth. However, the complexity and very high cost of delivering NCT severely limit its continuing development and its future.
Important advances in radiotherapy and cancer imaging over the past 20 years or so have consistently increased the “therapeutic ratio”, i.e., the ratio of the dose delivered to tumor to the dose delivered to normal tissue; this, in turn, enables more dose to be delivered to tumor for the same level of normal-tissue complications, resulting in better local tumor control. These advances, combined with a deeper understanding of the biology of tumors, have produced very significant improvements in local tumor control without concomitant increases in normal tissue complications.
We have discussed a number of advances in radiotherapy technology, including intensity modulated radiotherapy, image guided radiotherapy, on-board fluoroscopic imaging, proton therapy, heavy-ion therapy, fast neutron therapy, tomotherapy, cyberknife, gammaknife, and neutron capture therapy.
The next quantum jump in the efficacy of radiotherapy in the future will most likely be the ability to treat metastatic disease as effectively as we can now treat local disease. Neutron capture therapy promises to achieve this to a certain extent, but its cost will greatly inhibit its further development.
With many of the technologies discussed, the extremely high cost poses a dilemma for society: how much public funding is it reasonable to expend on treating a subset of cancers that respond particularly well to certain very costly new therapies? It is time, indeed, to bring in permanent federal and private sector support for ultra-expensive cancer treatments.
An asteroid designated 1462-Zamenhof is named after Ludwig Lazarus Zamenhof, the inventor of the artificial language Esperanto ("hope"), who happens to be the great-uncle of the author. This picture has absolutely nothing to do with dark matter, but is just an opening for the author to brag a little about his ancestor!
Astrophysicists have hypothesized the presence of so-called dark matter in the universe because of discrepancies between the calculated mass of large astronomical objects (galaxies, stars, etc.), as determined from their gravitational effects on the rotation of neighboring astronomical objects, vs. their mass calculated from the “luminous matter” they contain (primarily gas and dust).
One somewhat exotic theory suggests the existence of a hidden valley, a parallel world made of dark matter, having very little in common with matter we know, that can only interact with our visible universe through gravity! Many experiments to detect proposed dark matter particles through non-gravitational means are under way.
The Discovery of Dark Matter
The first scientist to postulate the presence of dark matter based upon reliable scientific evidence was Vera Rubin, in the 1960s and 1970s. Rubin calculated the masses of galaxies in two ways: from their observed rotations, and from the luminous matter they contained. There were significant inconsistencies between the masses calculated by these two approaches, which led Rubin to hypothesize the presence of a yet-unidentified additional mass in the universe that would bring the results of the two calculation approaches into agreement. This “excess” mass was termed dark matter and is now believed to constitute approximately 85% of the mass of the universe.
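Rubin's line of reasoning can be sketched with simple Newtonian gravity. The galaxy mass and radius below are hypothetical round numbers chosen only to illustrate the scaling, not measurements of any real galaxy:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def circular_velocity(mass_enclosed_kg, radius_m):
    """Orbital speed for a circular orbit: v = sqrt(G*M/r)."""
    return math.sqrt(G * mass_enclosed_kg / radius_m)

# Toy model: if essentially all of a galaxy's luminous mass sits inside
# radius r0, Newton predicts orbital speed falling as 1/sqrt(r) beyond it.
M_lum = 1.0e41           # hypothetical luminous mass, kg
r0 = 5.0e20              # m (roughly 16 kpc)
v_inner = circular_velocity(M_lum, r0)
v_outer_predicted = circular_velocity(M_lum, 4 * r0)

print(v_outer_predicted / v_inner)  # 0.5: speed should halve at 4x the radius

# Rubin instead observed roughly *flat* rotation curves (v nearly constant
# far from the center), which requires the enclosed mass to keep growing
# with radius -- i.e., a halo of unseen (dark) matter.
M_needed = v_inner**2 * (4 * r0) / G  # mass needed for a flat curve at 4*r0
print(M_needed / M_lum)               # 4.0: four times the luminous mass
```

The gap between the mass the stars and gas account for and the mass the rotation demands is precisely the "excess" mass Rubin attributed to dark matter.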
Uncertainties about Dark Matter Predictions
As with any theory, there are some observations that do not agree with the predictions of dark matter as described above. For example, astrophysicists looking in the volume of space around our own sun have been unable to predict the excess gravitational force that the presence of dark matter should produce. Although the existence of dark matter is now generally accepted by the mainstream scientific community, some alternative theories of gravity have been proposed which try to account for the anomalous observations without requiring additional matter. However, these theories cannot account for the gravitational observations in galaxy clusters.
Related Concepts – Dark Energy
There is also a still-unverified form of energy believed to be present throughout the universe, called dark energy. Dark energy is believed to be the unseen influence that is causing the expansion of the universe to accelerate with time. Without the postulation of dark energy, astrophysical calculations suggest that the rate at which the universe expands should be decreasing with time; but astrophysical observations all indicate an acceleration of the expansion, so dark energy is the additional energy that must be invoked in models of the universe to account for these observations. Adding the mass-equivalent of the predicted dark energy to the calculated mass of dark matter, it is estimated that over 95% of the mass of the universe is a combination of dark matter and the mass-equivalent of dark energy.
The Composition of Dark Matter
Dark matter is believed to be composed of weakly interacting massive particles called neutralinos that interact only through gravity and the nuclear weak force. Alternative explanations have been proposed, but there is not yet sufficient experimental evidence to determine whether any of them is correct.
Dark matter is a form of invisible mass that pervades the universe. It is believed to be composed of neutralinos, as-yet-undetected elementary particles predicted by supersymmetry theory. Dark matter is believed to constitute approximately 85% of the mass of the universe, an inference drawn from the discrepancy between gravitational observations and the gravitational effects calculated on the basis of visible matter. There is also a predicted but still undetected energy present throughout the universe called dark energy, postulated to resolve the disagreement between astrophysical calculations, which suggest a universe whose expansion should be slowing with time, and physical observations, which show the expansion to be accelerating. Adding the mass-equivalent of this predicted dark energy to the mass of dark matter in the universe, it is estimated that over 95% of the mass-equivalent of the universe is due to the sum of dark matter and dark energy (see the second picture above).
Dark matter plays a central role in modeling of cosmic structure formation and galaxy formation and evolution and has measurable effects on the anisotropies observed in our cosmic microwave background. All these lines of evidence suggest that galaxies, clusters of galaxies, and the universe as a whole contain far more matter than is inferred through “visible” means. Further knowledge of dark matter and dark energy would add greatly to the understanding of the creation of elementary particles in the universe within a second or less following the big bang.
Hmm, seems like a giant astrophysics error that physicists are now trying to reconcile. Good thing the humanities are not that uncertain ...
Diagrammatic depiction of the principles behind carbon dating. Carbon-14 (radioactive, with a half-life of 5,730 years) is created in the atmosphere by nuclear interactions of cosmic rays with atmospheric nitrogen. Plants absorb the carbon-14 via carbon dioxide, and people and animals absorb carbon-14 by consuming the plants. An equilibrium between carbon-14 in the atmosphere and in living organisms keeps the carbon-14 / carbon-12 (non-radioactive) ratio constant. But when a living organism dies, no more carbon-14 is absorbed, so as the carbon-14 decays, the carbon-14 / carbon-12 ratio decreases with time. Measuring this ratio precisely gives the time elapsed since the organism's death.
The general technique of radiometric dating was first published in 1907 by Bertram Boltwood, and is now the principal source of information about the absolute age of rocks and other geological entities. It can be used to date a wide range of natural and man-made materials including the age of the Earth itself.
Carbon Dating of Organic Materials
A specific subgroup of radiometric dating is called Carbon Dating, and is generally used to date plant and animal remains. Carbon dating was invented by Willard Libby in the late 1940s, and soon after became a standard tool for archaeologists and anthropologists.
The most prevalent radionuclide found in the tissues of living plants and animals is carbon-14 (*C-14) (the asterisk is there as a reminder that the carbon isotope is radioactive). *C-14 is produced by the interaction of neutrons (generated in the atmosphere by cosmic rays) with nitrogen in our atmosphere. The *C-14 then combines with oxygen in the atmosphere to produce *C-14 carbon dioxide, which eventually makes its way into living plants through photosynthesis. The *C-14 then enters the food chain and the bodies of living animals. The *C-14 in living plants and animals gradually decreases in concentration with time due to radioactive decay, but it is continually replenished by the intake of additional *C-14 through photosynthesis and the food chain. Eventually, a constant equilibrium concentration of *C-14 develops in living plants and animals, in parallel with a corresponding equilibrium concentration of the non-radioactive isotope of carbon, C-12. Both C-12 and *C-14 are regulated to equilibrium through their continual absorption (as C-12 and *C-14 carbon dioxide) and the subsequent return of carbon to the environment. The difference is that C-12 does not radioactively decay while *C-14 does, so the equilibrium is reached at a lower concentration for *C-14 than for C-12. In animals, carbon is absorbed via plant food and excreted as carbon dioxide in breath. The net result of the above processes is that the ratio [*C-14 / C-12] in living plants and animals also equilibrates to a constant value.
When the plant or animal dies, the absorption of C-12 and *C-14 abruptly ceases. Consequently, while the C-12 that was present at the time of death remains at a fixed concentration, the *C-14 that was present at the time of death gradually decreases in concentration through radioactive decay, with its half-life of 5,730 years. Therefore, the ratio [*C-14 / C-12] also decreases with time after the death of the plant or animal. After about 5 half-lives of *C-14 (i.e., about 28,650 years), this ratio has fallen to only about 3% of its initial value (5 half-lives is a rule-of-thumb for estimating the time required for a radionuclide to decay to a few percent of its initial activity). By accurately measuring the ratio [*C-14 / C-12] using sensitive nuclear analytic instruments such as mass spectrometers, the time since the death of a living organism can be determined quite accurately over a time scale of approximately 60,000 years.
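The decay arithmetic behind carbon dating can be sketched in a few lines. This is a simplified illustration; real laboratories also apply calibration curves to correct for historical variations in atmospheric *C-14:

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def age_from_ratio(fraction_of_living_ratio):
    """Years since death, given the measured [*C-14 / C-12] ratio expressed
    as a fraction of the equilibrium ratio found in living organisms.
    Radioactive decay: N(t) = N0 * (1/2)**(t / half_life)."""
    return -HALF_LIFE_C14 * math.log2(fraction_of_living_ratio)

print(age_from_ratio(0.5))           # one half-life: 5730.0 years
print(age_from_ratio(0.25))          # two half-lives: 11460.0 years
print(round(age_from_ratio(0.03)))   # ~5 half-lives: roughly 29,000 years
```

After roughly ten half-lives the remaining *C-14 signal is too faint to measure reliably, which is what sets the ~60,000-year practical limit mentioned above.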
The Influence of Nuclear Weapons Testing on Carbon Dating
Carbon dating was more accurate prior to the 1950s than it is today. That is because, beginning in the 1950s, above-ground testing of nuclear weapons by China, the U.S., and the Soviet Union altered the previously constant [*C-14 / C-12] ratio in our atmosphere, partially invalidating the assumptions underlying carbon dating determinations. Although approximate corrections can be made, the ultimate accuracy of carbon dating became significantly lower after the 1950s.
Radiometric Dating of Inanimate Materials Using Other Radionuclides
Other forms of radiometric dating, relying on radionuclides other than *C-14, provide the ability to date materials over much longer time scales than is possible with carbon dating, and are used to date the inorganic mineral component of animal bones as well as minerals and rocks.
The principle of these other radiometric dating techniques is to measure, in the material of interest, the amount of a stable decay product that has accumulated relative to the amount remaining of its parent radionuclide, which has a much longer half-life than *C-14. Potassium-argon and uranium-lead dating are the most common examples of this method of radiometric dating.
Potassium–argon (abbreviated K–Ar) dating is used most frequently in geochronology and archaeology. The method is based on a very precise measurement of how much of the naturally occurring radioactive potassium isotope *K-40 has decayed into the stable gas argon-40. Potassium is a common element found in many materials such as micas, clays, and minerals, and contains a trace quantity of *K-40. Argon-40, the gaseous stable decay product of *K-40, is able to escape these materials while they are in a molten or uncrystallized state, but is trapped once they have solidified or recrystallized, and therefore starts to accumulate. The time since a rock sample solidified or recrystallized is obtained by accurately measuring the ratio of the accumulated Ar-40 to the amount of *K-40 remaining in the rock. The extremely long half-life of *K-40 (1.26 billion years) allows this method of radiometric dating to be used to calculate the absolute age of rock samples from a few thousand years to a few billion years old, as well as the age of the Earth itself.
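The standard K–Ar age equation can be sketched as follows. This is a simplified illustration that assumes no argon loss after solidification; the figure of roughly 10.7% for the fraction of *K-40 decays that yield Ar-40 (the rest yield calcium-40) is a commonly quoted value, though real geochronology uses more refined constants:

```python
import math

HALF_LIFE_K40 = 1.26e9                  # years
LAMBDA = math.log(2) / HALF_LIFE_K40    # total decay constant of K-40
BRANCH_TO_AR = 0.107                    # ~10.7% of K-40 decays produce Ar-40

def k_ar_age(ar40_atoms, k40_atoms):
    """Simplified K-Ar age: t = (1/lambda) * ln(1 + (Ar40/K40) / branch)."""
    return (1.0 / LAMBDA) * math.log(1.0 + (ar40_atoms / k40_atoms) / BRANCH_TO_AR)

# Sanity check: after exactly one half-life, half the original K-40 has
# decayed, and 10.7% of those decays have accumulated as trapped Ar-40.
remaining_k40 = 0.5
accumulated_ar40 = 0.5 * BRANCH_TO_AR
print(k_ar_age(accumulated_ar40, remaining_k40) / 1e9)  # 1.26 billion years
```

Because only the ratio of the two isotopes matters, the absolute size of the sample drops out of the calculation, which is part of what makes the method so practical.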
Other radionuclide pairs used in this form of radiometric dating are uranium-lead, rubidium-strontium, and uranium-thorium. The choice of radionuclides depends on the chemical form of the materials to be dated, on the age-range anticipated, and on the accuracy required.
Radiometric dating has contributed immeasurably to our understanding of geological history and has contributed greatly to anthropological research. However, the technique requires exquisitely sensitive and accurate nuclear measurement equipment which usually only dedicated laboratories possess.
Carbon dating is often used when the age of organic matter needs to be measured over a timescale of about 60,000 years. The accuracy of carbon dating was compromised after the 1950s when the C-14/C-12 ratio in our atmosphere was changed due to nuclear weapon testing by the U.S., Russia, and China.
For measuring longer intervals of time in inorganic materials (rocks, minerals, or the Earth itself), the amount of a stable decay product accumulated in the material is measured relative to the amount remaining of its much longer-lived parent radionuclide. Potassium-argon and uranium-lead dating are the most common examples of this method of dating; the former is used most frequently in geochronology and archaeology.
In 2009, the journal Archives of Internal Medicine carried a report that caused quite a stir in the media and among the medical community. The main conclusion of the paper was that the 57 million CT (computed tomography) scans done in the U.S. in 2007 were expected, on a statistically predictive basis, to produce 14,500 future cancer deaths in the patient population scanned. How reliable are the data presented in this report, and how logical are its conclusions?
Radiation Epidemiology of CT Scanning
The statement that 14,500 future cancer deaths may result from 57 million CT scans performed in one year in the U.S. at first seems terribly frightening, unless you recognize that diagnostic uses of radiation aren’t the only cause of cancer. There is a “baseline” cancer rate, due to various other factors, such as environmental carcinogens, man-made carcinogens (food, drugs, etc.), and natural background radiation.
This average baseline fatal-cancer rate in the U.S. and Europe is approximately 20%; i.e., 20% of the population will eventually die of cancer. The fatal cancers supposedly caused by CT scans, using the Archives of Internal Medicine data, constitute an individual risk of about 0.025%; i.e., if exposed to such a CT scan, one's average likelihood of contracting fatal cancer rises from 20% to 20.025%. Not quite so frightening any more.
Another way of looking at the predicted risk of death from cancer due to CT scans is to compare it with actuarial risks of death from other common human activities. For example: the risk of death from one typical CT scan, if we accept for now the Archives of Internal Medicine paper's conclusions and its estimate of 1 rem (10 mSv) effective dose per CT scan, is actuarially equivalent to the risk of death from smoking 220 cigarettes, drinking 360 bottles of wine, being exposed to air pollution by living in New York or Boston for 4 years, living for 14 years in Denver, or traveling 40,000 miles by car. From that perspective, the risk of death from fatal cancer contracted from a typical CT scan, which is usually associated with a benefit in medical outcome, again doesn't appear quite as threatening.
So, why the overplayed response by the media and efforts at damage control by the medical and medical physics communities? An important point of concern to radiologists and physicists—both experts on radiation effects associated with medical procedures-–is the statistical interpretation of the cancer deaths ostensibly caused by CT.
Most of the data linking radiation dose to fatal cancer come from observations of the effects of the Hiroshima and Nagasaki nuclear detonations in 1945, with additional data coming from studies of cancer in nuclear power plant workers in the U.K., patients exposed to x-ray fluoroscopy in U.S. and Canadian tuberculosis sanatoria between 1925 and 1954, and x-ray treatments of patients with ankylosing spondylitis (a chronic inflammatory arthritis of the spine) in the United Kingdom. Typical radiation doses produced by CT scanning, however, are roughly 20-500 times lower than the lowest of the dose levels in the historical data referred to above. Using historical data produced at substantially higher dose levels to extrapolate backwards to CT's much lower dose levels requires assuming that the dose-effect relationship is linear over this very wide range of doses, so only shaky statistical estimates can be inferred for the cancer deaths associated with typical CT dose levels.
The BEIR-VII Committee, the dominant U.S. scientific body dealing with human effects of radiation, recently stated that “at typical CT dose levels, statistical limitations make it difficult to evaluate cancer risk in humans”. This is committee-speak for “any estimates of cancer deaths caused by CT must be considered very shaky!”
To be fair, the Archives of Internal Medicine paper did include some discussion of the inherent uncertainties in the data presented, but it failed to emphasize strongly enough the statistically shaky properties of those back-extrapolated data, exactly the point made by the BEIR-VII Committee. The media, in reporting the paper's conclusions, likewise failed to mention the BEIR-VII report's low confidence in the putative cancer deaths caused at the relatively low dose levels produced by CT. Finally, the paper's authors underplayed, and the media reports failed to emphasize, the far larger number of patients who medically benefit from CT scans compared with those who would be statistically expected to die from cancer ostensibly caused by CT scans; that is, the very high benefit-to-risk ratio of CT was largely ignored in favor of focusing on the admittedly newsworthy aspect of the cancer deaths attributed to CT.
So what is the takeaway message? Do not be frightened by the reported ostensible risks of CT, but make sure that you have CT scans done in a facility certified by the American College of Radiology, and one that follows its official and rigorous recommendations for minimizing CT dose. In addition, make sure your doctor explains to you clearly the potential benefit of a CT scan, and that a medically useful outcome will result.
According to a report in the Archives of Internal Medicine, 57 million CT scans done in the U.S. are expected to produce 14,500 future cancer deaths in the patient population scanned. The epidemiology of radiation effects is complicated and is subject to frequent misrepresentation. Although the projected number of deaths from the CT scans seems very frightening, they should be put into proper perspective. Given that the baseline fatal cancer rate in the U.S. and Europe is roughly 20%, the fatal cancer rate due to CT scans (if the report’s conclusions are to be believed) raises this number from 20% to 20.025%. If the conventional epidemiological radiation risk/benefit model is invoked, one typical CT scan is actuarially equivalent to dying from cancer by living in Boston or New York for 4 years (due to air pollution). Finally, epidemiologists agree that at the relatively low doses typical of CT, the corresponding risk estimates are fraught with extremely large errors. In fact, the BEIR-VII Committee, the dominant U.S. scientific body dealing with human effects of radiation, recently stated that “at typical CT dose levels, statistical limitations make it difficult to evaluate cancer risk in humans”.
In conclusion, the report of 14,500 projected cancer deaths due to CT scans done in the U.S. in one year is based on extremely unreliable data and should be interpreted accordingly. Even if the reported cancer death rate from CT were true, for a single individual it represents an increase in risk from 20% to 20.025%, an increase that would most likely be buried in statistical "noise" and have very little meaning.
So do not be concerned by the reported statistical fatal cancer risks of CT, but make sure that you have CT scans done in a facility certified by the American College of Radiology, and one that procedurally follows the ACR’s official and rigorous recommendations for minimizing CT dose to patients. In addition, make sure your doctor clearly explains to you the potential benefit of the CT scan you are about to have, and what medically useful outcome might result.
The paper referred to in this article can be found in: Arch Intern Med. 2009 Dec 14;169(22):2071-7
About the Author