The bad science behind expensive nuclear
How a dubious theory of radiation damage based on fruit flies and a secretive weapons testing program came to be — and why its time may now be up.
On 23 May 2025, President Trump signed four executive orders on nuclear power, intended to speed up approvals of and reduce regulatory burdens on new nuclear reactors in America. Buried in one of them was a requirement that the Nuclear Regulatory Commission reconsider its use of ‘Linear No Threshold’ (or LNT). LNT is the hypothesis that the relationship between radiation dose and cancer risk to humans is linear and that there is no truly ‘safe’ level of radiation. It underpins nuclear regulation worldwide and it may be one of the most important rules that almost no one’s ever heard of.
In 2013, GE Hitachi Nuclear Energy, a joint venture between General Electric and Hitachi, applied to build three advanced boiling water reactors in Wales. Fission reactions would boil water into steam, turning a turbine, powering a generator, and producing electricity. This specific design had been employed in four Japanese reactors, which had survived earthquakes of a greater magnitude than have ever hit the UK without posing any threat to workers or the public.
Even though the reactor had a flawless safety record, the UK’s Office for Nuclear Regulation was not satisfied. Over the course of a four-and-a-half-year process, it demanded a series of design changes. These included the installation of expensive, bulky filters on every heating, ventilation, and air conditioning duct in the reactor and turbine building, a new floorplan for the room housing the filtration systems, and an entirely new layout for the facility’s ventilation ducts. The purpose of these changes was to reduce radiation discharges from the filter by 0.0001 millisieverts per year. This is roughly the dose a person receives from eating a single banana.
A CT scan hits a patient with ten millisieverts all in one go. Natural background radiation in the UK or US typically exposes people to two or three millisieverts over the course of a year, and exceeds seven millisieverts per year in Iowa, North Dakota, and South Dakota. A single flight from New York to London exposes a passenger to 0.04–0.08 millisieverts; 0.0001 millisieverts is 1/400 of even the lower end of that range, or about 72 seconds in the air per year worth of radiation.
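To see these orders of magnitude side by side, here is a minimal back-of-the-envelope sketch in Python. The per-banana dose and the eight-hour flight duration are assumptions added here for illustration; everything else comes from the figures above.

```python
# Back-of-the-envelope comparison of the 0.0001 mSv/year filter change against
# everyday radiation doses. The banana dose and eight-hour flight are assumptions.

FILTER_SAVING_MSV = 0.0001   # annual discharge reduction demanded by the regulator
BANANA_MSV = 0.0001          # approximate dose from eating one banana
CT_SCAN_MSV = 10.0           # typical CT scan
FLIGHT_MSV = 0.04            # lower end of a New York-London flight
FLIGHT_HOURS = 8             # assumed flight duration

bananas = FILTER_SAVING_MSV / BANANA_MSV
ct_scans = FILTER_SAVING_MSV / CT_SCAN_MSV
flight_seconds = FILTER_SAVING_MSV / FLIGHT_MSV * FLIGHT_HOURS * 3600

print(f"Equivalent to {bananas:.0f} banana(s) per year")        # ~1
print(f"Equivalent to {ct_scans:.5f} of a single CT scan")      # 0.00001
print(f"Equivalent to {flight_seconds:.0f} seconds of flying")  # ~72
```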
The regulatory ratchet that makes nuclear unaffordable can be summarized in a single acronym: ALARA. This is the internationally accepted principle that exposure to ionizing radiation – the kinds of radiation produced by x-rays, CT scans, and the radioactive isotopes of elements used in nuclear power plants – should be kept ‘as low as reasonably achievable’. ALARA has been interpreted in major economies like the US, UK, and Germany as meaning that regulators can force nuclear operators to implement any safety improvement, no matter how infinitesimal the public health benefit, provided it meets an ambiguous proportionality standard.
ALARA stems from the Linear No Threshold hypothesis, the theory about how the body responds to radiation that May’s Executive Order took on. Critically, the hypothesis holds that any amount of ionizing radiation increases cancer risk, and that the harm is cumulative, meaning that multiple small doses over time carry the same risk as a single large dose of the same total magnitude.
In other areas of our lives, this assumption would seem obviously wrong. For example, the cumulative harm model applied to alcohol would say that drinking a glass of wine once a day for a hundred days is equivalent to drinking one hundred glasses of wine in a single day. Or that a jogger who ran a mile a day for a month was putting her body under greater strain than one who ran a marathon in a day. We recognise that the human body is capable of repairing damage and stress done to it over time.
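For readers who think in code, here is a minimal sketch of the two competing assumptions. The risk coefficient and the 100 millisievert threshold are illustrative numbers chosen only to show the shape of each model, not real estimates.

```python
# Illustrative contrast between the LNT model and a threshold model.
# The coefficient and threshold below are assumptions, not real estimates.

RISK_PER_MSV = 0.00005   # assumed excess cancer risk per millisievert
THRESHOLD_MSV = 100.0    # assumed dose below which repair keeps pace with damage

def lnt_risk(total_dose_msv: float) -> float:
    """LNT: risk is proportional to cumulative dose, however it is spread out."""
    return RISK_PER_MSV * total_dose_msv

def threshold_risk(dose_per_exposure_msv: float, exposures: int) -> float:
    """Threshold model: small doses are fully repaired and add no risk."""
    if dose_per_exposure_msv <= THRESHOLD_MSV:
        return 0.0
    return RISK_PER_MSV * (dose_per_exposure_msv - THRESHOLD_MSV) * exposures

# Under LNT, one hundred 1 mSv doses carry exactly the same risk as one 100 mSv dose...
print(lnt_risk(1 * 100), lnt_risk(100))                # 0.005 0.005
# ...whereas under a threshold model the spread-out doses add nothing.
print(threshold_risk(1, 100), threshold_risk(200, 1))  # 0.0 0.005
```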
But the Linear No Threshold assumption is the orthodoxy in international radiation protection, and its implications in ALARA regulations are among the most significant contributors to nuclear energy’s unaffordability in most of the developed world. Yet these assumptions are not just counterintuitive: they may be unscientific.
The making of LNT
In 1927, Hermann Muller, a researcher at the University of Texas, published a breakthrough finding on the connection between radiation and genetic changes: fruit fly sperm cells treated with X-rays had a 15,000 percent higher mutation rate than untreated controls. These mutations were stable, heritable, and frequently lethal.
Muller became famous overnight. Researchers began to find similar results in maize, mice, and other organisms. Despite his newfound fame, the Great Depression hit his lab hard. Muller moved from the US to Germany in 1932 and then to the USSR a year later, where the government funded his lab generously. Among the friendships he made during this period was one with the Soviet biologist Nikolai Vladimirovich Timofeeff-Ressovsky.
In 1930, Muller had observed that ‘the frequency of mutations produced is exactly proportional to the energy of the dosage absorbed’, but he had not formally turned it into a dose-response model.
In 1935, Timofeeff-Ressovsky, in collaboration with the German radiobiologist Karl Zimmer and German-American physicist Max Delbrück, released research reaffirming that x-ray induced mutations in Drosophila are directly proportional to radiation dose. They extended the theory by arguing that mutations could result from a single blast of radiation, which would come to be known as ‘hit theory’.
Muller was a strong believer in the power of science to effect social change. In his case, this meant a twin passion for eugenics and socialism. In a 1936 letter to Stalin, he would describe himself as ‘a scientist with confidence in the ultimate Bolshevik triumph’, who believed that ‘human nature is not immutable, or incapable of improvement’. But his stay in the Soviet Union was not a happy one. The rise of Lysenkoism, the pseudo-scientific Soviet alternative to genetics, would result in his eventual return to the US in 1940.
The atom bombs dropped on Hiroshima and Nagasaki catapulted radiation to the top of the agenda, and Muller was awarded the 1946 Nobel Prize in Medicine. He used his lecture to cite Timofeeff-Ressovsky approvingly and declare that there is ‘no escape from the conclusion that there is no threshold dose’. The Linear No Threshold hypothesis had been born.

Muller’s work was highly influential and would go on to play an outsized role in the regulation of radiation.
But not everyone was as convinced by its implications as he was, even at the time. Robley D Evans had emerged in the 1930s as one of the world’s first experts on the impact of radiation on human health. Though a believer in the potential harms of radiation exposure, he rejected the LNT model that Muller was popularising.
In 1949, Evans published a paper that attempted to extrapolate the findings from studies on fruit flies, mice, and plants to humans, accounting for the biological differences. He found that even at a radiation dose of 2.5 röntgen (roughly 25 millisieverts, equivalent to two and a half CT scans) per day for 21 days, some organisms did not show any increase in mutations at all.
Regulations at the time limited radiologists to 0.1 röntgen of exposure a day, after higher rates of cancer and illness had been observed in the profession. Since 2.5 röntgen significantly exceeded these levels, Evans concluded that it is ‘highly improbable that any detectable increase in hereditary abnormalities will result’ from the low levels of exposure they faced.
Muller was unimpressed. He sent Evans a long letter full of criticisms, which Evans derided as containing ‘a few points of scientific interest, and many matters regarding personalities and prejudices’. In Muller’s view, Evans was backed by radiologists and by figures who had a vested interest in minimising the dangers of radiation because of their association with America’s gung-ho nuclear regulator, the Atomic Energy Commission (AEC).
Initially, none of these abstruse debates about fruit flies seemed to matter. Popular attitudes to radiation were cavalier and many physicians believed that radiologists were just disproportionately weak or sickly. X-rays were used routinely in shoe-fittings and for hair removal, radium-enhanced tonic was sold for medicinal purposes, and radium was routinely infused in cosmetics. American consumers could buy radium-infused handwash with the ominous slogan of ‘takes everything off but the skin’.
When the first nuclear power stations came online in the US in 1957, there were rules around radiation exposure for plant workers, but nothing governing background radiation around facilities. The civilian application of nuclear energy was initially uncontentious, but optimism would rapidly drain away. The US government, the technology’s biggest champion, would soon prove to be a liability. Nuclear energy would be crippled by events with only a tangential connection to the industry.
With friends like these
Over the course of the 1950s, the US conducted well in excess of 100 nuclear weapons tests, either in Nevada or in sites dotted around the Pacific Ocean. This was overseen by the AEC, which was in the odd position of both regulating civilian nuclear power and running atomic weapons testing. It was both the nuclear industry’s main promoter in the US and its regulator.
In 1953, fallout from a test in Nevada led to a number of local sheep falling ill and then dying. Then in March 1954, a test at Bikini Atoll in the Marshall Islands went seriously wrong. Castle Bravo was (and remains) the most powerful nuclear device that the US ever tested, roughly 1,000 times more powerful than the atomic bomb dropped on Hiroshima. Not only did it produce more fallout than anticipated, but a sudden shift in wind speed and direction caused the fallout to spread significantly further than intended, raining down on nearby islands.
The small population of Rongelap, an atoll 110 miles from the test site, was the worst hit. The nuclear fallout looked like snow, leading children to play with it, while much of the population ignored it and went about their daily business. The population was hit with 2,000 millisieverts of radiation over three days, significantly more than many Hiroshima and Nagasaki survivors. While there were no fatalities, significant numbers of people developed skin lesions and alopecia, and leukemia and thyroid cancer rates remain elevated among this population. The US Government evacuated a number of islands in the days after the blast, and Rongelap remains uninhabited after a failed return effort.

Less than a week later, the Associated Press revealed that the crew of the Japanese fishing vessel Lucky Dragon had suffered skin discoloration, hair loss, and nausea. During their voyage home from trawling 100 miles east of Bikini, their eyes and ears had leaked yellow pus. Panic ensued after it transpired that part of the crew’s cargo of tuna and shark meat had been sold before the danger was apparent. Fish prices collapsed and panic spread across the US and Japan as the authorities searched for contaminated fish.
There would be other fears. From the late 1950s onwards, public concern rose about elevated levels of strontium-90, an isotope produced by nuclear fission, in milk. The levels were never high enough to come close to causing harm, but a panic about children developing bone cancer and leukemia nevertheless spread. In 1956, Democratic presidential candidate Adlai Stevenson proposed a unilateral ban on hydrogen bomb testing to protect Americans from the effects of fallout.
At every turn, the AEC’s instinct was to play down these incidents and to avoid discussing fallout. The full scale of livestock contamination in Nevada would not emerge for decades, after the AEC allegedly altered scientists’ reports to change the causes of death for the animals. Meanwhile, AEC Chairman Lewis Strauss wrongly claimed that Lucky Dragon had been sailing inside the restricted test area, while suggesting that the crew’s injuries ‘are thought to be due to the chemical activity of the converted material in the coral rather than to radioactivity’.
When it came to the Marshall Islanders who had been evacuated, the AEC wrongly implied in its public statements that none of them had suffered real side effects; its acknowledgement that they were exposed to ‘some radioactivity’ scarcely conveyed the levels of radiation that they had encountered.
The AEC’s evasiveness troubled the public, but more importantly, it began to radicalise a section of the scientific community. Geneticists particularly bridled against the AEC’s attempts to push news articles downplaying the health risks of radiation, as well as their attempts to steer the scientific conversation. In a move almost perfectly calibrated to drive ill-feeling in the community, the AEC used its influence to bar Muller from delivering a paper on radiation-induced mutation at the UN’s 1955 Geneva Conference on Peaceful Uses of Atomic Energy.
Against this backdrop, the US National Academy of Sciences convened a committee to assess the Biological Effects of Atomic Radiation (BEAR) in 1955. The defining feature of BEAR I was its disharmony. The committee was split into separate panels of geneticists and pathologists, whose main activity became feuding with each other. The geneticists, led by Muller, pushed hard for LNT. The pathologists, however, were not believers. Sceptical of attempts to extrapolate to humans from fruit flies, the pathologists believed the geneticists had an overly simplistic view of how diseases developed.
Both panels’ reports were published, along with a compromise summary and set of recommendations. The summary concluded that ‘except for some tragic accidents affecting small numbers of people, the biological damage from peacetime activities (including the testing of atomic weapons) has been essentially negligible’. Critically, however, it also noted that ‘there is no minimum amount of radiation which must be exceeded before mutations can occur’ and that ‘the harm is cumulative’, meaning ‘the genetic damage done by radiation builds up as the radiation is received’. The implication is that the body has no way to recover from radiation damage: receiving small doses gradually over time is just as harmful as receiving the same total dose in a single blast.
The report recommended reducing the maximum lifetime cumulative radiation exposure to reproductive cells from 300 down to 50 röntgen (from approximately 300 CT scans to approximately 50), and limiting the total exposure received by a member of the public up to age 30 to ten röntgen.
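As a rough check on those parentheticals, röntgen can be converted to an approximate CT-scan equivalent. The conversion factor of about 9.6 millisieverts per röntgen and the ten-millisievert CT dose are approximations assumed here for illustration.

```python
# Rough conversion behind the CT-scan comparisons in the text. Assumes one
# röntgen corresponds to roughly 9.6 mSv in soft tissue and that a CT scan
# delivers about 10 mSv; both figures are approximations.

MSV_PER_ROENTGEN = 9.6
MSV_PER_CT_SCAN = 10.0

def roentgen_to_ct_scans(roentgen: float) -> float:
    """Express a dose given in röntgen as an approximate number of CT scans."""
    return roentgen * MSV_PER_ROENTGEN / MSV_PER_CT_SCAN

for dose in (300, 50, 10):
    print(f"{dose} röntgen ≈ {roentgen_to_ct_scans(dose):.0f} CT scans")
# 300 röntgen ≈ 288 CT scans, 50 röntgen ≈ 48, 10 röntgen ≈ 10
```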
Media coverage of BEAR I was as nuanced as you’d expect. The front page of the New York Times on 13 June 1956 screamed ‘Scientists term radiation a peril to future of man’, with the subhead ‘even small dose can prove harmful to descendants of victim, report states’. The AEC was berated in the media for having misled the public about the existence of a safe threshold.

Things were going to get worse for nuclear power and the AEC. By the end of the decade, ionizing radiation was under political and scientific siege.
The war on radiation
In 1957 Chet Holifield, who chaired the congressional Joint Committee on Atomic Energy, complained that he had to ‘squeeze the [fallout] information out of the Agency’ and accused the AEC of having a ‘party line’ of ‘play it down’, asking ‘is it prudent to ask the same agency to both develop bombs and evaluate the risk of fallout?’. International developments were also unhelpful. A 1958 UN report, which had drawn heavily on the work of American scientists, strongly supported LNT.
Another angle of attack opened up in medicine. By the 1950s, x-ray equipment had become widely used in hospitals and most pregnant women in the UK and US received an x-ray at least once during their pregnancy. Between the mid-1930s and mid-1950s, deaths from childhood leukemia doubled in England and Wales. Alice Stewart, an Oxford epidemiologist, doubted the prevailing view that this stemmed from a combination of industrial pollutants and better diagnosis. In 1958, she published an article in the British Medical Journal, presenting survey data showing that children who had been x-rayed in utero were twice as likely to die of cancer by the age of nine. While Stewart’s work was met with skepticism, a 1962 US study found that childhood cancer mortality was 40 percent higher among x-rayed children.
The 1950s also saw the birth of modern cytogenetics, the study of chromosomes. Thanks to improved staining and microscopy techniques, scientists finally established that humans have 46 chromosomes, and made the first links between chromosomal abnormalities and conditions like Down, Turner, and Klinefelter syndromes. They quickly took an interest in how different radiation doses affected chromosomes. In 1957, Michael Bender of Cold Spring Harbor established that x-rays could induce chromosome aberrations in human tissue cultures.
Five years later, along with his colleague PC Gooch, Bender took blood samples from volunteers, exposed them to different x-ray doses, and then examined the chromosomes during cell division. Not only did they find that the x-rays caused identifiable chromosome damage, they could predict the amount of damage based on the dose. They found damage at the lowest dose they measured, 50 röntgen, the radiation dose you’d expect from 50 CT scans.
It’s around this time that the seeds of ALARA – the goal of reducing background radiation from nuclear reactors to a level ‘as low as reasonably achievable’ – were sown. The principle combines the Linear No Threshold view that all ionizing radiation causes harm to humans with the view that it is never worth trading off some health costs against other benefits.
In a 1959 publication, the International Commission on Radiological Protection swung in a much more conservative direction. Historically, it had been believed there was a safe threshold, while the long-term genetic effects of radiation sat outside the expertise of most of its membership. Now, however, it recommended that radiation exposure be kept at ‘as low a level as practicable, with due regard to the necessity of providing additional sources of energy to meet the demands of modern society’.
Petrol was thrown on the fire in 1969, when John Gofman and Arthur Tamplin, two scientists at Lawrence Livermore National Laboratory, started publishing inflammatory claims about radiation exposure and cancer risk. Gofman and Tamplin claimed that if the entire US population were exposed to the Federal Radiation Council and AEC’s safe radiation limits from birth to age 30, it would result in 16,000 additional cancer cases a year. They subsequently revised this number to 32,000. As a result, they believed that man-made radiation exposure limits needed to be cut from 1.7 millisieverts a year to 0.17.
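The arithmetic that produces numbers like these follows directly from LNT’s linearity: multiply a tiny individual risk by an enormous population. The sketch below uses the modern ICRP fatal-cancer coefficient of about 0.05 per sievert and a round 200 million population as stand-ins, not Gofman and Tamplin’s own inputs.

```python
# Rough reconstruction of the style of calculation that LNT invites.
# The risk coefficient is the modern ICRP figure and the population is a round
# approximation; these are NOT Gofman and Tamplin's own numbers.

US_POPULATION_1970 = 200_000_000
ANNUAL_LIMIT_SV = 0.0017          # the 1.7 mSv/year limit they attacked
FATAL_CANCER_RISK_PER_SV = 0.05   # assumed linear risk coefficient

collective_dose_person_sv = US_POPULATION_1970 * ANNUAL_LIMIT_SV
excess_cancers_per_year = collective_dose_person_sv * FATAL_CANCER_RISK_PER_SV

print(f"{excess_cancers_per_year:,.0f} implied excess cancers per year")  # ~17,000
```

Under LNT this multiplication is mechanical, which is why the ICRP would later warn against using it to generate hypothetical casualty counts for very small doses spread across large populations.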
Gofman and Tamplin’s work was significant because of the radiation levels that they attacked. Much of the work discussed above, from Hermann Muller onwards, used levels of radiation tens, hundreds, or even millions of times greater than natural background levels – the sorts of levels usually seen only at nuclear weapons test sites or by x-ray technicians exposed to radiation every single day. This was understandable, given that radiation safety began in the worlds of medicine and nuclear weapons testing. It also reflected the statistical challenges of measuring the effects of very low doses. But it tells us relatively little about nuclear power; Bender and Gooch’s ‘low’ dose is four times higher than the average dose received by the recovery staff who worked on the Chernobyl accident site.
Gofman and Tamplin’s work was met with skepticism by their peers and was initially ignored. But after Gofman testified before the Senate Subcommittee on Air and Water Pollution and then the Joint Committee on Atomic Energy, the ensuing public fallout led Robert H Finch, the Secretary of Health, Education, and Welfare, to establish the Committee on the Biological Effects of Ionizing Radiation (BEIR), which would produce its first report in 1972.
This report reaffirmed LNT, but marked an important shift. The BEAR reports had emphasised genetic risks heavily, but the descendants of Hiroshima and Nagasaki survivors were simply not displaying signs of genetic damage at the rates the geneticists’ modelling on mice or fruit flies suggested they should. In fact, there were no statistically significant differences in birth defects, stillbirths, survival rates, or chromosomal abnormalities versus control groups, either at initial observations in the 1950s or after subsequent follow-ups. Anyone more than about 1,800 metres from the point on the ground directly below the blast did not experience heightened rates of cancer at all.
BEIR I started a trend, followed by subsequent BEIR reports, of focusing significantly more on the risk of cancer, rather than genetic damage. BEIR I didn’t take a position on the shape of the dose-response curve, but affirmed that even very low radiation doses could have carcinogenic effects.
The end of nuclear’s golden age
By the end of the 1960s, it was clear that the AEC was living on borrowed time and, along with it, the golden age of the US nuclear industry.
A big change was the growth of environmental consciousness. This had found an unlikely champion in Richard Nixon, who signed the National Environmental Policy Act into law in 1970. This required federal agencies to prepare environmental assessments and impact statements to evaluate their decisions. The AEC was not willing to kowtow and attempted to interpret these rules as narrowly as possible, resulting in a 1971 legal defeat over a planned nuclear plant on Chesapeake Bay. This forced the AEC to suspend new plant licensing for 18 months while it updated its rules.

It then endured a series of brutal congressional hearings over the course of 1972–73, in which independent experts and internal whistleblowers criticised its approach to regulating the safety and reliability of emergency core cooling systems in nuclear reactors. Witness after witness took the opportunity to attack the AEC for its lack of transparency and for allegedly rushing approvals.
In 1974, the Government decided that it had seen enough and abolished the AEC through the Energy Reorganization Act. In its place, the Nuclear Regulatory Commission (NRC) was established to regulate civilian nuclear activities, while the Energy Research and Development Administration managed weapons research.
The NRC’s institutional culture was markedly different to that of its predecessor. It very much saw itself as a regulator, not an advocate or an enabler. The AEC had already started to ramp up regulation in response to public and political pressure, but the NRC accelerated this trend. It formally adopted ALARA in 1975. This meant that the NRC would not issue a construction or operating licence until the applicant showed that any further shielding or processing equipment would cost unreasonably more than it saved. Inspectors would no longer simply assess whether facilities stayed below dose limits, but also how aggressively they drove doses lower year after year.
The combination of tougher radiation safety standards and new environmental rules caused the costs of nuclear power to spiral in this period. This can clearly be seen in individual projects. New radiation shielding, extra instrumentation, and the relocation of control systems to reduce exposure risk drove up materials bills. The amount of cabling required for a nuclear project in the US jumped from 670,000 yards to 1.3 million between 1973 and 1980, while cubic yards of concrete increased from 90,000 to 162,000. The number of construction man-hours per kilowatt of capacity surged from 9.6 in 1972 to 28.5 in 1980. The Sequoyah Nuclear Plant in Tennessee, scheduled for completion in 1973 at a cost of $300 million, was completed for $1.7 billion in 1981, after 23 changes to structure or components were requested by the regulator.
By 1980 the previous decade’s regulatory changes had driven a 176 percent increase in plant cost. New safety rules had resulted in greater complexity, in turn driving up the materials bill and engineering costs.
The number of regulatory guides began to climb, and projects took longer to complete, resulting in higher financing costs. A 1979 Congressional Budget Office study found that a one-month delay in the construction of a new reactor would cost an extra $44 million (in 2025 terms), with half this total coming from interest. The Public Service Company of New Hampshire, the builder of the prospective Seabrook Station, went bankrupt in 1988, after regulatory delays resulted in one unit being completed 14 years after its construction permit was issued and the other being cancelled. It is not surprising that a 1982 Department of Energy report found that utility companies with a large percentage of their electricity generated by nuclear power tended to have lower bond ratings, even after controlling for earnings and state regulatory quality.


The notorious Three Mile Island accident in 1979, when a reactor in Pennsylvania released radioactive gases and iodine into the environment after a partial meltdown, would worsen the political backlash against nuclear energy. No credible research has found evidence that the accident impacted the health of anyone in the surrounding area. However, the regulatory damage had already been done.
Thanks to its leadership position in the field, debates around radiation science in the US played an outsized role in shaping global standards. In 1977, the International Commission on Radiological Protection adopted its three fundamental pillars of radiation protection that remain in effect to this day: justification, optimisation, and dose limitation. In practice, these pillars mean that any introduction of a new source of radiation, like a new reactor, must first be shown to have a net overall benefit to society, and that all new doses of radiation received by workers and members of the public should be as low as reasonably achievable.
Governments around the world were adopting ALARA too. Britain was one enthusiastic example. The Health and Safety at Work Act, passed in 1974, adopted a subtly modified formulation: ‘as low as reasonably practicable’. Under ALARP, the regulator can require the inclusion of any measure whose cost it does not rule to be ‘grossly disproportionate’ to the benefit. To resist a requested change, the prospective licensee has to proactively challenge it, which rarely happens in practice, in part because it would mean suing the regulator it relies on to grant it a licence to operate.
The European Atomic Energy Community, founded by Belgium, France, Germany, Italy, Luxembourg, and the Netherlands to create a market for nuclear power, adopted ALARA in 1980, but its application across Europe was uneven.
In the 1960s, French nuclear approvals had been determined in secrecy by a small group of engineers, safety experts, and military scientists, in a process dubbed ‘French cooking’ by Anglo-American observers. This process of ‘technical dialogue’ relied on non-binding individual ministerial directives, safety reports, and guides. It was designed to allow rapid construction, while the flexibility served France’s ambition to become an exporter of nuclear technology. In fact, France did not have codified technical regulations for nuclear safety until the end of the 1970s.
The system gradually became more formalised and transparent over the course of the 1980s, but the French government largely resisted the regulatory ratchet until the Fukushima disaster in 2011. While this era’s approach would fly in the face of today’s norms around transparency and conflicts of interest, the vast majority of France’s operating nuclear reactors were built under this system during the 1970s and early 1980s. Today, nuclear power generates around two-thirds of France’s electricity, making it the most nuclearized country on earth.

By contrast, West Germany pursued aggressive safety standards and designed a legal framework with significant scope for public consultation and judicial review. In 1981, experts estimated that this was delaying close to $53 billion in nuclear investment (in 2025 dollars).
The end of the 1970s oil shock, a global collapse in coal prices, and a flatlining in energy demand in most developed countries from the early 1970s all contributed to making nuclear a significantly less attractive commercial proposition. The number of new nuclear reactor projects collapsed, and many that were under construction were cancelled.

The breaking of LNT
The science of radiation safety did not stop in the 1970s. Even as LNT was becoming the regulatory consensus, its scientific basis was beginning to unravel.
If the human body has ways of healing itself, then low doses spread over a sustained period seem unlikely to have the same effect as the same total dose received all at once. The body can use the intervening time to repair the damage caused by the low doses – time it does not have when it experiences one large dose.
Scientists at Cold Spring Harbor Laboratory found in 1949 that bacteria could repair damage caused by ultraviolet light once they were exposed to normal light. Even before this, scientists had assumed that cells must have some way to repair damage from radiation, given the amount of naturally occurring background radiation the earth is exposed to from sources like the sun, cosmic rays, and radioactive elements in the earth’s crust such as uranium and radon.
Watson and Crick’s work on DNA in the 1950s showed that it had a double-helix structure. This allows it to repair one damaged strand using information encoded in the other strand. When UV light or chemicals damage DNA, special proteins locate the damaged section, cut it out, and then fill in the gap with the correct sequence.
The 1960s and early 1970s saw a series of research breakthroughs showing processes like this at work, fixing both small-scale damage and larger, more disruptive lesions. But these findings did not immediately establish that damage from ionizing radiation could be repaired: unlike ultraviolet light, ionizing radiation can break both strands at once.
The idea of double-strand break repair would not be proposed until the 1980s, after initially promising experiments in yeast led to its exploration in mammalian cells. DNA repair, including double-strand break repair, is now universally accepted. It appears in foundational molecular biology textbooks, while the 2015 Nobel Prize in Chemistry went to three researchers for their study of DNA repair.
We can test LNT at the epidemiological level as well. If there is truly no threshold, we should expect to see higher incidences of cancer among populations that have endured prolonged radiation exposure. But study after study has failed to find this. In 1991, the US Department of Energy commissioned Johns Hopkins University to study the health records of 70,000 nuclear shipyard workers, comparing workers in radiation-exposed areas with workers in other areas. It found no evidence of adverse effects. Johns Hopkins repeated the study in 2022 with a bigger dataset, looking at over 372,000 shipyard workers from 1957 to 2011. Beyond some asbestos-related illness from the early years of the study, it found no evidence of heightened cancer risk among workers in radiation-exposed areas.
Of course, nuclear shipyard workers could well be fitter and less susceptible to illness than the average member of the public. But some unfortunate 1980s construction in Taiwan provides us with clues. Over the course of 1982–87, steel that had been contaminated with cobalt-60 was recycled into construction materials for flats. Over two decades, 10,000 people occupied these buildings, receiving an average total radiation dose of 400 millisieverts, about twenty times more than the average American would receive over that period. A study found that they suffered lower incidences of cancer death than the general population.
We can also look at regions with high natural background radiation. Thanks to thorium-rich mineral deposits, Kerala in southern India has some of the highest levels of background radiation in the world. In some areas, radiation levels reach up to 30-35 millisieverts per year, compared to the worldwide average natural background radiation of about 2.4 millisieverts per year. Again, no excess cancer risk has been found.
The scientific advisory panels that birthed LNT, like BEIR, have not modified their positions in their most recent reports, but they have acknowledged that considerable uncertainty exists at lower doses. The International Commission on Radiological Protection, for example, has underscored LNT’s total lack of predictive power, warning in 2007 that: ‘Because of this uncertainty on health effects at low doses, the Commission judges that it is not appropriate, for the purposes of public health planning, to calculate the hypothetical number of cases of cancer or heritable disease that might be associated with very small radiation doses received by large numbers of people over very long periods of time.’ Despite this, the ICRP continues to recommend its use, while acknowledging that ‘existence of a low-dose threshold does not seem to be unlikely for radiation-related cancers of certain tissues’. It now uses an adjusted model that assumes radiation delivered at a lower dose rate is half as harmful as the same total dose delivered at a higher rate. This is meant to account for the body’s natural repair processes.
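That adjustment is conventionally expressed as a dose and dose-rate effectiveness factor (DDREF) of two. Below is a minimal sketch of how it modifies the plain linear model; the risk coefficient is purely illustrative.

```python
# Minimal sketch of the dose-rate adjustment described above: the same linear
# slope, divided by a dose and dose-rate effectiveness factor (DDREF) of 2 when
# the dose is delivered slowly. The risk coefficient is purely illustrative.

RISK_PER_SV = 0.05   # assumed linear risk coefficient
DDREF = 2.0          # low-dose-rate exposure treated as half as harmful

def estimated_risk(dose_sv: float, low_dose_rate: bool) -> float:
    """Linear risk estimate, halved when the dose is delivered at a low rate."""
    return RISK_PER_SV * dose_sv / (DDREF if low_dose_rate else 1.0)

print(estimated_risk(0.1, low_dose_rate=False))  # 0.005: 100 mSv all at once
print(estimated_risk(0.1, low_dose_rate=True))   # 0.0025: 100 mSv spread over years
```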
So far, the French Academy of Sciences and the National Academy of Medicine remain the only national-level scientific bodies to have recommended abandoning the orthodoxy. In a 2005 joint report, they expressed ‘doubts on the validity of using LNT for evaluating the carcinogenic risk of low doses (<100 mSv) and even more for very low doses (<10 mSv) ... Decision makers confronted with problems of radioactive waste or risk of contamination, should re-examine the methodology used for evaluation of risks associated with very low doses and with doses delivered at a very low dose rate.’
LNT believers did, however, seemingly catch a break in 2015 with the publication of INWORKS, a study of cancer mortality after low-dose exposure among around 300,000 radiation workers in France, the US, and the UK. It was updated in 2023. INWORKS concluded that there was indeed evidence of a linear association between cumulative radiation dose and the risk of developing solid cancers. It also found that the risks of radiation-induced cancer mortality may be higher than previously reported.
The study, however, contains a number of methodological quirks that render the headline findings suspect. In the INWORKS study, background radiation is subtracted from workers’ dosimeter readings, even though for most participants, background exposure far exceeds their occupational dose. This results in misleading comparisons. For example, a Rocky Flats worker exposed to five milligrays (roughly equivalent to the same number in millisieverts) per year of background radiation is treated as equivalent to a Hanford worker receiving only one milligray per year, despite large differences in total radiation exposure.
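A toy example of the distortion, using hypothetical workers and assumed round-number doses rather than actual INWORKS data:

```python
# Toy illustration of the background-subtraction issue described above.
# The background levels and occupational doses are hypothetical round numbers.

workers = {
    # name: (assumed background mGy/yr, recorded occupational mGy/yr)
    "rocky_flats_worker": (5.0, 1.0),
    "hanford_worker": (1.0, 1.0),
}

for name, (background, occupational) in workers.items():
    total = background + occupational
    print(f"{name}: study dose = {occupational} mGy/yr, total exposure = {total} mGy/yr")

# Both workers enter the analysis with the same dose of 1 mGy/yr, even though
# one actually absorbs three times as much radiation in total each year.
```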
INWORKS uses a control group of workers who received 0–5 milligrays to avoid the healthy worker effect noted above in the context of the Johns Hopkins studies. However, this introduces a different bias: workers in this group often hold desk jobs and tend to have higher education, income, and healthier lifestyle habits than blue-collar workers. This explains some of the bizarre results elsewhere in the study. The next dose band up from the control group, which received a negligible 5–10 milligrays (that is, less than 0.2 milligrays per year), saw a six percent increase in cancer risk. This amounts to an 850 percent increase in cancer risk for every gray of radiation. Yet, from 10 to 300 milligrays, no further increase in cancer is observed. This suggests that the sharp jump is likely due to confounding socioeconomic factors, not radiation.
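The implied slope can be checked with one line of arithmetic; representing the 5–10 milligray band by its 7.5 milligray midpoint is an assumption made here for illustration.

```python
# Back-of-the-envelope check on the per-gray figure quoted above. Representing
# the 5-10 mGy band by its 7.5 mGy midpoint is an assumption for illustration.

excess_relative_risk = 0.06   # ~6 percent increase in cancer risk for the band
band_midpoint_gray = 0.0075   # assumed representative dose for the 5-10 mGy band

err_per_gray = excess_relative_risk / band_midpoint_gray
print(f"Implied excess relative risk per gray: {err_per_gray:.0%}")
# ~800%, in the same ballpark as the 850 percent quoted in the text
```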
The triumph of inertia
Throwing out decades of orthodoxy on radiation safety would be controversial and result in considerable bureaucratic inconvenience. Meanwhile, LNT defenders have certain forces on their side.
For a start, it will always be possible to label evidence about low-dose radiation as highly uncertain. While this logic should cut both ways, in practice, it creates a huge bias in the incumbents’ favour.
Scientists who believe in the existence of a safe threshold have the unenviable task of essentially proving a negative, definitively showing that no effect exists below a certain dose. Meanwhile, LNT advocates have a simple model that can always be defended using the precautionary principle.
These practical challenges make LNT borderline unfalsifiable. Whether it is statistical limitations, the challenge of controlling for other factors, or difficulties in follow-up, it will always be possible to find a reason to dismiss any single study that contradicts it.
While LNT is very conservative, incumbents are reluctant to challenge it. The clear regulatory line in the sand allows nuclear operators and developers to constrain tort judgments in the event that workers fall ill. Many incumbents are willing to pay the price of highly conservative exposure limits. In the UK, for example, EDF restricts worker radiation exposure to 10 millisieverts a year, which is half the statutory dose limit, and public exposure to 0.3 millisieverts, a fraction of the already negligible one millisievert limit.
Even the Trump Administration’s May 2025 Executive Order does not go beyond asking the NRC to ‘reconsider’ LNT and ALARA, describing LNT as ‘flawed’. As Jack Devanney, one of the most prolific and prominent critics of LNT and ALARA today, has pointed out, the NRC has already been asked to ‘reconsider’ LNT three times, most recently in 2019. ‘The NRC,’ he says, ‘pondered the issue for three years before proclaiming to no one’s surprise that it was sticking with LNT.’ The Administration would be on safe ground legally if it took a more assertive stance: Congress never mandated LNT or ALARA – the NRC adopted them itself.
Meanwhile, the costs of ALARA only continue to stack up. The case of the banana-like levels of radiation exposure is just one example. Tens of billions of dollars are added to the lifetime costs of nuclear projects to bury waste underground in deep geological repositories, facilities 200 to 1,000 metres below the surface of the earth, on safety grounds. There have been no fatalities in the history of international nuclear energy waste management.
In nuclear facilities, some regulators expect operators to prepare for double-ended guillotine breaks in piping: the assumption that a pipe could completely sever, causing the broken ends to ‘whip’ with intense force and damage all of the equipment around them. This is, in fact, an unrealistic assumption. Decades of operating experience and research indicate that pipes typically develop detectable leaks long before catastrophic failure. As a result, operators have to install extensive restraint systems that add to the maintenance burden. The US has started to ease these restraint requirements, but the UK has not.
While the nuclear industry can shift national regulators on individual requirements by attrition, this seems like a bad way of incentivising long-term investment in critical infrastructure. It seems highly unlikely that if we were starting out from scratch, we would end up with a radiation safety regime built on LNT. As the urgency of the energy transition is brought into sharp relief, governments are responding with one hand tied behind their back – even an Administration not otherwise known for its reticence.
Alex Chalmers is an Editor at Works in Progress, focused on AI and energy. He also writes Chalmermagne, a Substack focused on technology, finance, and policy. You can follow him on Twitter here.
Illustration by Vincent Kings.
Jack Devanney, author of Why Nuclear Has Been a Flop, deserves significant credit for his analysis and popularisation of the flaws in ALARA and LNT. You can read more of his work here.