Britain gave an elite group of engineers sweeping power and massive resources to deliver a nuclear power revolution. But their nuclear dreams crumbled.
On 17th October 1956, Queen Elizabeth II pressed the switch that activated Calder Hall, the world’s first grid-scale nuclear power station. As the country attempted to reassert world leadership after the trauma of World War II, The Times’s correspondent excitedly reported:
Today, with a boisterous wind to display the flags – and nearly wreck the marquees – the colourful and almost Wellsian-looking installation deeply stirs the imagination. Nothing like it exists elsewhere. Truly it has been described as a ‘courageous enterprise’; for Calder Hall represents the inauguration of a comprehensive programme of atomic power stations which, in time, will provide Britain with an ample supply of electricity without the use of coal or oil. Therein lies its magic.
By 1965, Britain had built more nuclear power stations than the US, USSR, and France combined. Stations were approved in months, regularly built in under five years, and operated with an impressive safety record. But around 1970, the UK’s lead faltered, and it has never recovered.

Hinkley Point C, under construction in the southwest of England, is set to be the most expensive nuclear power station ever built anywhere in the world. Each unit of power generated will cost six times more than one produced by a modern South Korean nuclear power station or an early British reactor. By the time the plant comes online, it will have taken at least 13 years to build and will have run £17 billion over budget. Britain last successfully completed a nuclear reactor in 1995.

Most accounts emphasize the importance of technology. The story goes that the UK’s 1965 decision to embrace a convoluted domestic reactor design created a series of white elephants that rendered nuclear power uneconomic in Britain. After privatization, nuclear power was written off by the markets, and the British nuclear industrial and skills base was left to shrivel. This view was also widely held at the time. As early as 1972, Sir Arthur Hawkins, the chair of the Central Electricity Generating Board, the UK’s powerful state-run electricity generator, dubbed the choice ‘an economic catastrophe we must not repeat’.
There is a grain of truth in this story, but it is incomplete. The tide turned against nuclear only after Britain had switched over to proven American technology in the 1980s. Meanwhile, even the clunkiest British reactors were made to operate efficiently once they were placed in the hands of more professional management in the 1990s.
Instead, the story of British nuclear power more closely resembles the story of the British state in the postwar period. At first, Britons appeared to be living in the age of the technocrat. Talented scientists and engineers armed with a near carte blanche from the government to build with minimal consultation seemed unstoppable. But somewhere along the way, this model broke down. It didn’t take incentives seriously, underdelivered, and failed to adapt to changing public expectations. Experts failed, and the public withdrew the blank check they had given them. The story also serves as a valuable corrective to the idea that the world would work better if we simply handed more power to engineers.
The first wave
In October 1946, in a bad-tempered cabinet committee meeting, British Foreign Secretary Ernest Bevin thundered: ‘We’ve got to have this thing over here whatever it costs. We’ve got to have the bloody Union Jack on top of it’. Bevin was infuriated by the American decision to cut off all nuclear cooperation with its wartime allies, including those which had helped it develop the atomic bomb. Casting aside Treasury reservations, the government concluded that if it could not share a nuclear deterrent with the Americans, the empire on which the sun never set must have its own.
To arm the bomb, Britain needed plutonium. Drawing on the wartime memory of British scientists who had worked at Los Alamos, the UK built the Windscale Piles. These reactors, named because they were stacks of graphite blocks, would serve as the inspiration for the first civilian UK reactor: the Magnox.
A Magnox reactor packed unenriched metallic uranium fuel rods into a graphite core, cooled by carbon dioxide gas. The design was named after the magnesium alloy cladding that stopped the uranium fuel rods from reacting chemically with the gas. If the fuel rods were withdrawn early, the neutron reactions taking place in the reactor yielded weapons-grade plutonium; left in place, the same reactions generated the heat used to produce electricity.
The Magnox program was born in 1953, when the government gave the go-ahead for Calder Hall. In response to the growing secret cost of the British nuclear weapons program and a desire for greater energy security, the government decided to find a civilian use for the technology.
The most striking element of the Magnox program was its speed: between 1956 and 1971, Britain built 26 reactors. Projects were routinely approved within months, and reactors typically achieved grid connection within four to five years of starting construction. This is all the more impressive given that there was no standard Magnox. The Atomic Energy Authority played a significant role in coordinating research and design, as well as in training consortium staff, but as there was no British company with the skills and capacity to build an entire nuclear power station, the plants were built by competing consortia, each to slightly different designs.
The approvals process was straightforward. The Central Electricity Authority, the body responsible for managing Britain’s nationalized energy supply, conducted site studies and consulted local authorities to identify suitable locations. This consultation would factor in space, proximity to a water source, terrain, and existing development plans. Once the authority had selected a site, the minister would publish a notice in The London Gazette and in the local press, and notify anyone who owned or leased land within 300 yards of the site. Objectors could then submit a letter of objection to the Ministry of Power.
If the ministry received significant objections from either locals or a public body, the minister would hold a local planning inquiry. Before 1957, there was no legal requirement to do so, but ministers held them anyway as a means of avoiding local discontent.
The early nuclear planning inquiries were sedate affairs. In 1958, the minister ordered an inquiry into the prospective Trawsfynydd nuclear power station in Snowdonia. The local authority and community were both supportive, but the National Parks Commission was concerned about the impact that the station would have on the scenery. Over three days, one inspector from the Ministry of Power and another from the Ministry of Housing and Local Government heard representations from both sides. Their final report, including appendices, ran to just 50 pages. The environmental assessment for Hinkley Point C alone ran to over 44,000 pages.
After the inquiry, the minister would give consent to build the power station under the 1909 Electric Lighting Act, the standard legal mechanism for approving new electricity generation at the time. The minister would then invoke section 35 of the 1947 Town and Country Planning Act, which allowed the government to grant planning permission for public developments without involving the local authority.
This straightforward approach reflected broad public support. For example, while conservation groups were concerned about the damage Trawsfynydd might do to the landscape in Snowdonia, the local community welcomed the power station. Merionethshire, the county that hosted the site, had lost 20 percent of its population since 1880 due to economic stagnation. Former residents wrote letters expressing their hope of moving back if the power station created new jobs.
This system needed refinement. These early nuclear power stations were run by the UK Atomic Energy Authority, a government body that enjoyed immunity from civil liability and most regulation. The government’s ambition was that as nuclear scaled, it should be treated like any other energy source, rather than a special technology to be built by a secretive government body.
Against this backdrop, the UK legislated its first licensing system for nuclear power in 1959. The Minister of Power would be responsible for issuing licenses and setting conditions. The government also created the Nuclear Installations Inspectorate within the ministry, initially with a team of just 13 inspectors. By comparison, the modern Office for Nuclear Regulation has over 400 specialist technical staff.
In 1960, the ministry granted the first prospective license under the new scheme to Bradwell, a Magnox station in Essex. The license was a modest affair with just seven conditions attached to it. These were light touch by modern standards and included obtaining the consent of the minister before fueling the reactor, keeping records, and testing equipment and materials.
Trouble ahead
The Magnox reactors served Britain well. Wylfa, the last Magnox standing, only came offline in 2015. Reconstructing Magnox lifecycle costs is tricky. But they appear to have been cheap. One reconstruction, which factored in numbers from early Magnox decommissioning, placed the lifecycle costs of electricity from Magnox reactors at eight pence per kilowatt hour in modern money. This would be 5.3 pence per kilowatt hour cheaper than electricity from Hinkley Point C.
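The cost gap can be restated with simple arithmetic. Both input figures come from the reconstruction cited above; the Hinkley Point C figure below is merely what they imply:

```python
# Illustrative arithmetic only: figures from the lifecycle-cost
# reconstruction quoted in the text, in modern pence per kWh.
magnox_cost = 8.0  # Magnox lifecycle cost, p/kWh
gap = 5.3          # stated gap versus Hinkley Point C, p/kWh

# Implied Hinkley Point C lifecycle cost on the same basis
hinkley_cost = magnox_cost + gap
print(f"Implied Hinkley Point C cost: {hinkley_cost:.1f} p/kWh")

# On these numbers, Magnox power is roughly 40 percent cheaper
print(f"Magnox discount: {gap / hinkley_cost:.0%}")
```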
The design undoubtedly had its flaws. It relied on natural uranium, which is good for producing weapons-grade plutonium, but not for power generation. Britain initially had little choice but to go down this route as it lacked domestic enrichment capabilities, and the US refused to export enriched uranium. At the time, enrichment was difficult, and involved converting uranium into a gas and forcing it through thousands of porous metal barriers. By the time Britain’s first specialist enrichment facility kicked into gear in the mid-1950s, the country had already settled on a design for civil reactors.
The usable fuel content in natural uranium is quickly burned up, forcing frequent refuelling. The fuel’s Magnox coating corroded rapidly, losing strength when temperatures in the reactor rose above 400 degrees Celsius. These limits meant the reactors had to run cooler, producing lower pressure steam and wasting more of the heat generated by the fuel. As a result, the Magnox reactors generated electricity about one third less efficiently than American light water reactors.
By the early 1960s, the government was searching for a new design. An intense bureaucratic battle played out behind the scenes. The Central Electricity Generating Board was predisposed towards an inexpensive American light water reactor design, as the US had resumed nuclear cooperation with the UK a few years earlier. Meanwhile, the Atomic Energy Authority feared that a switch away from gas-cooled, graphite designs would deprive them of valuable R&D and design work.
Instead, they championed the Advanced Gas-cooled Reactor (AGR), an upgraded version of the Magnox, which the Atomic Energy Authority had prototyped a decade earlier. The AGR used enriched instead of natural uranium, and each fuel rod was sealed in stainless steel, which could handle much higher temperatures. This allowed the plant to run at twice the temperature of the Magnox and, theoretically, higher levels of thermal efficiency than a light water reactor. This was scientifically elegant, but untested at scale. Choosing the design would also drive orders towards British firms.
In 1965, the Ministry of Power decided on the tender for Dungeness B, the second reactor to be built on the Dungeness headland in the south of Kent. In the end, through a combination of sharp bureaucratic elbow work and low skulduggery, it went ahead with a twin-reactor AGR station.
The original sin of British nuclear?
Amid stiff competition, Dungeness B is arguably the worst British infrastructure project of the modern era.
Atomic Power Constructions Limited, which won the tender to build Dungeness B, was a borderline moribund company. It had submitted a token bid with no expectation of winning, as a way of signaling to the Central Electricity Generating Board that the company was still alive and hopeful of future work. The Atomic Energy Authority covertly helped the company produce its supporting technical documentation. It underbid significantly on price and committed to an ambitious four-year schedule, promising to beat Magnox timelines while building a significantly larger and more complex first-of-a-kind design.
As one former employee observed later: ‘Nobody was more surprised than the staff of APC when their offer for an AGR secured acceptance.’ Major technical problems and shoddy work blighted almost every element of the plant. Engineers had to redesign the boilers after they couldn’t fit them into the available space. A key supplier of gas circulators went bankrupt, which forced the project team to scramble for replacements. On site, workers welded the steel poorly and installed the wrong materials in pipe and boiler supports, while the Nuclear Installations Inspectorate ordered extensive redesigns. Atomic Power Constructions went bust four years into the project and other companies had to step in and rescue the project.
Even so, work had already started on further AGR stations at Hinkley Point B, Hunterston, Hartlepool, and Heysham, split across different consortia. Having different companies build different reactors to a similar specification had worked well enough for the small and comparatively straightforward Magnox units, but it underperformed for the novel and more complex AGR. Different teams ran into similar engineering issues with the reactor and boiler and invested time resolving them independently. All ran behind schedule.
By 1974, nine years after the start of the program, Britain still had no operational AGRs, while costs had escalated by an average of 50 percent. The Central Electricity Generating Board concluded that ‘no further orders for AGRs can be contemplated in the near future’. For three years, officials contemplated another design switch.
By the end of the decade, even the AGR’s most ardent bureaucratic defenders had given up the ghost and advised dropping the design in favor of the American pressurized water reactor (PWR). But Tony Benn, then Secretary of State for Energy, ordered another two twin-unit AGR stations in 1978. In what reads like a parody of industrial strategy, Benn justified the move on the grounds that it would preserve the jobs of turbine and boiler makers.
The first reactor at Dungeness B came online in 1983, 13 years behind schedule, four times over budget, and operating significantly below capacity.
In many accounts, the selection of the AGR was the original sin of British nuclear: a bad technology choice that tainted everything that followed. By the mid-1970s, a steady chorus of voices was already labeling the program a costly mistake and drawing unflattering parallels with Concorde, another high-profile, state-led, technically ambitious program that was insulated from market forces. There is clearly some merit to this view.
But simply blaming the AGR for the failure of UK nuclear seems unsatisfactory. While the early builds were disastrous, the last two stations that Benn ordered, Heysham 2 and Torness, were completed in eight years. Heysham was constructed within five percent of its original cost estimate. In 1973, Britain abandoned the consortium model, merging the different construction companies into the National Nuclear Corporation. This improved coordination and meant that, unlike earlier AGR builds, these latter stations shared a common design.
In short, the AGR wasn’t perfect, but by itself, the technology choice didn’t have to be fatal. And it is striking how little impact the shortcomings of the AGR program had on either public or elite opinion about nuclear power. As late as 1978, when the Dungeness B saga was widely known, 57 percent of the British public and every major political party continued to support nuclear power. The debate was confined to the technocrats, who were focused on design choice as opposed to the merits of pursuing a program at all.
The tipping point
Ironically, British nuclear’s downfall really came after the government decided to back the right technology. When Margaret Thatcher came to power in 1979, her government saw nuclear power as a vehicle for breaking the power of the coal mining unions that had held successive governments to ransom.
Sharing none of her predecessors’ desire to preserve technological independence from America, and keen to adopt a standardized design, the government embraced the PWR. While the PWR was less fuel efficient than the AGR on paper, it was simpler. By using water as both coolant and moderator, the PWR combines the two functions into one sealed, high-pressure circuit, avoiding the AGR’s separate graphite core and gas coolant system. The Central Electricity Generating Board agreed, arguing that the PWR had ‘substantially lower capital costs per unit of electricity than AGR’ and was ‘proven mainstream worldwide technology with some 150 reactors in operation worldwide, compared to 5 AGR stations in the UK’.
Unfortunately, in 1979, a PWR plant was involved in what was then the worst ever civilian nuclear accident. The meltdown at the Three Mile Island Generating Station in Pennsylvania, despite causing no deaths or illnesses, catalyzed opposition to nuclear power around the world. A plurality of the British public remained supportive, but only narrowly so. The old approach of debating nuclear policy among technocrats with little public input appeared to be reaching its end.
To allay these concerns, the government decided to pursue a policy of maximum transparency. Announcing the government’s planned PWR program in December 1979, the Secretary of State for Energy David Howell spoke of ‘the Government’s determination to ensure that full information is in the hands of Parliament and of the public before and as we make decisions to expand our nuclear capacity’. They decided to subject Sizewell B, the first PWR, to a full-scale inquiry, intending it to reassure the public about its safety.
Sir Frank Layfield, a distinguished planning lawyer, was appointed to lead the inquiry, which started work in 1981. The government gave Sir Frank unusually broad terms of reference. As well as the impact of the development on the local environment and amenities, the inquiry would cover the safety features of the design itself (something normally confined to the licensing process) as well as the need for a power station at all and the government’s long-term energy policy.
The resulting inquiry sat for 340 days. Instead of reassuring the public about the safety of Sizewell’s design, the inquiry descended into a public debate about government energy policy in general. Weeks were dedicated to military uses of nuclear technology, the ethics of importing uranium from Namibia (then occupied by apartheid South Africa), the impact of uranium mining on Australian aboriginal land rights, and other countries’ experimental reactor designs.
The technocrats, unprepared for this degree of scrutiny, acquitted themselves poorly in the public gaze. The Central Electricity Generating Board had not completed crucial safety documentation before the start of the inquiry and repeatedly filed addenda. The Nuclear Installations Inspectorate gave the impression of being subordinate and entirely reactive to the Board.
The inquiry was also dissatisfied with the quality of the evidence the Board supplied on construction timelines and the economics of nuclear power versus fossil fuels, dubbing it ‘insufficient for a proper understanding of either the approach or the assumptions adopted’.
Overreach
This might not have mattered too much if it had been the sole example of bureaucratic arrogance. During the 1970s, against a backdrop of growing concern about the environment, British Nuclear Fuels Limited began negotiating contracts to reprocess spent nuclear fuel from Japan at Sellafield, the nuclear site in the north of England that hosted Calder Hall and the Windscale Piles. While the plan had openly been discussed and written about in the nuclear trade press, it largely escaped public and political attention, until a 1975 Daily Mirror front page warned of the ‘Plan to Make Britain World’s Nuclear Dustbin’.
The ensuing public outcry prompted the Royal Commission on Environmental Pollution to study the question of nuclear’s impact on the environment. Its 1976 report recommended that: ‘There should be no commitment to a large programme of nuclear fission power until it has been demonstrated beyond reasonable doubt that a method exists to ensure the safe containment of long-lived, highly radioactive waste for the indefinite future’.
Amid this outcry, industry was largely silent. In the eyes of both the officials and engineers, nuclear waste wasn’t a real problem, so they refused to take the public debate seriously. They assumed that nuclear waste would be recycled and used to fuel fast breeder reactors, which would convert the uranium in spent fuel into new plutonium for energy generation. As a result, waste was left to pile up in tanks on the basis that it would be resolved later.
The fast breeder reactor proved uneconomical and the British program never progressed beyond a small demonstrator plant. It also had another undesirable side effect. As the fast breeder reactor produced plutonium as a byproduct, it was seized on by opponents of nuclear power as proof that the civilian program was supporting military efforts. Frank Barnaby, a former nuclear scientist at the Atomic Energy Authority, had written Man and the Atom, a passionately pro-nuclear book in 1971. The book went as far as advocating nuclear explosions to dig canals. In 1975, after the fast breeder reactor program was unveiled, he published a new edition of the book. While much of the text was reproduced verbatim, he drew much more pessimistic conclusions and advocated for non-proliferation.
Disposal facilities in the UK met significant backlash in more or less every community where they were proposed, while the Sellafield site became a focal point for protests. In 1978, a year after its formation, Greenpeace began to publicize the dumping of nuclear waste at sea by the Atomic Energy Authority. After five years of embarrassing publicity led to the threat of a boycott by ship and transport workers, the government stopped the practice. The failure to anticipate the public outcry is arguably all the more confusing, given that nuclear waste had sparked outbreaks of local resistance as far back as the 1950s. In 1954, two years before Calder Hall opened, the Atomic Energy Authority had proposed to dump waste in the Forest of Dean, only to be defeated by local residents invoking obscure thirteenth-century land rights.
This persistent failure may have been a product of the industry’s culture. The founders of the Atomic Energy Authority and godfathers of the British nuclear industry were figures like Christopher Hinton, John Cockcroft, William Penney, and Edwin Plowden. They were products of World War II nuclear research, where they enjoyed sweeping authority to pursue a high-level mission with little interference or need for consultation. This mindset suffused the industry they helped to build.
In their defence, the technocrats were struggling to keep up with the public mood well beyond nuclear. Public opposition to megaprojects reached boiling point during the 1970s.
Between 1965 and 1973, the newly created Greater London Council embarked on a plan to build four motorways, stretching for 478 miles, through many of London’s most prized historic neighborhoods. Before the Ringways scheme, the planners almost always got their own way. Opposition to infrastructure of any kind had historically been localized, never widening into a broader movement. With the Ringways, the planners tried to fight too many people at once, lost, and demonstrated that public opposition to megaprojects could work. Similar revolts against high modernist planning were unfolding internationally at the same time. The most famous example was in New York, where Robert Moses’s highway and urban renewal schemes faced neighborhood campaigns led by figures like Jane Jacobs.
Sometimes the technocrats could open the door for the opposition themselves. In 1971, a commission planning a third airport for London selected the village of Cublington in Buckinghamshire. One member of the commission objected on environmental grounds, legitimizing popular opposition. After a political firestorm, the airport was moved to a more expensive site in the sandbanks off the Essex coast, in the Thames Estuary, eventually rendering it economically unviable. The project was canceled in 1974, and various attempts to revive it, most recently in 2016, have flamed out.
Another chink in the armor was visible with the new towns program. The government bulldozed through the opposition of local residents and even court injunctions in the 1940s to designate new towns like Stevenage, Cumbernauld, and Peterlee. These were idealistically planned to relieve overcrowding in bombed-out cities like London and Glasgow. Architecturally, the experimental designs often failed, producing alienating concrete jungles. The last new town project to be pursued to full completion was Milton Keynes, designated in 1967.
The chill
While the seeds of the backlash were being sown, nuclear licensing was becoming more conservative. In 1974, the government had consolidated workplace health and safety regulation into a single unified body, including sectors like nuclear power. The new Health and Safety Executive, which the Nuclear Installations Inspectorate joined, mandated that risk in the workplace be reduced ‘as low as reasonably practicable’ (ALARP).
Crucially, the Health and Safety Executive was a nondepartmental body. The Ministry of Power had the clear objective of building new power stations. It wasn’t in the minister’s interests to rush approvals and build unsafe stations, but nor was it in their interests to endlessly postpone. The same dynamic is visible elsewhere: France, which undertook a vast nuclear construction program in the 1970s and 1980s, did not have an independent nuclear regulator until 2006, while in South Korea, which builds the developed world’s cheapest reactors, the prime minister’s office is heavily involved in nuclear regulation. When Britain made the licensing process independent, this link was broken. There was no longer an actor in the regulatory process incentivized to say yes.
The influence of ALARP is clear on Sizewell B. While theoretically based on an existing US design, the Central Electricity Generating Board added an extra containment dome, two additional backup systems to complement the two already in the design, and built in greater earthquake protection. This theoretically enhanced safety, but at a high cost. New guidelines, published in 1992, stated that safety measures must be taken until the cost is ‘grossly disproportionate’ to the risk averted. This meant that, depending on the risk, measures could be between two and ten times more expensive than the equivalent harm averted.
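The ‘grossly disproportionate’ test described above can be sketched as a simple decision rule. This is a hedged illustration, not regulatory guidance: the function name and the inputs are hypothetical, and only the 2–10 disproportion range comes from the text:

```python
# Sketch of the ALARP 'gross disproportion' test. The disproportion
# factor (2-10, per the 1992 guidelines cited in the text) scales with
# risk: higher-risk scenarios justify spending more per unit of harm
# averted. All concrete numbers below are hypothetical.

def alarp_requires_measure(measure_cost, harm_averted_value, disproportion_factor):
    """A safety measure is required unless its cost is 'grossly
    disproportionate' to the harm averted, i.e. unless
    cost > factor * monetized value of harm averted."""
    return measure_cost <= disproportion_factor * harm_averted_value

# Hypothetical example: a £5m upgrade averting £1m of expected harm
# must still be built when the risk attracts a factor of 10, but not
# when it attracts a factor of 2.
print(alarp_requires_measure(5e6, 1e6, 10))  # True:  £5m <= £10m
print(alarp_requires_measure(5e6, 1e6, 2))   # False: £5m >  £2m
```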
The combination of a less deferential population and a system of safety-focused but cost-indifferent regulators increasingly collided with an undiminished bureaucratic appetite for megaprojects. By ignoring concerns about waste, the nuclear industry alienated the public. By being seen to fudge the figures on nuclear economics and attempt to strongarm regulators, it offended politicians. Amid mounting skepticism, the industry was fast running out of friends, just when it began to really need them.
The premature death of UK nuclear?
The construction of Sizewell B began in 1988 and was completed in 1995, making it the last new reactor to be completed in the UK. The project cost approximately £6 billion in 2025 money for a 1.2-gigawatt reactor, roughly a third of the cost of Hinkley Point C per gigawatt of capacity. It was built in roughly the time it took to build a Magnox reactor, despite generating three times as much electricity and facing much stricter safety requirements. Despite this apparent success, the writing was on the wall for the industry.
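The per-gigawatt comparison can be checked on the back of an envelope. The Sizewell B figures come from the text; the Hinkley Point C figure below is simply what the stated three-to-one ratio implies, not an independent estimate:

```python
# Back-of-envelope check of the per-gigawatt cost comparison.
sizewell_cost_bn = 6.0      # £bn, 2025 money (from the text)
sizewell_capacity_gw = 1.2  # GW (from the text)

sizewell_per_gw = sizewell_cost_bn / sizewell_capacity_gw  # £5bn per GW
implied_hinkley_per_gw = 3 * sizewell_per_gw               # £15bn per GW, implied
print(f"Sizewell B: £{sizewell_per_gw:.0f}bn/GW; "
      f"implied Hinkley Point C: £{implied_hinkley_per_gw:.0f}bn/GW")
```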
The government scrapped plans for three more PWRs, keeping only the plans for Hinkley Point C. The Central Electricity Generating Board applied for permission to start work on the project in 1987, triggering a 14-month inquiry that heard from 600 witnesses and navigated 22,000 objections. The final report, completed in 1990, recommended approval, but suggested a final decision be postponed until a full government review of nuclear policy had taken place.
In 1995, the government formally announced that it would no longer fund the construction of nuclear power and that there is ‘no case for Government intervention to distort the electricity market by providing finance or guarantees for one form of generation over another’. It acknowledged that ‘private finance is unlikely to be available at present for new nuclear construction’.
This lack of enthusiasm stemmed from the industry’s most public defeat. In 1987, the Conservatives had won reelection on a platform that included electricity privatization. As investors began to scrutinize the operating record and finances of the AGRs, they came away underwhelmed.
More importantly, they didn’t want to take on responsibility for the cost of decommissioning the stations once they had reached the end of their lives. Decommissioning, which is regulated exceptionally conservatively in Britain, involves removing spent fuel from the reactor and cooling system, dismantling the plant, and decontaminating the site. The Central Electricity Generating Board had valued the decommissioning liabilities at £10.4 billion (in 2025 prices), but when auditors came to scrutinize the numbers, their estimate ballooned to between £22 and £36 billion. Meanwhile, without life extension work, the continued operation of the Magnoxes would generate less in revenue than their decommissioning liabilities.
The government concluded that the Central Electricity Generating Board had been deliberately burying the true cost of the nuclear program for years, subsidizing it with the revenue from coal and oil power stations. In the autumn of 1989, on the eve of privatization and having failed to find private sector buyers, the government decided to keep nuclear in public ownership.
Nationalization with Thatcherite characteristics
This exclusion from privatization was a public humiliation for the UK’s nuclear industry, but also marked the beginning of a turnaround. With the Central Electricity Generating Board dismantled by privatization, the government shifted the AGRs and Sizewell B into two state-owned companies: Nuclear Electric and Scottish Nuclear Limited.
The sheer scale of the UK’s nationalized system had made it easy to bury losses. Individual stations were not profit centers, so their management had little incentive to drive efficiency improvements. Meanwhile, status lay with engineers and scientists, rather than the managers of individual power stations: the manager of a facility like Torness could not authorize payments above £14,000 in today’s money without consulting Whitehall. The Central Electricity Generating Board, which was keen to buy the PWR, was uninterested in improving the economics of the AGR and thus undermined its own business case.
With nowhere to hide, and the old technocrats replaced with professional managers, performance improved. Managers studied higher-performing American and European reactors and embedded better preventive maintenance. Nuclear Electric ran a quality improvement process that mapped every gap between theoretical maximum AGR output and the actual number of megawatts generated. Engineers who identified cost savings or output improvements received bonuses.
Between 1990 and 1996, the English AGR fleet went from being operational 47 percent of the time to 76 percent of the time. This was the equivalent of adding two new power stations. In 1990–91, Nuclear Electric’s operating loss stood at £1.1 billion. By 1995–96, before Sizewell B had started commercial operation, the AGR fleet was profitable for the first time.
In 1996, the government merged Nuclear Electric and Scottish Nuclear Limited into British Energy and privatized it through a stock market flotation. Over the next few years, it paid steady dividends, cut debt, and was crowned ‘the darling of the utilities’ by the Financial Times. IPO investors were sitting on 355 percent gains by the start of 1999. It even began to expand into the US. But hubris struck. Under pressure from the City, the board handed back almost £1 billion to shareholders in 1999 through dividends and buybacks, stripping away the firm’s financial buffer.
British Energy was then hit by a turn in the market. Privatization helped enable the ‘dash for gas’ in the UK, as private companies rapidly built cheap gas-fired plants: gas-fired generating capacity jumped from 1 gigawatt in 1990 to 23 gigawatts in 2000. Initially, the price was set by the most expensive generator needed to meet demand, through a centralized mechanism called the Pool, which slowed the impact of cheap gas-fired plants. But in 2001, the government abolished the Pool; generators instead struck direct contracts with electricity suppliers and large customers, and prices collapsed to their lowest levels since privatization.
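The Pool’s marginal-pricing rule can be sketched in a few lines of Python. This is an illustrative simplification with made-up offer figures (the real Pool involved day-ahead bidding and capacity payments), but it shows why the most expensive plant needed sets the price for everyone, and why cheap gas only pulls the price down once there is enough of it to displace the marginal plant:

```python
def pool_price(offers, demand_gw):
    """Clearing price under marginal pricing: dispatch the cheapest offers
    first, and set the price at the cost of the most expensive generator
    needed to meet demand."""
    dispatched = 0.0
    for cost_per_mwh, capacity_gw in sorted(offers):
        dispatched += capacity_gw
        if dispatched >= demand_gw:
            return cost_per_mwh
    raise ValueError("insufficient capacity to meet demand")

# Illustrative offers as (cost in £/MWh, capacity in GW).
offers = [(18, 6), (25, 10), (60, 8)]
print(pool_price(offers, 12))  # the 25 £/MWh plant is marginal, so price = 25

# Adding cheap gas capacity pushes the expensive plants out of the
# marginal slot, lowering the clearing price for all output.
offers_with_gas = offers + [(15, 10)]
print(pool_price(offers_with_gas, 12))  # now clears at 18
```

Under bilateral contracting after 2001, by contrast, there was no single clearing price: each generator negotiated its own terms, exposing high-cost plants like the AGRs directly.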

British Energy was already locked into expensive fuel reprocessing and waste management contracts, and even with the improved efficiency, operating and maintaining an AGR was expensive. In September 2002, the government stepped in with a £410 million emergency loan to keep the reactors running. As part of the subsequent restructuring, the government took on responsibility for decommissioning and waste costs, while shareholders were wiped out. In 2009, the government arranged the sale of British Energy to France’s state energy company EDF.
Unlearning nuclear
By the late 2000s, concerns about climate change and rising gas prices led the government to return to nuclear power as an option. EDF revived the idea of building a third nuclear power station at Hinkley Point. Instead of investing directly, the government committed to pay a fixed, inflation-protected price of £92.50 per megawatt-hour generated, in 2012 money, or £133 in 2025 pounds. This is higher than the wholesale price has ever been since the modern electricity market was created, except during the crisis that followed Russia’s invasion of Ukraine.
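The mechanism behind that commitment is a contract for difference: the generator sells into the wholesale market, and a counterparty pays the gap up to an inflation-indexed strike price, or claws back the excess when the market price is higher. A minimal sketch, assuming a hypothetical cumulative inflation factor of about 1.44 between 2012 and 2025, consistent with £92.50 growing to roughly £133:

```python
def indexed_strike(base_strike, inflation_factor):
    """Inflation-protected strike price: the base (2012) price uprated by
    cumulative inflation since the contract was struck."""
    return base_strike * inflation_factor

def cfd_payment(strike, market_price, mwh):
    """Difference payment for a settlement period: positive is a top-up to
    the generator, negative means the generator pays money back."""
    return (strike - market_price) * mwh

strike = indexed_strike(92.50, 1.44)  # roughly £133/MWh in 2025 money

# With wholesale power at £70/MWh, every MWh attracts a top-up...
topup = cfd_payment(strike, 70.0, 1)
# ...and if wholesale spikes above the strike, the flow reverses.
clawback = cfd_payment(strike, 200.0, 1)
print(strike, topup, clawback)
```

Because the strike price is paid per megawatt-hour generated, billpayers carry the difference for as long as wholesale prices sit below it.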
For Sizewell C, the next new nuclear project, the government abandoned this model on value-for-money grounds. Instead, the government financed the cost of development upfront and took on 45 percent of the investment itself.
The process for agreeing financing built in significant extra time: the negotiations over Hinkley Point C’s financing took four years, while designing and agreeing the funding mechanism for Sizewell C took six.
By this stage, Britain had the most conservative regulatory regime for nuclear power in the world, excluding countries where the technology is banned outright. Under this system, designs considered safe in other countries must go through extensive UK-specific modifications.
For example, Chinese, French, and Finnish regulators judged that the two separate digital instrumentation and control systems in the reactor design used at Hinkley Point C offered sufficient redundancy in the event of a software fault. The UK regulator, by contrast, required an entirely new analog system to be designed from scratch. That system took 12 years to develop, approve, and build, and required an additional 76 equipment cabinets in each of the power station’s two units, along with a redesign of a number of auxiliary buildings. In all, 7,000 UK-specific design changes meant Hinkley Point C required 35 percent more steel and 25 percent more concrete.
Since Sizewell B, the UK’s planning and environmental laws have tightened significantly. During the golden age of British nuclear, a developer needed to secure a single approval to start building. By contrast, modern developers must assemble reams of parallel consents from different public bodies, each operating to its own rules and timetables. The same vetocracy that has slowed the construction of roads, airport runways, and tunnels has clamped down on nuclear with a vengeance.
These rules would be burdensome in any case, but the financing model for nuclear compounds them by failing to incentivize cost control. The model for Sizewell C, for example, means that the cost of overruns is ultimately borne by a mixture of the billpayer and the taxpayer, not the developer. It also means that developers have little incentive to challenge overly cautious rulings from regulators: there is a formal appeals process against nuclear site licensing decisions, but it goes almost entirely unused.
The twilight of the technocrats
When Queen Elizabeth II flicked the switch at Calder Hall, an electronic dial immediately lit up showing the number of kilowatts the station was generating. Years later, one of the engineers discovered that this was powered by a man turning a handle. Seemingly impressive, but opaque and held together by improvised solutions, this set the tone for the British nuclear industry.
Every successful nuclear buildout, whether the Magnox reactors, France in the 1980s, or China today, has shared the same ingredients: unambiguous political backing, regulators incentivized to approve promptly, predictable demand, and a supportive or indifferent public. Between 1965 and 1995, every one of those pillars crumbled in Britain. The scientists and engineers who ran the program, and the political class that provided such weak oversight, were at times victims of bad luck, but largely had themselves to blame. When interest in nuclear power revived in the 2000s, the government, with the buy-in of the private sector, tacitly accepted that it would always be uneconomical, giving rise to a self-fulfilling prophecy.
At the start of 2025, the UK government commissioned an independent taskforce to provide recommendations on how to reverse rising costs and lengthening timelines. The taskforce’s report, which the government accepted in full, concluded that the sector is blighted by ‘a culture of resignation among dutyholders and regulators who simply accept that nuclear projects must be slow and expensive’. Adopting the taskforce’s more radical proposals would go some way to restoring the ingredients for nuclear success in Britain.
