Containment Before the Fallout: What AI Can Learn from Nuclear Safety
We used to sell radium in toothpaste. Let’s not make the same mistake with algorithms.
Reminder: this Substack is opinionated, not official. I write to provoke thought and spark conversation—not to issue policy memos. Interpret accordingly. Comment below.
When I tell people I work in nuclear science and engineering, they usually ask if I’m worried about the apocalypse.
I tell them no. Because we plan for possible failure.
We model reactor failures down to the one-in-a-million chance. We build multiple layers of containment. We drill for the worst-case scenario. We have entire federal agencies whose only job is to ensure that we design nuclear systems to be safe even in the most improbable cases.
Artificial intelligence, by contrast, is being deployed into financial systems, electric grids, weapons command networks, and medical diagnostics without even a functioning off-switch.
We’re watching the rise of a transformative technology with world-altering implications. But unlike nuclear, there’s no Nuclear Regulatory Commission to find failure modes. No IAEA to check for bad actors. No real safeguards. Just glossy press releases, breathless hype, and the occasional “pause letter” begging the AI companies to self-regulate.
This isn’t safety.
From Fission to Foundation Models: A Tale of Two Existential Risks
Nuclear technology changed the trajectory of global power in the 20th century, and is poised to do that again this century. It forced the creation of international treaties, export controls, and entire institutions designed to prevent misuse, accidents, and geopolitical escalation.
It also killed people. Chernobyl was not science fiction. That reactor accident was a failure of governance, engineering, and human systems. So we learned. Slowly, painfully, and thoroughly. We also started to overregulate to the point of stagnation.
Today, building a nuclear reactor in the U.S. requires licensing, multi-phase safety review, public input, and continual oversight. It’s expensive and frustrating, but that’s the price of managing power at that scale.
Now compare that to AI. Want to train a system that can break encryption, generate fake science, manipulate financial markets, or flood the internet with synthetic disinformation? You don’t need a license. You need a credit card and a GPU cluster.
The Real Danger Isn’t Skynet—It’s Unchecked Systems
Let’s put aside the sci-fi narrative of evil robots taking over. The real risk from AI is boring: systems that are poorly aligned with reality, deployed at scale, and too complex for humans to supervise. How many articles have you seen lately that show practically no understanding of how these LLMs actually work under the hood?
It’s not that AI is too smart. It’s that it’s just smart enough to do damage while still being dumb enough to make mistakes we can’t anticipate. And it’s being integrated into everything from logistics to defense to energy markets.
Sound familiar? In nuclear, we call this normal accident theory: the idea that in tightly coupled, high-complexity systems, failures aren’t just possible—they’re inevitable.
The difference is that in nuclear, we’ve developed tools to model and mitigate that risk: probabilistic risk assessment, defense-in-depth architectures, redundancy, containment, and operational governance.
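To make the defense-in-depth point concrete, here’s a back-of-the-envelope sketch in Python. The layer names and failure probabilities are invented for illustration; they are not real reactor data.

```python
# Back-of-the-envelope defense-in-depth arithmetic.
# All layer names and failure probabilities below are invented for illustration.

layers = {
    "fuel cladding": 1e-2,          # hypothetical chance this barrier fails on demand
    "pressure vessel": 1e-3,
    "containment building": 1e-3,
}

# If the layers fail independently, a release requires ALL of them to fail,
# so the probabilities multiply.
p_all_fail = 1.0
for name, p_fail in layers.items():
    p_all_fail *= p_fail

print(f"P(every barrier fails) ~ {p_all_fail:.0e}")  # ~1e-8 with these numbers
```

The catch, and this is exactly what normal accident theory warns about, is that tightly coupled systems rarely give you truly independent layers; a common cause can take out several at once. That’s why the modeling, the redundancy, and the governance all have to exist together.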
In AI? Most developers are still debating whether it's their job to think about failure modes at all.
What AI Can Learn from Nuclear: Because We’ve Done This Before
In the last post, we took a glowing tour through the “Radiation Tech Hall of Fame”—a parade of nuclear-era products, some so reckless you’d think they were made up. Radioactive dinnerware. Plutonium pacemakers. X-ray machines in shoe stores. Every single one was sold as “the future.”
Classify the risk
Nuclear technology is classified by enrichment level, material type, and potential for weaponization. AI needs a taxonomy of model capability and deployment sensitivity.
Nuclear Past: Think about Radithor, radium-laced water sold as a health tonic. There was no classification, no controlled distribution. It was “just another product.”
By the time Eben Byers’ jaw fell off, it was too late: some tech isn’t for casual use.
For AI: We need a taxonomy of models and risk tiers based on capabilities, compute, access, and downstream integration. Not all models are equal. Not all risks are abstract.
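What might such a taxonomy look like? Here’s a deliberately crude sketch; the tier names, compute thresholds, and classification rule below are invented for illustration, not drawn from any existing framework.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # Invented tiers, loosely analogous to how nuclear material is graded.
    MINIMAL = 1    # narrow, low-capability models
    ELEVATED = 2   # general-purpose models with broad access
    HIGH = 3       # frontier-scale models wired into critical systems


@dataclass
class ModelProfile:
    name: str
    training_compute_flops: float   # crude proxy for capability
    open_weights: bool              # access dimension
    critical_integration: bool      # grid, finance, defense, medical use?


def classify(m: ModelProfile) -> RiskTier:
    """Toy rule: more compute, broader access, or critical integration raises the tier."""
    if m.critical_integration or m.training_compute_flops > 1e26:
        return RiskTier.HIGH
    if m.open_weights or m.training_compute_flops > 1e24:
        return RiskTier.ELEVATED
    return RiskTier.MINIMAL


print(classify(ModelProfile("demo-model", 5e25, open_weights=True,
                            critical_integration=False)))   # RiskTier.ELEVATED
```

The exact thresholds are debatable, and should be debated in public the way nuclear classification was. The point is that capability, access, and downstream integration are separable axes a regulator can actually measure.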
License the high-consequence stuff
We don’t let just anyone operate a nuclear reactor. Why are we letting just anyone spin up foundation models that can synthesize bioweapons, crack passwords, or influence elections?
Nuclear Past: The Gilbert U-238 Atomic Energy Lab was literally a home toy kit with uranium ore. Marketed to kids. With cloud chambers and Geiger counters. Cool? Yes. Responsible? No.
Eventually, adults intervened.
For AI: If your model can synthesize new drugs, write code that bypasses firewalls, or manipulate public opinion, you should need a license. And if you train the model? You should be legally accountable for where and how it’s deployed.
Create an AI containment framework
In nuclear, if a core melts down, you’ve got engineered containment. In AI, what happens when a model exploits its environment or begins optimizing in unsafe ways? Who's in the loop?
Nuclear Past: Shoe-fitting X-ray machines were never contained. Kids were blasted with radiation in public spaces. No shielding. No control. The machines looked like fun arcade cabinets. Good old days. Today, I cannot operate our small X-ray source unless the door interlocks are satisfied. We take radiation seriously in our labs: it stays contained, away from people and sensitive equipment.
For AI: We need containment principles (a rough sketch in code follows this list):
Sandboxed training environments
Access controls based on risk tier
Emergency shutdown protocols
Air gaps for high-risk deployments
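Here’s that rough sketch: a toy mapping from risk tier to required controls, with an approval check in the spirit of a door interlock. Every name below is invented for illustration; this is not any regulator’s actual schema.

```python
# Toy containment policy: which controls each (invented) risk tier must have in place.
CONTAINMENT_POLICY = {
    "MINIMAL":  set(),
    "ELEVATED": {"sandboxed_training", "access_control", "kill_switch"},
    "HIGH":     {"sandboxed_training", "access_control", "kill_switch", "air_gap"},
}


def approve_deployment(tier: str, controls_in_place: set) -> bool:
    """Interlock-style check: every control the tier requires must be in place."""
    return CONTAINMENT_POLICY[tier] <= controls_in_place


# A HIGH-tier deployment that skips the air gap gets refused, full stop.
print(approve_deployment("HIGH", {"sandboxed_training", "access_control",
                                  "kill_switch"}))   # False
```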
Build independent oversight with teeth
The NRC, for all its bureaucratic headaches, is independent, accountable, and technically competent (well, independent at least for now). AI governance today is mostly marketing, think tank reports, and voluntary pledges.
Nuclear Past: The Radium Girls exposed one of the worst failures of worker safety in American history. What changed things wasn’t industry ethics—it was lawsuits, public outcry, and new regulations.
For AI: We need an NRC/IAEA equivalent. A neutral body with:
Authority to halt deployment
Technical rigor to evaluate safety
Independence from profit motives
International coordination, like the IAEA
The lesson from the radiation age isn’t just “we made mistakes.”
It’s that we learned to institutionalize caution without killing innovation.
Let’s not wait for the AI version of the Radium Girls, the Radithor scandal, or the glowing dinner plate on your desk. Self-warming coffee mugs are ok.
We need containment before the fallout.
If you're building powerful AI, demand classification and licensing.
If you're funding it, stop pretending frontier risk is “someone else’s job.”
If you're in government, look at the NRC playbook and ask why AI has no equivalent. OK, maybe something more efficient than a five-year review horizon.
And if you're part of a technical field—especially nuclear, aerospace, biosecurity—get in the room. We’ve done this before.
PS. What’s Out There Already?
Before we finish, I wanted to list a few initiatives and organizations for completeness. Some are already working toward safety guidelines, risk management, and auditing processes.
NIST AI Safety Institute Consortium (AISIC): Over 280 organizations collaborating on guidelines and measurement standards for trustworthy AI deployment
EU AI Act + Voluntary Code of Practice: A legally binding high-risk framework with transparency, copyright, safety, and enforcement features. This was unveiled yesterday (July 10th)!
California Frontier Policy Report: Advocates third-party evaluation, whistleblower protections, and risk-tiered regulation. Released last month; I think it's an important step forward on transparency, evaluation, and incident tracking. It works at the framework level, focused on reporting and transparency rather than engineered safeguards, and it leans on voluntary compliance. We still need an independent body with the authority to halt deployment when a project strays outside its safety envelope.
International Network of AI Safety Institutes (AISI): Formed after the Seoul AI Summit; currently organizing objectives and deliverables. Scroll through their objectives: is this the beginning of a “safety culture”? (The quotes are a nudge to get nuclear engineers thinking about an ALARA equivalent for AI.)
What are your thoughts on how we apply nuclear-grade safety thinking to AI? Leave a comment below.
Nuclear is an excellent analogy, as AI could change our lives and the world order as much as the nuclear age has!