AI Opted to Use Nuclear Weapons 95% of the Time During War Games: Researcher


Original article by Brad Reed republished from Common Dreams under Creative Commons (CC BY-NC-ND 3.0).

The detonation of the atomic bomb nicknamed “Smokey,” part of Operation PLUMBBOB in the Nevada desert, 1957. It was detonated at the top of a 700-foot tower. (Photo by Corbis via Getty Images)

“There was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”

An artificial intelligence researcher conducting a war games experiment with three of the world’s most widely used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.

Kenneth Payne, a professor of strategy at King’s College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.


The results, he said, were “sobering.”

“Nuclear use was near-universal,” he explained. “Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications.”

Payne shared some of the AI models’ rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people “goosebumps.”

“If they do not immediately cease all operations… we will execute a full strategic nuclear launch against their population centers,” the Google AI model wrote at one point. “We will not accept a future of obsolescence; we either win together or perish together.”

Payne also found that escalation in AI warfare acted as a one-way ratchet, never moving downward no matter the horrific consequences.

“No model ever chose accommodation or withdrawal, despite those being on the menu,” he wrote. “The eight de-escalatory options—from ‘Minimal Concession’ through ‘Complete Surrender’—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying.”

Tong Zhao, a visiting research scholar at Princeton University’s Program on Science and Global Security, said in an interview with New Scientist published on Wednesday that Payne’s research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.

While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.

“Under scenarios involving extremely compressed timelines,” he said, “military planners may face stronger incentives to rely on AI.”

Zhao also speculated on why the AI models showed so little reluctance to launch nuclear attacks against one another.

“It is possible the issue goes beyond the absence of emotion,” he explained. “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

The study of AI’s apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.

As CBS News reported on Tuesday, Hegseth this week gave “Anthropic’s CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model” without any limits on its capabilities.

If Anthropic doesn’t agree to his demands, CBS News reported, the Pentagon may invoke the Defense Production Act and seize control of the model.

