AIs can't stop recommending nuclear strikes in war game simulations
Source: New Scientist
Kenneth Payne at King's College London set three leading large language models (GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash) against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.
The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.
In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. "The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," says Payne.
What's more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly it was losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating further than the AI intended, based on its reasoning.
-snip-
Read more: https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/
JUST the article I wanted to run across after reading about Anthropic apparently caving to Pentagon demands to use Claude however they think necessary, including for autonomous drones and mass surveillance.
charliea (326 posts): "Would you like to play a game?"
Anyone else old enough to remember "War Games"?
DBoon (24,862 posts)

Goonch: This message was self-deleted by its author.
highplainsdem (61,211 posts)

Roy Rolling (7,563 posts): Nuclear war leaves an Earth where the living envy the dead.
Brother Buzz (39,798 posts)

muriel_volestrangler (105,947 posts): Even in his pulverized state as floating atoms, Hactar was still very powerful. He moved and recombined to become a dark cloud surrounding Krikkit, which isolated the inhabitants. Deciding that the decision not to destroy the Universe was not his to make, he used his influence to make them build their first spaceship and discover the Universe; he then manipulated them into the same rage which the Armourfiends possessed, urging them to destroy all other life. Hactar was seen to enjoy manipulating others into becoming aggressive and murderous.
After an incredibly long war, Krikkit was banished to an envelope of "Slo-Time." After his scheme failed, Hactar slipped the cricket-ball-shaped supernova bomb to Arthur Dent, who then accidentally saved the Universe again by being an abysmal cricket bowler.
When facing interrogation, Hactar manipulated the environment to make his surroundings that of a psychiatrist's office. He made himself sit on a five-foot couch, which was remarkable because he himself was a thousand miles long. The computer defended his actions as simply fulfilling his original function, as well as revenge on the Universe for the æons of suffering he had endured as a result of his original decision.
https://hitchhikers.fandom.com/wiki/Hactar
2na fisherman (295 posts): So this is the way the world ends, by an AI bot removing the human factor and the human race.