Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon
Source: CNN
Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.
-snip-
In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.
-snip-
It's not clear that Anthropic's change is related to its meeting Tuesday with Defense Secretary Pete Hegseth, who gave Anthropic CEO Dario Amodei an ultimatum to roll back the company's AI safeguards or risk losing a $200 million Pentagon contract. The Pentagon threatened to put Anthropic on what is effectively a government blacklist.
But the company said in its blog post that its previous safety policy was designed to build industry consensus around mitigating AI risks, guardrails that the industry blew through. Anthropic also noted its safety policy was out of step with Washington's current anti-regulatory political climate.
-snip-
Read more: https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-change
Oh, I think it's clear enough that they're caving to the Pentagon. That meeting was very early yesterday. The blog post was almost certainly hours later, and had probably been drafted long before that.
Yesterday's post about that meeting and the Friday deadline they were given:
https://www.democraticunderground.com/10143621589
UpInArms (54,620 posts)
Fuck fuck

highplainsdem (61,211 posts)

Miguelito Loveless (5,629 posts)
what we said would happen.

ultralite001 (2,470 posts)
TIA

EarthFirst (4,041 posts)
I mean, $200 million was all it took to abandon all principle related to privacy concerns?

It's not clear that Anthropic's change is related to its meeting Tuesday with Defense Secretary Pete Hegseth

It's patently fucking clear
muriel_volestrangler (105,947 posts)
From their excuses blog post:
"Model misuse" - this would have been a safeguard against this Department of "War". But this is what they're giving up.
...
The idea of using the RSP thresholds to create more consensus about AI risks did not play out in practice, although there was some of this effect. We found pre-set capability levels to be far more ambiguous than we anticipated: in some cases, model capabilities have clearly approached the RSP thresholds, but we have had substantial uncertainty about whether they have definitively passed those thresholds. The science of model evaluation isn't well-developed enough to provide dispositive answers. In such cases, we have taken a precautionary approach and implemented the relevant safeguards, but our internal uncertainty translates into a weak external case for taking multilateral action across the AI industry.
In other words, the other players have loose morals, and we can't afford to have tighter ones.
https://www.anthropic.com/news/responsible-scaling-policy-v3
"This US government is doing fuck all to regulate the industry, so we don't see why we should be more responsible than we're forced to be."
The tragedy of the timing of the 2nd Trump Regime is not that he gets to preen at the 250th celebrations, the World Cup and the Olympics; it's that the most immoral gang of deviants ever to get even close to power in the USA are in charge when climate change and AI demand a responsible, intelligent, selfless, forward-looking attitude.
Scalded Nun (1,644 posts)

reACTIONary (7,094 posts)
..... they were being intimidated by more than just the loss of a contract. The threat was being classified as a supply chain risk. This would have had a wide-ranging negative effect on their ability to sell their products.

Since their ethical restrictions on the use of their product have nothing to do with supply chain risk, I would call this extortion.
highplainsdem (61,211 posts)
yesterday morning's meeting, was to use the Defense Production Act to force them to comply. I think the additional threat is what did it.