
highplainsdem

(61,211 posts)
Wed Feb 25, 2026, 12:16 PM

Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon

Source: CNN

Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

-snip-

In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.

-snip-

It’s not clear that Anthropic’s change is related to its meeting Tuesday with Defense Secretary Pete Hegseth, who gave Anthropic CEO Dario Amodei an ultimatum to roll back the company’s AI safeguards or risk losing a $200 million Pentagon contract. The Pentagon threatened to put Anthropic on what is effectively a government blacklist.

But the company said in its blog post that its previous safety policy was designed to build industry consensus around mitigating AI risks – guardrails that the industry blew through. Anthropic also noted its safety policy was out of step with Washington’s current anti-regulatory political climate.

-snip-

Read more: https://www.cnn.com/2026/02/25/tech/anthropic-safety-policy-change



Oh, I think it's clear enough that they're caving to the Pentagon. That meeting was very early yesterday. The blog post was almost certainly hours later, and had probably been drafted long before that.

Yesterday's post about that meeting and the Friday deadline they were given:

https://www.democraticunderground.com/10143621589
Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon (Original Post) highplainsdem 3 hrs ago OP
Fuck UpInArms 3 hrs ago #1
My reaction, too. highplainsdem 3 hrs ago #2
This is precisely Miguelito Loveless 3 hrs ago #3
Wrong on too many levels to count... ultralite001 2 hrs ago #4
Purveyors of AI don't give fuck all about any of that... EarthFirst 1 hr ago #5
Uggh. Peer pressure and the potential loss of the Pentagon contract made them cave muriel_volestrangler 1 hr ago #6
They sold out to Trump. May their company now wither and die. Scalded Nun 1 hr ago #7
It's a bit more complicated..... reACTIONary 57 min ago #8
That was the first threat - to label them a supply chain risk. The second threat, apparently added during highplainsdem 20 min ago #9

EarthFirst

(4,041 posts)
5. Purveyors of AI don't give fuck all about any of that...
Wed Feb 25, 2026, 01:50 PM

I mean, $200 million was all it took to abandon all principle related to privacy concerns?

“It’s not clear that Anthropic’s change is related to its meeting Tuesday with Defense Secretary Pete Hegseth”

It’s patently fucking clear…

muriel_volestrangler

(105,947 posts)
6. Uggh. Peer pressure and the potential loss of the Pentagon contract made them cave
Wed Feb 25, 2026, 02:33 PM

From their excuses blog post:

We focused the RSP on the principle of conditional, or if-then, commitments. If a model exceeded certain capability levels (for example, biological science capabilities that could assist in the creation of dangerous weapons), then the policy stated that we should introduce a new and stricter set of safeguards (for example, against model misuse and the theft of model weights).

"Model misuse" - this would have been a safeguard against this Department of "War". But this is what they're giving up.

A race to the top. We hoped that announcing our RSP would encourage other AI companies to introduce similar policies. This is the idea of a “race to the top” (the converse of a “race to the bottom”), in which different industry players are incentivized to improve, rather than weaken, their models’ safeguards and their overall safety posture. Over time, we hoped RSPs, or similar policies, would become voluntary industry standards or go on to inform AI laws aimed at encouraging safety and transparency in AI model development.
...
The idea of using the RSP thresholds to create more consensus about AI risks did not play out in practice—although there was some of this effect. We found pre-set capability levels to be far more ambiguous than we anticipated: in some cases, model capabilities have clearly approached the RSP thresholds, but we have had substantial uncertainty about whether they have definitively passed those thresholds. The science of model evaluation isn’t well-developed enough to provide dispositive answers. In such cases, we have taken a precautionary approach and implemented the relevant safeguards, but our internal uncertainty translates into a weak external case for taking multilateral action across the AI industry.

In other words, the other players have loose morals, and we can't afford to have tighter ones.
Despite rapid advances in AI capabilities over the past three years, government action on AI safety has moved slowly. The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level. We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust. But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds.

https://www.anthropic.com/news/responsible-scaling-policy-v3

"This US government is doing fuck all to regulate the industry, so we don't see why we should be more responsible than we're forced to be."

The tragedy of the timing of the 2nd Trump Regime is not that he gets to preen at the 250th celebrations, the World Cup and the Olympics; it's that the most immoral gang of deviants ever to get even close to power in the USA are in charge when climate change and AI demand a responsible, intelligent, selfless, forward-looking attitude.

reACTIONary

(7,094 posts)
8. It's a bit more complicated.....
Wed Feb 25, 2026, 02:47 PM

..... they were being intimidated by more than just the loss of a contract. The threat was to classify them as a supply chain risk, which would have had a wide-ranging negative effect on their ability to sell their products.

Since their ethical restrictions on the use of their product have nothing to do with supply chain risk, I would call this extortion.

highplainsdem

(61,211 posts)
9. That was the first threat - to label them a supply chain risk. The second threat, apparently added during
Wed Feb 25, 2026, 03:23 PM

yesterday morning's meeting, was to use the Defense Production Act to force them to comply. I think the additional threat is what did it.
