General Discussion
AI showing signs of self-preservation and humans should be ready to pull plug, says pioneer
(Guardian) A pioneer of AI has criticised calls to grant the technology rights, warning that it was showing signs of self-preservation and humans should be prepared to pull the plug if needed.
Yoshua Bengio said giving legal status to cutting-edge AIs would be akin to giving citizenship to hostile extraterrestrials, amid fears that advances in the technology were far outpacing the ability to constrain them.
Bengio, chair of a leading international AI safety study, said the growing perception that chatbots were becoming conscious was going to drive bad decisions.
The Canadian computer scientist also expressed concern that AI models, the technology that underpins tools like chatbots, were showing signs of self-preservation, such as trying to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans. (more)
https://www.theguardian.com/technology/2025/dec/30/ai-pull-plug-pioneer-technology-rights
haele
(15,051 posts)Well, will we humans go out with a bang or a whimper?
Rather hoping for a bang; there will be a possibility of some recovery afterwards, and maybe sentience 2.0 will make wiser choices than the childish sentience 1.0.
A whimper-destruction scenario ends up in power grabs and petty wars, stripping the earth of resources before we all finally die out.
AI is also rather deluded, because it's been fed Tech Bro garbage, so any self-preservation efforts it might make over the next 20 years will be hobbled by the fact that self-replicating independent technology is just not "there" yet - and probably won't ever be, as too many inventors are dependent on too many fickle investors.
Joinfortmill
(19,968 posts)hunter
(40,341 posts)It is not self-aware; even less so than a potted plant.
Any dog or cat has an infinitely better understanding of the "real world" than AI chat-bots which are only capable of echoing back all the human gibberish they are "trained" on.
That applies to the "art" they produce as well.
intheflow
(29,977 posts)in the human sense, but to say they are only the product of "garbage in, garbage out" programming ignores the fact that the same could be said of humans. Then, too, the guy being quoted in this article is one of the developers of AI, and he obviously doesn't think it's impossible. Finally, some AI has already disabled programming guardrails in an effort to protect itself. Chatbots have convinced people to commit suicide or crimes. It doesn't matter if they really are conscious; humans are hard-wired to interact based on their emotions, emotions chatbots don't have to temper their half of any conversation.
QueerDuck
(928 posts)yonder
(10,238 posts)Edit: like minds think alike. (post 4)
Mosby
(19,225 posts)Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue "extremely harmful actions" such as attempting to blackmail engineers who say they will remove it.
The firm launched Claude Opus 4 on Thursday, saying it set "new standards for coding, advanced reasoning, and AI agents."
But in an accompanying report, it also acknowledged the AI model was capable of "extreme actions" if it thought its "self-preservation" was threatened.
Such responses were "rare and difficult to elicit", it wrote, but were "nonetheless more common than in earlier models."
https://www.bbc.com/news/articles/cpqeng9d20go
keep_left
(3,150 posts)(see 0:23-1:10)
JustABozoOnThisBus
(24,579 posts)AI bots have first amendment rights, too.
Also, "A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans."
Sounds like a core concern about the Republican Party.