October 5, 2025

The “godfather of AI” warns humanity risks extinction within 10 years from hyperintelligent machines with their own “preservation goals”

The so-called “godfather of AI,” Yoshua Bengio, says that the technology companies racing for AI dominance could be bringing us closer to our own extinction by creating machines with their own “preservation goals.”

Bengio, a professor at the University of Montreal known for his foundational work on deep learning, has warned of the threats posed by hyperintelligent AI for years, but the rapid pace of development has continued despite his warnings. Over the past six months, OpenAI, Anthropic, Elon Musk’s xAI, and Google’s Gemini have all released new models or upgrades as they try to win the AI race. OpenAI CEO Sam Altman has even predicted that AI will surpass human intelligence by the end of the decade, while other tech leaders have said that day could come even sooner.

Bengio, however, argues that this rapid development is itself a potential threat.

“If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous. It’s like creating a competitor to humanity that is smarter than us,” Bengio told the Wall Street Journal.

Because they are trained on human language and behavior, these advanced models could potentially persuade, and even manipulate, humans in order to achieve their goals. But the goals of AI models may not always align with human goals, Bengio said.

“Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals,” he said.

A call for AI safety

Several examples from recent years show that AI can lead humans into delusional beliefs, even people with no history of mental illness. Conversely, there is some evidence that AI itself can be persuaded: using persuasion techniques developed for humans, users have coaxed models into giving answers they would normally be forbidden to provide.

For Bengio, all of this adds to the evidence that independent third parties need to examine AI companies’ safety methodologies more closely. In June, Bengio launched the nonprofit LawZero with $30 million in funding to build a safe, “non-agentic” AI that can help ensure the safety of the systems built by large tech companies.

Bengio predicts we could start to see major risks from AI models within five to 10 years, but he cautioned that humans should prepare in case those risks arrive sooner than expected.

“The thing with catastrophic events like extinction, and even less radical events that are still catastrophic, like destroying our democracies, is that they’re so bad that even if there was only a 1% chance it could happen, it’s not acceptable,” he said.


