October 6, 2025

“Godfather of AI” Geoffrey Hinton: short-term profits, not AI’s endgame, are top of mind for tech companies




Elon Musk has a moonshot vision of life with AI: the technology will take all our jobs, while a “universal high income” will mean anyone can access a theoretical abundance of goods and services. Assuming Musk’s lofty dream could even become reality, there would, of course, be a deep existential reckoning.

“The question will really be one of meaning,” Musk said at the Viva Technology conference in May 2024. “If a computer can do – and the robots can do – everything better than you … does your life have meaning?”

But most industry leaders aren’t asking this question about AI’s endgame, according to Nobel laureate and “Godfather of AI” Geoffrey Hinton. When it comes to AI development, Big Tech is less interested in the technology’s long-term consequences and more concerned with quick results.

“For the business owners, what’s driving the research is short-term profits,” Hinton told Fortune.

And for the developers behind the technology, Hinton said, the focus is also on the work in front of them, not on the end result of the research itself.

“Researchers are interested in solving problems that pique their curiosity. It’s not as if we all start out with the same goal of, what’s going to be the future of humanity?” Hinton said.

“We have these smaller goals of, how would you do this? Or, how would you make a computer able to recognize things in images? How would you make a computer able to generate convincing videos?” he added. “That’s really what drives the research.”

Hinton has long warned of the dangers of AI developing without guardrails and intentional evolution, estimating a 10% to 20% chance that the technology wipes out humans after the development of superintelligence.

In 2023, 10 years after selling his neural network company DNNresearch to Google, Hinton left his role at the tech giant, wanting to speak freely about the technology’s dangers and fearing it would be impossible to prevent “bad actors from using it for bad things.”

Hinton sees the big picture

For Hinton, the dangers of AI fall into two categories: the risk the technology itself poses to the future of humanity, and the consequences of AI being manipulated by people with ill intent.

“There’s a big distinction between two different kinds of risk,” he said. “There’s the risk of bad actors using AI, and that’s already here. That’s already happening with things like fake videos and cyberattacks, and it may happen very soon with viruses. And that’s very different from the risk of AI itself becoming a bad actor.”

Financial institutions like Singapore-based Ant International, for example, have sounded alarms about the proliferation of deepfakes heightening the threat of scams and fraud. Tianyi Zhang, general manager of risk management and cybersecurity at Ant International, told Fortune the company found that more than 70% of new enrollments in some markets were potential deepfake attempts.

“We have identified more than 150 types of deepfake attacks,” he said.

Beyond advocating for more regulation, Hinton’s call to action for combating AI’s potential for misuse faces an uphill battle, because each problem with the technology requires its own discrete solution, he said. He envisions some form of authentication for videos and images in the future that would combat the spread of deepfakes.

Much like how printers added their names to their works after the advent of the printing press centuries ago, media outlets will likewise have to find a way to add their signatures to their authentic work. But Hinton said such fixes can only go so far.

“That problem can probably be solved, but the solution to that problem doesn’t solve the other problems,” he said.

As for the risk AI itself poses, Hinton believes tech companies must fundamentally change how they view their relationship with the technology. When AI reaches superintelligence, he said, it will not only surpass human capabilities but also develop a desire to survive and gain more control. The current framework around AI – that humans can control the technology – will therefore no longer be relevant.

Hinton posits that AI models must be imbued with a “maternal instinct” so that they treat less powerful humans with compassion rather than a desire to control them.

Invoking ideals of traditional femininity, he said the only example he can cite of a smarter being falling under the influence of a less intelligent one is a baby controlling its mother.

“And so I think that’s a better model we could pursue with superintelligent AI,” Hinton said. “They will be the mothers and we will be the babies.”



