October 6, 2025

AI experts urge governments to think about maybe doing something



Everyone seems to recognize that artificial intelligence is a rapidly emerging technology with immense potential for harm if it's built without safeguards, but basically nobody (with the partial exception of the European Union) can agree on how to regulate it. So, rather than trying to chart a clear and narrow path for how AI should be allowed to operate, experts in the field have opted for a new approach: how about we figure out which extreme examples we all agree are bad and just start there?

On Monday, a group of politicians, scientists, and academics appeared before the United Nations General Assembly to announce the Global Call for AI Red Lines, a plea for the world's governments to come together and agree on the broadest of guardrails to prevent "universally unacceptable risks" that could result from the deployment of AI. The group's goal is to have these red lines established by the end of 2026.

The proposal has gathered more than 200 signatures so far from industry experts, political leaders, and Nobel laureates. Former Irish president Mary Robinson and former Colombian president Juan Manuel Santos are on board, as are several Nobel Prize winners. Geoffrey Hinton and Yoshua Bengio, two of the three men commonly known as the "godfathers of AI" for their foundational work in the space, also added their names to the list.

So what are these red lines? Well, that's ultimately up to governments to decide. The call doesn't prescribe or recommend any specific policies, though it does name some examples of what a red line might look like. Banning AI from launching nuclear weapons or from being used in mass surveillance efforts would be potential red lines for AI uses, per the group, while prohibiting the creation of AI that cannot be terminated by human intervention would be a possible red line for AI behavior. But they are very clear: none of this is set in stone, these are just examples, you can make your own rules.

The only thing the group offers concretely is that any global agreement should be built on three pillars: "a clear list of prohibitions; robust, auditable verification mechanisms; and appointment of an independent body established by the Parties to oversee implementation."

The details, though, are left for governments to work out. And that's kind of the hard part. The call recommends that countries host summits and working groups to figure it all out, but there are surely plenty of competing motivations at play in those conversations.

The United States, for example, has already committed to not letting AI control nuclear weapons (an agreement made under the Biden administration, so Lord knows if it's still in play). But recent reports have indicated that parts of the Trump administration's intelligence community are already annoyed that certain AI companies won't let them use their tools for domestic surveillance efforts. So would America sign on to such a proposal? Maybe we'll find out by the end of 2026 … if we make it that long.


