October 5, 2025

Researchers built a social media platform where every user was an AI. The bots soon found themselves at war



Social platforms like Facebook and X exacerbate political and social polarization, but they don't create it. A recent study by researchers at the University of Amsterdam in the Netherlands placed AI chatbots in a stripped-down social media structure to see how they interacted with one another, and found that even without the invisible hand of an algorithm, the bots tended to organize themselves around their pre-assigned affiliations and self-sort into echo chambers.

The study, a preprint of which was recently published on arXiv, took 500 AI chatbots powered by OpenAI's GPT-4o large language model and assigned each of them a specific persona. The bots were then let loose on a simple social media platform that had no ads and no algorithm surfacing discovered content or serving recommended posts into a user's feed. The chatbots were tasked with interacting with each other and with the content available on the platform. Across five different experiments, each of which involved the chatbots taking 10,000 actions, the bots tended to follow other users who shared their own political beliefs. The study also found that the users who posted the most partisan content tended to attract the most followers and reposts.
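To make the dynamic concrete, here is a minimal sketch of what a simulation loop like this could look like. It is illustrative only: a fixed partisan score and a simple alignment rule stand in for GPT-4o personas, and every name and number below is an assumption of this sketch, not taken from the paper.

import random
from collections import defaultdict

# Toy stand-in for the study's setup. Each agent gets a fixed partisan
# leaning instead of a full GPT-4o persona; a distance rule stands in
# for the language model's judgment. All values here are illustrative.
N_AGENTS = 50          # the study used 500 bots
N_ACTIONS = 2000       # the study ran 10,000 actions per experiment

random.seed(0)
leaning = [random.uniform(-1, 1) for _ in range(N_AGENTS)]  # -1..1 partisanship
follows = defaultdict(set)
posts = []  # (author, stance) pairs, newest last

for _ in range(N_ACTIONS):
    agent = random.randrange(N_AGENTS)
    if not posts or random.random() < 0.3:
        # Post something roughly as partisan as the agent's own leaning.
        posts.append((agent, leaning[agent]))
        continue
    # Read a recent post from a plain chronological feed (no algorithm).
    author, stance = random.choice(posts[-20:])
    if author == agent:
        continue
    # Follow authors whose stance sits close to the agent's own leaning.
    if abs(stance - leaning[agent]) < 0.5:
        follows[agent].add(author)

# Homophily check: how often do agents follow someone on their own side?
same_side = total = 0
for agent, followed in follows.items():
    for author in followed:
        total += 1
        same_side += (leaning[agent] * leaning[author]) > 0
print(f"same-side follows: {same_side}/{total}")

Even with this crude alignment rule, most follow relationships land on the follower's own political side, which is the echo-chamber effect the study reports emerging at much larger scale with actual language models.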

The results don't exactly reflect well on us, since the chatbots were designed to replicate how humans interact. And of course, none of this is truly independent of algorithmic influence. The bots were trained on human interactions that have been shaped for decades now by how we behave online in a world dominated by algorithms. They are emulating the already algorithm-poisoned versions of ourselves, and it's not clear how we undo that.

To combat this self-selected polarization, the researchers tried a handful of solutions, including offering a chronological feed, devaluing viral content, hiding follower and repost counts, hiding user bios, and amplifying opposing views. (That last one the researchers had success with in a previous study, which managed to produce high engagement and low toxicity on a simulated social platform.) In the simulation that hid user bios, the partisan divide actually worsened, and extreme posts received even more attention.
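Each of those interventions is essentially a change to how the feed is built, which makes them easy to picture in code. This sketch expresses a few of them as options on the toy simulation above; the function and flag names are my own, not the study's.

# Illustrative only: some of the study's interventions expressed as
# feed-construction options. Posts are (author, stance) pairs, and
# repost_counts maps a post to its number of reposts.
def build_feed(posts, viewer_leaning, repost_counts,
               downrank_viral=False, amplify_opposing=False,
               hide_counts=False):
    """Return the 20 posts a viewer sees, with or without counts."""
    feed = posts[-100:]  # chronological baseline: newest posts, in order
    if downrank_viral:
        # Devalue viral content: least-reposted posts come first.
        feed = sorted(feed, key=lambda p: repost_counts.get(p, 0))
    if amplify_opposing:
        # Amplify opposing views: posts whose stance opposes the
        # viewer's leaning sort to the front.
        feed = sorted(feed, key=lambda p: p[1] * viewer_leaning)
    if hide_counts:
        # Hiding repost numbers changes what agents can see,
        # not the ordering, so strip the counts from the view.
        return [(author, stance, None) for author, stance in feed[:20]]
    return [(a, s, repost_counts.get((a, s), 0)) for a, s in feed[:20]]

The point of framing it this way is that every fix the researchers tried operates on presentation, not on the agents themselves, which may be why none of them changed the underlying sorting behavior much.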

It may be that social media as a structure is simply untenable for humans to navigate without it reinforcing our worst instincts and behaviors. Social media is a funhouse mirror for humanity: it reflects us, but in the most distorted way. It's not clear there are lenses strong enough to correct how we see each other online.


