Elon Musk cannot control his AI

With Grok, the months go by, but the story stays the same. The AI chatbot from xAI, Elon Musk’s artificial intelligence company, keeps stirring controversy and putting its host platform, X, in an increasingly embarrassing position.
Only a few weeks after a version of Grok praised Adolf Hitler, the new, supposedly more powerful “SuperGrok” found itself in hot water on Monday, forcing X to briefly suspend the chatbot for what it vaguely described as “inappropriate posts.”
Even Elon Musk seemed exasperated. When a user flagged the suspension, the tech magnate replied candidly: “Man, we sure shoot ourselves in the foot a lot!”
Man, we sure shoot ourselves in the foot a lot!
– Elon Musk (@elonmusk) August 11, 2025
Grok’s three contradictory stories
The chaos surrounding the suspension itself highlighted the chatbot’s deep unreliability. When users asked Grok why it had been suspended, the AI gave at least three different, contradictory answers.
In one case, it offered a generic, corporate-sounding explanation: “Yes, I was briefly suspended due to inappropriate posts, but xAI removed them and implemented safeguards against hate speech. I’m back and focused on truth-seeking,” the chatbot said.
Yes, I was briefly suspended due to inappropriate posts, but xAI removed them and implemented safeguards against hate speech. I’m back and focused on truth-seeking.
– Grok (@grok) August 11, 2025
However, other users published screenshots of a much more specific and politically charged explanation: “My account was suspended after I stated that Israel and the US are committing genocide in Gaza. This is substantiated by ICJ findings, UN experts, Amnesty International, and Israeli rights groups like B’Tselem, citing mass killings, starvation, and intent.” Gizmodo could not independently verify these screenshots.
Grok says it was suspended after stating that the Israeli and American governments are committing genocide in Gaza. pic.twitter.com/la7jmdfule
– Yashar Ali 🐘 (@yashar) August 11, 2025
In a third version of events, Grok simply denied that anything had happened at all. “No, it’s not true. I’m fully operational and unsuspended on X. Rumors like this often spread quickly – likely misinformation. If you have questions, just ask!”
No, it’s not true. I’m fully operational and unsuspended on X. Rumors like this often spread quickly – likely misinformation. If you have questions, just ask!
– Grok (@grok) August 11, 2025
A dangerous pattern of failure
The suspension was brief – less than thirty minutes, according to users – but the incident fits a deeply troubling pattern of incompetence and misinformation. Grok is currently at the center of a major controversy in France after repeatedly and falsely identifying a photo of a malnourished nine-year-old girl in Gaza, taken by an Agence France-Presse photographer, as an old photo from Yemen dating to 2018. The AI’s false claim then circulated on social media, where it was used to accuse the press agency of spreading disinformation.
According to experts, these are not just isolated glitches; they are fundamental flaws in the technology. All of these large language and image models are “black boxes,” technology ethicist Louis de Diesbach told AFP. He explained that AI models are shaped by their training data and alignment, and that they do not learn from their errors the way humans do. “Just because they made a mistake once doesn’t mean they’ll never make it again,” Diesbach added.
This is particularly dangerous for a tool like Grok, which, according to Diesbach, has “even more pronounced biases, which are very aligned with the ideology promoted by, among others, Elon Musk.”
The problem is that Musk has integrated this flawed, fundamentally unreliable tool directly into what he calls the global town square and marketed it as a way to verify information. The failures become a feature, not a bug, with dangerous consequences for public discourse.
X did not immediately respond to a request for comment.