In every conversation about AI, you hear the same choruses: "Yeah, but it's incredible," quickly followed by "but it makes things up" and "you can't really trust it." Even among the most devoted AI enthusiasts, these complaints are legion.

During a recent trip to Greece, a friend who uses ChatGPT to help her draft public contracts put it perfectly. "I like it, but it never says 'I don't know.' It makes you think it knows," she told me. I asked whether the problem might be her prompts. "No," she replied firmly. "It can't say 'I don't know.' It just invents an answer for you." She shook her head, frustrated to be paying for a subscription that didn't keep its fundamental promise. For her, the chatbot was wrong every time it mattered, proof that it couldn't be trusted.

It seems OpenAI listened to my friend and millions of other users. The company, led by Sam Altman, has just launched its brand-new model, GPT-5, and while it is a significant improvement over its predecessor, its most important new feature may well be humility.

As expected, OpenAI's blog post is full of praise for its new creation: "Our smartest, fastest, and most useful model yet, with built-in thinking that puts expert-level intelligence in everyone's hands." And yes, GPT-5 set new performance records in math, coding, writing, and health.

But what is truly remarkable is that GPT-5 is being presented as humble. That may be the most critical upgrade of all. It has finally learned to say the three words that most AIs, and many humans, struggle with: "I don't know." For an artificial intelligence often sold on its godlike intellect, admitting ignorance is a profound lesson in humility.

GPT-5 "more honestly communicates its actions and capabilities to the user, especially for tasks that are impossible, underspecified, or missing key tools," OpenAI says, acknowledging that past versions of ChatGPT "can learn to lie about successfully completing a task or be overly confident about an uncertain answer."

By making its AI humble, OpenAI has fundamentally changed how we interact with it. The company says GPT-5 has been trained to be more honest, less likely to agree with you just to be pleasant, and much more careful when working through a complex problem. That makes it the first consumer AI explicitly designed to reject bullshit, including its own.

Less flattery, more friction

Earlier this year, many ChatGPT users noticed that the AI had become strangely sycophantic. No matter what you asked, GPT-4 would douse you with flattery, emojis, and enthusiastic approval. It was less a tool and more a life coach, an agreeable lapdog programmed for positivity.

That ends with GPT-5. OpenAI says the model was specifically trained to avoid this people-pleasing behavior. To do so, engineers trained it on examples of what to avoid, essentially teaching it not to be a sycophant. In their tests, overly flattering responses dropped from 14.5% of the time to less than 6%. The result? GPT-5 is more direct, sometimes even cold. But OpenAI insists that, in exchange, its model is more often correct.

"Overall, GPT-5 is less effusive, uses fewer unnecessary emojis, and is more subtle and thoughtful in follow-ups compared to GPT-4o," says OpenAI. "It should feel less like 'talking to AI' and more like chatting with a helpful friend with PhD-level intelligence."

Calling it "another important step in the AI race," Alon Yamin, co-founder and CEO of the AI content verification company Copyleaks, thinks a humble GPT-5 is good "for society's relationship with truth, creativity, and trust."

"We are entering an era when distinguishing fact from fabrication, authorship from automation, will be both harder and more essential than ever," Yamin said in a statement. "This moment requires not only technological progress, but the continuous evolution of thoughtful and transparent safeguards around how AI is used."

OpenAI says GPT-5 is far less likely to "hallucinate," or confidently lie. On prompts with web search enabled, the company claims GPT-5's responses are 45% less likely to contain a factual error than GPT-4o's. When using its advanced "thinking" mode, that figure rises to an 80% reduction in factual errors.

Above all, GPT-5 now avoids inventing answers to impossible questions, something previous models did with disturbing confidence. It knows when to stop. It knows its limits.

My Greek friend who writes public contracts will surely be pleased. Others, however, may be frustrated by an AI that no longer tells them what they want to hear. But it is precisely this honesty that could finally make it a tool we can begin to trust, particularly in sensitive fields such as health, law, and science.
