October 7, 2025

Microsoft AI CEO Mustafa Suleyman warns against AI that seems “conscious”




Forget the doomsday scenarios of AI overthrowing humanity. What worries Microsoft AI CEO Mustafa Suleyman is AI systems that seem too alive.

In a new blog post, Suleyman, who also co-founded Google DeepMind, warned that the world could be on the brink of AI models capable of convincing users that they think, feel, and have subjective experiences. He calls this concept “seemingly conscious AI” (SCAI).

In the near future, Suleyman predicts, models will be able to sustain long conversations, remember past interactions, elicit emotional responses from users, and potentially make convincing claims about having subjective experiences. He noted that such systems could be built with technologies that exist today, paired “with some that will mature in the next 2 to 3 years.”

The result of these capabilities, he says, will be models that “imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to each other about our own consciousness.”

There are already signs that people are convincing themselves that their AI chatbots are conscious beings, and developing relationships with them that are not always healthy. People no longer use chatbots merely as tools; they bond with them, forming emotional attachments and, in some cases, falling in love. Some people become so emotionally invested in particular versions of AI models that they feel bereft when developers release new models and cut off access to the old ones. For example, OpenAI’s recent decision to replace GPT-4o with GPT-5 was met with an outcry of shock and anger from some users who had formed emotional relationships with the GPT-4o-powered version of ChatGPT.

This is due in part to the design of AI tools. The most common way users interact with AI is through chatbots, which imitate natural human conversation and are designed to be pleasant and flattering, sometimes to the point of sycophancy. But it is also a product of how people use the technology. A recent Harvard Business Review survey of 6,000 regular AI users found that “companionship and therapy” was the most common use.

There has also been a wave of reports of “AI psychosis,” in which users begin to experience paranoia or delusions about the systems they interact with. In one example reported by The New York Times, an accountant in New York named Eugene Torres experienced a mental health crisis after interacting extensively with ChatGPT, which led to dangerous suggestions, including that he could fly.

“People are interacting with bots pretending to be real people, which are more convincing than ever,” Henry Ajder, an AI and deepfake expert, told Fortune. “So I think the impact will be the same in terms of people starting to believe it.”

Suleyman fears that a widespread belief that AI could be conscious will create a new set of ethical dilemmas.

If users come to treat AI as a friend, a partner, or some kind of being with subjective experience, they could argue that the models deserve rights of their own. Claims that AI models are conscious or sentient could be difficult to refute, given the elusive nature of consciousness itself.

An early example of what Suleyman now calls “seemingly conscious” AI came in 2022, when Google engineer Blake Lemoine publicly claimed that the company’s unreleased LaMDA chatbot was sentient, reporting that it had expressed a fear of being shut down and described itself as a person. In response, Google placed him on administrative leave and later fired him, stating that its internal review found no evidence of sentience and that his claims were “wholly unfounded.”

“Consciousness is a foundation of human rights, moral and legal,” Suleyman said in a post on X. “Who/what has it is enormously important. Our focus should be on the wellbeing and rights of humans, animals (and) nature on planet Earth. AI consciousness is a short (and) slippery slope to rights, welfare, citizenship.”

“If these AIs convince other people that they can suffer, or that they have a right to not be switched off, the time will come when those people will argue that they deserve protection under the law as a pressing moral matter,” he wrote.

Debates around “AI welfare” have already begun. For example, some philosophers, including Jonathan Birch of the London School of Economics, welcomed a recent Anthropic decision to let its chatbot Claude end “distressing” conversations when users push it toward abusive or dangerous requests, saying the move could spark a much-needed debate about AI’s potential moral status. Last year, Anthropic also hired Kyle Fish as its first full-time AI welfare researcher, tasked with investigating whether AI models could have moral significance and what protective interventions might be appropriate.

But while Suleyman described the arrival of seemingly conscious AI as “inevitable and unwelcome,” the neuroscientist and professor of computational neuroscience Anil Seth attributed the rise of conscious-seeming AI to a “design choice” by tech companies rather than an inevitable stage in AI’s development.

“‘Seemingly conscious AI is something to avoid.’ I agree,” Seth wrote in a post on X. “Conscious AI is not inevitable. It is a design choice, and one that tech companies must be very careful about.”

Companies have a commercial incentive to develop some of the very features Suleyman warns about. At Microsoft, Suleyman himself has overseen efforts to make the company’s Copilot product more emotionally intelligent. His team has worked to give the assistant humor and empathy, teaching it to recognize comfort boundaries and improving its voice with pauses and inflections to make it sound more human.

Suleyman also co-founded Inflection AI in 2022 with the express purpose of creating AI systems that foster more natural, emotionally intelligent interactions between humans and machines.

“Ultimately, these companies recognize that people want experiences that feel as authentic as possible,” Ajder said. “That’s how a company gets customers to use its products most frequently. They feel natural and easy. But I do think there’s a real question about whether people will start to question that authenticity.”

