October 6, 2025

DeepSeek launches a GPT-5 competitor optimized for Chinese chips

The Chinese AI startup DeepSeek shocked the world in January with an AI model, called R1, that competed with the best LLMs from OpenAI and Anthropic. It was built at a fraction of the cost of those other models, using far fewer Nvidia chips, and was released for free. Now, just two weeks after OpenAI debuted GPT-5, DeepSeek is back with an update to its flagship V3 model that, experts say, matches GPT-5 on certain benchmarks, and is strategically priced to undercut it.

DeepSeek’s new V3.1 model was quietly announced in a message to one of its groups on WeChat, China’s all-in-one messaging and social app, as well as on the Hugging Face platform. Its debut touches on several of today’s biggest AI storylines at once. DeepSeek is a key part of China’s broader push to develop, deploy, and control advanced AI systems without relying on foreign technology. (Indeed, the new DeepSeek V3 model is specifically tuned to perform well on Chinese-made chips.)

While American companies have hesitated to adopt DeepSeek’s models, they have been widely adopted in China and increasingly in other parts of the world. Even some American companies have built applications on top of DeepSeek’s R1 reasoning model. At the same time, researchers warn that the models’ outputs often hew closely to narratives approved by the Chinese Communist Party, raising questions about their neutrality and reliability.

China’s AI push goes beyond DeepSeek: its industry also includes models such as Alibaba’s Qwen, Moonshot AI’s Kimi, and Baidu’s Ernie. DeepSeek’s new release, however, comes just after OpenAI’s GPT-5, a rollout that fell short of industry observers’ high expectations.

OpenAI is worried about China and DeepSeek

DeepSeek’s efforts have certainly kept U.S. labs on their guard. At a recent dinner with journalists, OpenAI CEO Sam Altman said that competition from Chinese open-source models, including DeepSeek, influenced his company’s decision to release its own open-weight models two weeks ago.

“It was clear that if we did not, the world was going to be mostly built on Chinese open-source models,” Altman said. “That was a factor in our decision, for sure. It wasn’t the only one, but it loomed large.”

In addition, last week the United States granted Nvidia and AMD licenses to export China-specific AI chips, including Nvidia’s H20, but only if they agree to give 15% of the revenue from those sales to Washington. Beijing quickly pushed back, moving to restrict purchases of Nvidia chips after Commerce Secretary Howard Lutnick told CNBC on July 15: “We don’t sell them our best stuff, not our second-best stuff, not even our third-best.”

By optimizing DeepSeek for Chinese-made chips, the company signals resilience against American export controls and a desire to reduce its dependence on Nvidia. In its WeChat post, DeepSeek noted that the new model format is optimized for “next-generation domestic chips that will soon be released.”

Altman, at the same dinner, warned that the United States underestimates the complexity and seriousness of China’s progress in AI, and said that export controls are probably not a reliable solution.

“I’m worried about China,” he said.

A smaller leap, but still striking advances

Technically, what makes the new DeepSeek model notable is how it was built, with some advances that would be invisible to consumers. But for developers, these innovations make V3.1 cheaper to run and more versatile than many closed, more expensive rival models.

For example, V3.1 is enormous: 685 billion parameters, which puts it on par with many top “frontier” models. But its “mixture-of-experts” design means only a fraction of the model activates when responding to any given query, keeping computing costs lower for developers. And unlike previous DeepSeek models, which split tasks between those that could be answered instantly from the model’s pre-training and those that required step-by-step reasoning, V3.1 combines both fast responses and reasoning in a single system.
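The mixture-of-experts idea can be illustrated in a few lines. The sketch below is a toy example, not DeepSeek’s actual architecture: all the sizes and the simple router are hypothetical. A router scores a handful of “expert” weight matrices and only the top-k are ever multiplied, so compute per query scales with k rather than with the total number of experts, which is why a 685-billion-parameter model can stay cheap to run.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS, TOP_K, DIM = 8, 2, 4  # toy sizes, not DeepSeek's real ones
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]  # one weight matrix per "expert"
router = rng.normal(size=(DIM, NUM_EXPERTS))  # maps a token vector to one score per expert

def moe_forward(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    scores = x @ router                        # one score per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the k best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # softmax over the selected experts only
    # Only TOP_K of the NUM_EXPERTS matrices are multiplied -- that is the saving.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=DIM))
print(y.shape)  # (4,)
```

In a real model the experts are feed-forward layers inside each transformer block and the routing is learned during training, but the cost structure is the same: parameters grow with the expert count while per-token compute grows only with k.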

GPT-5, as well as the most recent models from Anthropic and Google, has a similar capability. But few open models have managed it so far. V3.1’s hybrid architecture is “the biggest feature by far,” Ben Dickson, a tech analyst and founder of the TechTalks blog, told Fortune.

Others point out that even if this DeepSeek model is less of a leap than the company’s R1 model, a reasoning model distilled from the original V3 that shocked the world in January, the new V3.1 is still striking. “It’s quite impressive that they keep making non-marginal improvements,” said William Falcon, founder and CEO of the AI developer platform Lightning AI. But he added that he would expect OpenAI to respond if its own open-source model “starts to lag significantly,” and noted that DeepSeek’s model is harder for developers to deploy, while OpenAI’s version is fairly easy to get running.

For all the technical details, however, DeepSeek’s latest release underscores that AI is increasingly seen as part of a simmering cold tech war between the United States and China. In that light, if Chinese companies can build better AI models for what they claim is a fraction of the cost, American competitors have reason to worry about staying ahead.


