The latest contender in the AI battle strikes a major blow at Big Tech

The ongoing slugfest between tech players racing to build the most intuitive and powerful AI has just taken a knockout punch.
Who landed the blow?
An increasingly impressive new release from DeepSeek, V3.1, a system with 685 billion parameters that can complete a full coding task for roughly $1.01, compared with a starting price of about $70 on traditional systems.
🚨 Breaking: Deepseek v3.1 is there! 🚨
The AI giant drops its latest upgrade, and it's a big one:
⚡ 685B parameters
Wider context window
Multiple tensor formats (BF16, F8_E4M3, F32)
💻 Downloadable now on Hugging Face
API / inference awaited. The AI race has just received … pic.twitter.com/nilcnupkaf
– DeepSeek News Commentary (@Deepsseek) August 19, 2025
DeepSeek is no stranger to impressing the world. Its R1 model, deployed last year, immediately surprised AI observers with its speed and precision compared with Western competitors, and it seems V3.1 may follow suit.
This pricing, combined with the sophistication of the service, poses a direct challenge to the larger, newer frontier systems from OpenAI and Anthropic, both based in the United States. A confrontation between Chinese and American tech has been playing out for years, but a formidable entrant from a much smaller company could usher in a new era of competition. Alibaba Group Holding Ltd. and Moonshot have also released AI models that challenge American technology.
“While many recognize DeepSeek’s achievements, this represents only the start of China’s AI innovation wave,” Louis Liang, an AI-sector investor at Ameba Capital, told Bloomberg. “We are witnessing the advent of mass AI adoption; this goes beyond national competition.”
Why is all this important?
DeepSeek’s entire approach to how AI can work differs from how most American tech companies have tackled the idea. It could shift global competition toward accessibility rather than raw power, VentureBeat reports.
It also challenges giants like Meta and Alphabet by processing a much larger amount of data, giving it a larger “context window,” the amount of text a model can consider when responding to a request. This matters to users because it increases the model’s ability to stay coherent in long conversations, to draw on memory of complicated tasks it has completed before, and to understand how different parts of a text relate to one another.
Most importantly, users love it.
DeepSeek V3.1 is already the #4 trending model on HF after a silent release with no model card 😅😅😅
The power of 80,000 followers @huggingface (first organization with 100K when?)! pic.twitter.com/ojebfwq7st
– Clem 🤗 (@ClementDelangue) August 19, 2025
Another major distinction? DeepSeek’s V3.1 earned a score of 71.6% on the Aider coding benchmark, a major victory given that it had debuted on Hugging Face, the popular AI model hub, only the night before and almost instantly blew past rivals like OpenAI’s GPT-4.5 model, which scored 40%.
“DeepSeek v3.1 scores 71.6% on Aider, new non-reasoning SOTA,” tweeted AI researcher Andrew Christianson, adding that it is “1% better than Claude Opus 4 while being 68 times cheaper.” The achievement places DeepSeek in rarefied company, matching performance levels previously reserved for the most expensive proprietary systems.