OpenAI enters the open-source AI race with new reasoning models – while keeping its IP

Despite what its name suggests, OpenAI had not released an "open" model – one that includes access to the weights, the numerical parameters often described as the brain of the model – since GPT-2 in 2019. That changed Tuesday: the company launched a long-awaited open-weight model in two sizes, which OpenAI says is at the frontier of open-source performance.
"We are excited to make this model, the result of billions of dollars of research, available to the world to get AI into the hands of as many people as possible," said OpenAI CEO Sam Altman about the release. "As part of this, we hope that this release will enable new kinds of research and the creation of new kinds of products." He added that he is "excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for broad benefit."
Altman had teased the coming models in March, two months after admitting, following the success of open models from China's DeepSeek, that the company had been "on the wrong side of history" when it came to opening its models to developers and builders. But while the weights are now public, experts note that the new OpenAI models are hardly "open." The company is by no means giving away its crown jewels: the proprietary architecture, routing mechanisms, training data, and methods that power its most advanced models – notably the long-awaited GPT-5, widely expected to be released this month – remain tightly under wraps.
OpenAI targets AI builders and developers
The two new models' names – gpt-oss-120b and gpt-oss-20b – may be indecipherable to non-engineers, but that is because OpenAI is targeting AI builders and developers looking to quickly build on real use cases on their own systems. The company noted that the larger of the two models can run on a single Nvidia 80GB chip, while the smaller one fits on consumer hardware such as a Mac laptop.
Greg Brockman, OpenAI's co-founder and president, acknowledged on a press call ahead of the launch that it "has been a long time" since the company released an open model. He added that it is "something that we view as complementary to the other products that we release," and that, combined with OpenAI's proprietary models, the open models "really accelerate our mission of ensuring that AGI benefits all of humanity."
OpenAI said the new models perform well on reasoning benchmarks, which have become key measures of AI performance, with models from OpenAI, Anthropic, Google, and DeepSeek competing fiercely on their ability to tackle multi-step logic, code generation, and complex problem-solving. Since the open-source DeepSeek R1 shook the industry in January with its reasoning capabilities at a much lower cost, many other Chinese models have followed suit, including Alibaba's Qwen and Moonshot AI's Kimi models. While OpenAI said during the pre-launch press briefing that the new open models are a proactive effort to provide what users want, they are also a strategic response to accelerating open-source competition.
Notably, OpenAI declined to compare its new open models against Chinese open-source systems like DeepSeek or Qwen – despite the fact that those models have recently surpassed American rivals on key reasoning benchmarks. In the press briefing, the company said it was confident in its benchmarks against its own models and would leave it to other members of the AI community to test further and "decide."
Avoiding intellectual-property leakage
OpenAI's new open models are built using a mixture-of-experts (MoE) architecture, in which the system activates only the "experts," or subnetworks, it needs for a specific input, rather than using the entire model for every request. Dylan Patel, founder of the research firm SemiAnalysis, pointed out in a post on X ahead of the release that the models were trained using only well-known architecture components – meaning the building blocks OpenAI used are already familiar to the open-source community. He noted that this was a deliberate choice: by avoiding proprietary training techniques or architectural innovations, OpenAI could release a genuinely useful model without leaking the intellectual property that powers its proprietary frontier models like GPT-4o.
For example, in a model card accompanying the release, OpenAI confirmed that the models use a mixture-of-experts architecture with 12 active experts out of 64, but it does not describe the routing mechanism, which is a crucial and proprietary part of the architecture.
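The gating idea behind a mixture-of-experts layer can be illustrated with a toy sketch. To be clear, this is purely illustrative: OpenAI has not disclosed its routing mechanism, and the scoring function and scalar "experts" below are hypothetical stand-ins, not the company's method.

```python
import math

NUM_EXPERTS = 64   # total experts per layer (figure from the model card)
TOP_K = 12         # experts activated per input (figure from the model card)

def softmax(xs):
    # Standard numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def expert(idx, x):
    # Stand-in "expert" subnetwork: a trivial scalar transform.
    # In a real model each expert is a full feed-forward network.
    return (idx + 1) * 0.01 * x

def moe_forward(x, router_logits):
    # Rank experts by the router's score for this input and keep the top k.
    ranked = sorted(range(NUM_EXPERTS),
                    key=lambda i: router_logits[i], reverse=True)
    chosen = ranked[:TOP_K]
    # Renormalize gate weights over the chosen experts only, then mix
    # just those experts' outputs -- the other 52 never run.
    gates = softmax([router_logits[i] for i in chosen])
    output = sum(g * expert(i, x) for g, i in zip(gates, chosen))
    return output, chosen

# Fake router scores standing in for a learned gating network.
logits = [math.sin(i * 0.7) for i in range(NUM_EXPERTS)]
y, used = moe_forward(2.0, logits)
print(f"activated {len(used)} of {NUM_EXPERTS} experts")
```

The point of the sketch is the compute saving: per input, only 12 of the 64 expert subnetworks execute, which is why a model with a large total parameter count can still run cheaply. The part OpenAI keeps proprietary is how the real router's scores are learned, not this top-k selection pattern itself.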
"You want to minimize risk for your business, but you (also) want to be as useful as possible to the public," Aleksa Gordić, a former Google DeepMind researcher, told Fortune, adding that companies like Meta and Mistral, which have also focused on open models, likewise hold back proprietary information.
"They minimize the IP leakage and de-risk their core business, while sharing a useful artifact that will enable the startup ecosystem and developers," he said. "It is by definition the best they can do given these two opposing objectives."