British lawmakers accuse Google of “breach of trust” over delayed Gemini 2.5 Pro AI safety report

A group of 60 British lawmakers has signed an open letter accusing Google DeepMind of violating its AI safety commitments with the release of Gemini 2.5 Pro. The letter, published by the political activist group PauseAI, accuses the AI company of breaking the frontier AI safety commitments it signed at an international summit in 2024 by releasing the model without key safety information.

At an international summit co-hosted by the United Kingdom and South Korea in February 2024, Google and other signatories promised to “publicly report” their models’ capabilities and risk assessments, as well as to disclose whether external organizations, such as government AI safety institutes, had been involved in testing.

However, when the company released Gemini 2.5 Pro in March 2025, it did not publish a model card, the document that details key information about how models are tested and built. That was despite the company’s claims that the new model had surpassed competitors on industry benchmarks by “meaningful margins.” Instead, the AI lab published a simplified six-page model card three weeks after it made the model publicly available as a “preview” version. At the time, one AI governance expert described that report as “meager” and “worrisome.”

The letter called Google’s delay a “failure to honor” the company’s summit commitment and “a troubling breach of trust with governments and the public.” It also took issue with what it called a “minimal model card” that lacked “any substantive detail about external evaluations,” as well as Google’s refusal to confirm whether government agencies such as the UK AI Security Institute (AISI) participated in testing.

In a statement sent to Fortune on Friday, a Google DeepMind spokesperson said the company stood by its “transparent testing and reporting processes” and was fulfilling its public commitments, including the Seoul Frontier AI Safety Commitments.

“As part of our development process, our models undergo rigorous safety checks, including by the UK AISI and other third-party testers—and Gemini 2.5 is no exception,” the statement said.

When Google released the preview version of Gemini 2.5 Pro, critics said the missing safety report appeared to violate several other promises the AI company had made, including commitments made to the White House in 2023 and a voluntary code of conduct on artificial intelligence signed in October 2023.

The company had said in May that a more detailed “technical report” would come later, once a final version of the Gemini 2.5 Pro “model family” was fully available to the public. It appeared to deliver on that with a longer report at the end of June, months after the full version was released.

Google is not the only company to have signed these pledges and then appeared to walk back safety disclosures. Meta’s model card for its Llama frontier model was about as brief and limited in detail as the one Google published for Gemini 2.5 Pro, and it likewise drew criticism from AI safety researchers.

Earlier this year, OpenAI announced that it would not publish a technical safety report for its new GPT-4.1 model. The company argued that GPT-4.1 is “not a frontier model,” because its reasoning systems such as o3 and o4-mini surpass it on many benchmarks.

The recent letter calls on Google to reaffirm its commitment to AI safety, asking the tech company to clearly define deployment as the point at which a model becomes accessible to the public; to commit to publishing safety evaluation reports on a set timeline for all future model releases; and to provide full transparency for each release by naming the government agencies and independent third parties involved in testing, along with exact testing timelines.

“If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards,” Lord Browne of Ladyton, a member of the House of Lords and one of the letter’s signatories, said in a press release.
