Google published a technical report on its latest AI model, Gemini 2.5 Pro, weeks after its launch, but experts say the report lacks key safety details, making it difficult to assess the model’s risks.
The report is part of Google’s effort to provide transparency about its AI models, but Google’s approach differs from its rivals’: the company publishes technical reports only for models it considers to have moved beyond the experimental stage, and it reserves some safety evaluation findings for a separate audit.
Experts, including Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, and Thomas Woodside, co-founder of the Secure AI Project, expressed disappointment with the report’s sparsity, noting that it doesn’t mention Google’s Frontier Safety Framework (FSF), introduced last year to identify potential AI risks.
Wildeford said the report’s minimal information, released weeks after the model’s public launch, makes it impossible to verify Google’s public commitments to safety and security. Woodside also questioned Google’s commitment to timely safety evaluations, pointing out that the company’s last report on dangerous capability tests was in June 2024, for a model announced in February 2024.
Moreover, Google hasn’t released a report for Gemini 2.5 Flash, a smaller model announced last week, although a spokesperson said one is “coming soon.” Woodside hopes this signals that Google will start publishing more frequent updates, including evaluations for models not yet publicly deployed.
Other AI labs, such as Meta and OpenAI, have also faced criticism for lacking transparency in their safety evaluations. Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, described the trend of sporadic and vague reports as a “race to the bottom” on AI safety.
Google has stated that it conducts safety testing and “adversarial red teaming” for its models before release, even if not detailed in its technical reports.