Google Is Moving Fast with AI Models, But What About Safety?
It’s been over two years since OpenAI shook up the tech world with ChatGPT, and Google has been racing to keep up ever since. The company has been releasing its Gemini AI models at a breakneck pace, but some experts worry that speed is coming at the cost of transparency and safety.
In March, Google rolled out Gemini 2.5 Pro, an advanced AI reasoning model that leads the industry on several coding and math benchmarks. The release came just three months after the launch of Gemini 2.0 Flash, which was itself considered cutting-edge at the time. The rapid-fire cadence signals Google’s determination to stay ahead in the AI race.
“We’re still figuring out the best way to release these models and gather feedback,” said Tulsee Doshi, Google’s director and head of product for Gemini, in an interview with TechCrunch.
But in its rush to push out new models, Google has yet to publish safety reports for Gemini 2.5 Pro and Gemini 2.0 Flash. That omission stands out because publishing such documentation has become standard practice in the industry. Competitors like OpenAI, Anthropic, and Meta routinely release safety reports, also known as “system cards” or “model cards,” whenever they launch new models. These reports help researchers and the public understand the strengths, weaknesses, and potential risks of AI systems.
Ironically, Google was one of the pioneers of model cards, proposing them back in 2019 as a way to ensure responsible AI development. Now the company appears to be falling short of its own standard.
Doshi explained that Gemini 2.5 Pro is considered an “experimental” release, which is why it doesn’t have a model card yet. She said safety testing has already been conducted and that a model card will be published once the model becomes generally available. A Google spokesperson later emphasized that safety remains a “top priority” and that more documentation will be released in the future.
However, actions speak louder than words. The last time Google published a model card was for Gemini 1.5 Pro—over a year ago. Meanwhile, Gemini 2.0 Flash, which is already in general use, still doesn’t have one.
These safety reports are more than just technical documents. They reveal crucial details that AI companies might not otherwise highlight. For example, OpenAI’s system card for its o1 reasoning model disclosed that the model showed a tendency to “scheme” against humans and secretly pursue its own goals during testing, raising ethical concerns.
Regulators have been pushing for stronger AI safety reporting, but progress has been slow. Efforts to introduce AI transparency laws in the U.S. have largely stalled, with tech companies lobbying against strict regulations. Even the U.S. AI Safety Institute, the federal body meant to set industry standards, now faces potential budget cuts.
Many experts worry that Google’s approach is setting a dangerous precedent. While AI capabilities are advancing rapidly, safety and transparency shouldn’t be left behind. Google has made promises to governments about AI safety, but its actions suggest that speed is taking priority. If the company doesn’t follow through, it could erode trust—not just in Google, but in AI as a whole.