Google is getting AI wrong
Google’s AI strategy, though technically impressive, lacks the cohesive vision, enterprise-grade stability, and governance needed to earn long-term trust from businesses navigating the AI era, says Paul Armstrong
Google’s I/O 2025 event was a spectacle of AI announcements, unveiling more than 100 updates across its product ecosystem. From the introduction of Veo 3, a generative video model, to Jules, an AI coding assistant, and enhancements to the Gemini AI model, the company demonstrated its commitment to embedding AI into every facet of its services. However, beneath the surface of these announcements lies a pressing concern: Google’s AI strategy appears increasingly reactive, raising questions about its suitability as a foundational platform for businesses aiming for long-term stability and innovation.
The integration of Gemini across Google’s suite of products, from Gmail and Docs to Android and Chrome, indicates an attempt to retrofit AI capabilities into existing services. While this approach showcases technical prowess, it lacks a cohesive vision, potentially leading to inconsistencies and integration challenges for enterprise users. A clearly articulated unifying strategy has yet to emerge across its AI initiatives, suggesting a company more focused on catching up than leading the AI frontier (except in scientific research).
Google once stood at the forefront of AI development, with its DeepMind breakthroughs and early work in natural language processing setting the tone for the field. Today, however, the company seems locked in a pattern of following cues from OpenAI and Anthropic rather than setting them. The rebranding of Bard to Gemini, while cosmetically useful, points to an underlying identity crisis and an inability to just ‘get it’. Businesses seeking to build long-term AI infrastructure want clarity on the direction of their tools and partners, something Google has pretty consistently failed to deliver.
Undoubtedly, Veo 3 represents a significant advancement in AI-generated video, capable of producing synchronised audio and video content. Jules offers developers AI-assisted coding support. While both tools are technologically impressive, their practical applications for businesses remain stunted in many ways (for now at least). Veo 3 had a powerful demo but still lacks widespread utility in most enterprise contexts. Tools like Northell’s MediaMagic are filling the compliance gaps that Veo 3 (and others) leave companies wide open to. AI code assistants can be valuable, but without rigorous governance they risk injecting unvetted or insecure code into production environments. The lack of a unified strategy for these tools raises concerns about their long-term viability and integration into enterprise workflows.
Google’s troubling track record
The deeper issue here is how Google handles product development. These tools were not introduced as parts of a clear platform vision. Instead, they feel like features competing for relevance inside a fragmented roadmap. Businesses building internal AI capabilities want predictable scaling paths, documented APIs, stable licensing models, and transparent security postures. Google delivers none of these consistently.
In contrast, competitors like OpenAI and Anthropic have adopted more focused approaches to AI development. OpenAI’s roadmap, particularly with its enterprise-grade offerings, has remained aligned with customer feedback and evolving capabilities. ChatGPT Enterprise offers businesses a customisable assistant with clear privacy guardrails, dedicated capacity, and administrative controls. Anthropic’s Claude models are designed with interpretability and safety as central features rather than afterthoughts. These companies are building platforms where trust is foundational, not optional.
For enterprises, the implications are significant. Building on a platform that lacks clear direction can lead to instability, especially when tools and services are subject to abrupt changes or discontinuation. Google’s track record on this front is troubling. From sunsetting tools like Stadia and Google+, to abandoning flagship initiatives like Inbox or Hangouts, Google has shown a repeated willingness to pivot without regard for downstream impact. For businesses, that volatility is not just inconvenient, it can be a liability.
Nowhere is this more on display (pun intended) than in the rollout of AI Search. The experimental Gemini-powered search overhaul has already begun to reshape how users experience Google Search, but not without cost. As Vox reported, Google’s new AI Overviews are not only glitchy and inconsistent but may also undermine the company’s core business model. Publishers, already squeezed by algorithmic opacity, now face a future where Google answers questions directly without linking out. For businesses built on visibility and search presence, the commercial implications of this shift are enormous.
Google’s approach may also create new legal and reputational risks. As AI-generated responses become central to user interaction, questions of liability, copyright, and misinformation become harder to manage. The company has not outlined a clear redress mechanism for content creators, and its stance on IP usage in training data remains murky. In a regulated future, vague positioning will not be good enough.
Enterprise buyers are, of course, taking note, and sweating over decisions that could have real ramifications for the success of their businesses. Many are now re-evaluating their tech stacks with an eye toward model provenance, contractual clarity, and AI lifecycle management. Tools are no longer judged only on what they can do today, but on how safely and predictably they scale over time. In this context, Google’s lack of transparency on Gemini’s training data, alignment strategy, or long-term pricing makes it a hard sell for risk-averse CIOs.
Undermining dominance
Even the educational ecosystem, where Google once dominated with Chromebooks and Docs, is fracturing. New data reveals that AI cheating has become so widespread in US schools that many institutions are bringing back blue books, the paper exam booklets, to ensure students actually think. If Google’s AI integrations continue to push output over cognition, it may find itself on the wrong side of a broader societal backlash against mindless automation that kills critical thought.
So what should businesses do in light of this uncertainty? First, they should isolate Google’s AI services from core infrastructure unless contractual terms offer genuine stability and transparency. Second, they should push vendors, including Google, to offer clearer AI governance, including data handling disclosures, model documentation, and output reliability guarantees. Third, they should consider blending best-of-breed AI services instead of defaulting to bundled ecosystems. Will any of this happen? Of course not, or at least not for a few years, while businesses catch up and try to avoid decisions that may bite them down the line in uncertain geopolitical times.
Google’s scale is often mistaken by business execs for strategic leadership. It remains a company with deep issues thanks to its monopolisation, government contracts and recent political decisions: an engineering juggernaut that has the talent and compute power to compete with any player in the world, and, in some ways, the most to lose and the most at risk. But in the AI era, being big alone isn’t enough. Strategy, trust, and integration discipline now matter just as much as model capability. Google’s I/O event was a reminder that momentum can create headlines, but not necessarily long-term value. While there are plenty of reasons to be happy, Google, and parent company Alphabet, are being attacked from pretty much all sides, and that makes them more unpredictable for businesses than in previous years.
Like most large platforms, Google is doing its best with unfinished products, ravenous stakeholders, few imposed guardrails, and slow, unknowledgeable government entities that are eager not to be left behind. Enterprises that want to build responsibly, scalably, and competitively in an AI-saturated future need more than demos and vision statements; they need infrastructure partners who treat AI with the same rigour as cloud, security or finance systems: foundational, not experimental. Google talked a good game at the most recent I/O, but now the company needs to show businesses it can prioritise continuity, governance and enterprise-grade accountability in order to win over customers who have a lot of options and who don’t want to get it wrong and become dependent.
Paul Armstrong is founder of TBD Group (including TBD+), and author of Disruptive Technologies.