Google Cloud executive urges focus on core strengths over full-stack AI ownership
Businesses weigh control, scalability and risk as reliance on external tools grows across enterprise artificial intelligence deployments
Companies rushing into artificial intelligence (AI) risk weakening the very advantages that made them competitive in the first place.
As AI moves deeper into core operations, the sharper question for executives is no longer whether to adopt it but where to draw the line between what must stay in-house and what can be handed to partners. The answer begins with a disciplined view of what a business is uniquely good at rather than a broad ambition to become an “AI company.”
“Start with what you want to be famous for, and put all your effort into that, and find other people to deal with the stuff you’re not going to be famous for,” said John Abel, managing director for EMEA in the Office of the CTO at Google Cloud. “Know what you want as an outcome from the technology. Don’t just run into it.”
Abel said AI decisions should be anchored in measurable baselines rather than assumptions. That includes mapping current process performance, identifying where variability or failure occurs and defining what level of improvement justifies automation. Without that groundwork, organizations risk deploying AI into poorly understood workflows, leading to marginal gains or new forms of error that are harder to detect and manage.
“The first thing you need to do as a business is be honest about what your human error level is, because AI is a probability,” Abel said. “Then you can implement a solution, but all of it is based on what you’re trying to get as an outcome.”
He said the same logic applies more broadly across enterprise operations.
AI is well-suited to repetitive processes and tasks that need to scale. But most companies are not starting from scratch. They are carrying years of legacy systems, inherited workflows and existing technology choices. In that environment, trying to modernize everything at once is both unrealistic and wasteful.
“You can’t modernize everything,” Abel said. “You should focus on the things that need to scale, that you’re focused on removing toil and friction.”
Google Cloud, the cloud computing arm of Alphabet, provides infrastructure, data platforms, and AI models that enterprises use to build and scale applications across industries.
Build vs scale
The discussion took place at the AI and Business Innovation Summit in London on March 25, organized by Economist Impact. The session, moderated by Jonathan Birdwell, global head of policy and insights at Economist Impact, examined how businesses should think about AI dependence as the technology shifts from pilots to core strategy.
Abel said the build-versus-buy debate should be approached with strict commercial discipline.
“I don’t want to own the compute, the model and the infrastructure,” Abel said, speaking from the perspective of running a business unit.
That stance reflects the speed of change. A company that decides to own too much of the stack is not making a one-time investment. It is committing itself to constant reinvestment in technologies that may shift again within months.
Partnerships, therefore, make the most sense in the layers that are expensive to maintain but do not define a company's competitive identity. The goal is to avoid tying up capital and management attention in infrastructure battles that do not make a business more distinctive. Abel made a similar argument on timing.
“It’s not about being first to market, it’s about being first to scale,” Abel said. “You position yourself in the industry, saying, ‘We’re doing this.’ You get the headline. But someone else takes a scaling approach, and they get to scale faster than you.”
Pilot projects and minimum viable products (MVPs) can create a false sense of progress. A pilot may generate visibility and reassure the board that something is happening. But once it ends, the harder stage begins: securing more investment, redesigning workflows and building a production system.
A rival that starts later but plans for scale from day one may still move faster where it matters most.
Abel said internal adoption should follow the same principle; rather than launching thousands of agents at once, his team started with six. That figure later rose to 26, while individuals across the business also built their own agents. Leadership should establish safety barriers and allow adoption to expand in a manageable way.
Sovereignty rethink
The conversation also turned to sovereignty, a term that has become more common in European boardrooms as geopolitical uncertainty and regulatory scrutiny intensify.
“It’s less about sovereignty. It’s surviving. You have to be accountable for knowing where your data is and how you survive from it,” Abel said.
Most companies already live with dependencies they cannot easily unwind. They use foreign software, rely on outside infrastructure and carry architectural choices made long before the current AI boom. In that sense, AI does not create the first dependency problem. It exposes how many were already there.
For regulated sectors, the issue is especially sharp.
He pointed out that financial services rules make clear that firms cannot materially outsource risk. That means executives must be able to answer hard questions about where data sits, what happens if a vendor relationship fails and whether systems can keep operating if a tool no longer fits policy or jurisdictional requirements.
He said companies should avoid locking themselves into a single provider too tightly. Abel added that he prefers to design around smaller models and tighter constraints first, then move to larger models only when the use case clearly demands it.
Abel closed by stressing the importance of workforce readiness as AI adoption accelerates.
He said business leaders should retain domain experts and equip them with AI tools, while technologists support execution. The strongest advocates are often those who were initially skeptical and only later discovered where the technology could genuinely help.
He emphasized that governance must evolve alongside adoption, focusing on setting clear safety boundaries as AI use expands.
Finally, he advised executives to resist the temptation to pursue uniform strategies across regions and business units. Different markets face distinct regulatory pressures and risk profiles, so AI deployment must be calibrated locally. A flexible architecture, supported by strong internal governance, allows companies to adapt without rebuilding systems from scratch.