Google DeepMind: next phase of AI automation to redesign operations
Executives face structural overhaul as systems evolve into autonomous collaborators, reshaping workflows, metrics, and leadership roles
Artificial intelligence (AI) is no longer simply enhancing productivity. It is redefining how businesses operate at their core. As companies move from deploying tools to orchestrating autonomous systems, executives are being forced to rethink workflows and the fundamental structure of their organizations.
The shift marks a transition from incremental efficiency gains to structural transformation across the enterprise. Early adoption has focused on automating existing processes. Leading firms are now redesigning operations—aligning decision-making, incentives, and metrics with increasingly autonomous AI systems.
“The unit of production within a business fundamentally changes as a result of AI. The bottleneck moves from being the technology to being your organization and the way that you do things,” said Kareem Ayoub, Vice President of AI technical strategy at Google DeepMind.
Many companies remain in an early phase of adoption, applying AI to existing structures without rethinking how work is organized. They continue to pursue existing goals using unchanged processes, thereby limiting the technology's broader impact.
“Leaders are deploying AI in the same organization with the same people, for the same goals, with the same processes. That is building a faster horse. There is extraordinary value in that, but the fundamental insight is that the unit of production changes,” he said.
The implications are already visible in early case studies across industries. Ayoub pointed to a law firm that shifted its focus from billable hours to customer satisfaction, resulting in sustained year-on-year revenue growth of 30% to 40%. The example underscores how AI can reshape not only execution, but also what businesses choose to optimize.
Agentic shift
The discussion took place at the AI and Business Innovation Summit in London on March 25, organized by Economist Impact. The session was moderated by Jonathan Birdwell, Global Head of the Policy and Insights Team at Economist Impact, who framed the debate around the transition of AI from assistive tools to autonomous systems.
Ayoub outlined a near-term evolution toward more advanced “agentic” capabilities. These systems operate independently, collaborate with one another, and increasingly execute tasks without continuous human oversight. He said these developments will accelerate over the next one to two years and fundamentally change how work is executed.
“You will start to have teams of agents that are able to collaborate with each other, decomposing complex problems into different parts,” Ayoub said. “The second capability is headless agents, where you can send work off and check back later, and the third is self-learning agents.”
He said it is the combination of these three capabilities, rather than any one in isolation, that delivers the strongest outcomes.
Ayoub also drew a distinction between autonomy and what he called asynchronous execution. In practice, he said, companies should think less about unsupervised systems and more about how long an agent can run before a human needs to check in. That depends on how verifiable the goal is.
If an agent is handling a software task, for example, it can write tests and work toward completion with relatively limited intervention. If it is producing a marketing campaign, executives are likely to review the work more frequently because tone, brand, and judgment are harder to verify.
Ayoub suggested that this gives companies a practical way to evaluate deployment: measure and reduce the number of human interventions needed between an initial request and a completed goal.
He cited a recent example in which a researcher deployed 16 agents running continuously for two weeks to rebuild a complex compiler. The work would typically require a full engineering team. The project cost roughly $20,000 in compute resources, illustrating the rapid compression of time and cost enabled by such systems.
Jagged intelligence
Despite these advances, AI performance remains uneven. Ayoub described this as “jagged intelligence,” where systems can outperform humans in complex domains while failing at simpler tasks. He pointed to a coding competition in which more than 150 human teams failed to solve a problem that an AI system completed in under 30 minutes by approaching it from a different strategic angle.
“AI today is really good in some capabilities and falls short in others that seem simpler,” he said. “That jagged intelligence is the central design challenge for every leader.”
He said the same model that invents mathematical theorems may miscount the number of chairs in a room, illustrating the gap in performance. He added that leading companies design around areas where AI performs best.
This approach requires a shift in mindset—from expecting uniform performance to strategically deploying AI where it delivers the most value.
A recurring theme was that technology is no longer the primary constraint.
Organizational design, data infrastructure, and leadership behavior have become the key limiting factors. Companies must ensure their systems are “agent-ready” rather than building solutions around temporary model limitations.
“The best companies focus on getting their infrastructure and data agent-ready,” Ayoub said. “When models hit silos inside your company, you lose the value of your investment.”
He warned that rapid improvements in model capability can quickly render some investments obsolete.
“The best companies measure outcomes, not activity,” he said. “They invest heavily in identifying the right metric to improve.”
“Raise your hand if you use AI regularly. Now keep your hand raised if you can tell me exactly how that value shows up on your P&L. That gap is probably core to the issue,” he added.
Leadership gap
The most direct challenge was aimed at senior leadership. Executives who do not actively use AI themselves risk undermining their organizations’ adoption efforts, leaving them poorly placed to evaluate initiatives or guide implementation.
“Leaders not using AI is probably problem number one, two and three within companies today,” Ayoub said.
“If you are managing a portfolio of experiments, it becomes very challenging to assess it if you have not run any yourself,” he said. “It is like steering a ship without a compass.”
He said the best companies have leaders using the technology directly so they understand where it works and where it fails.
Ayoub said that leaders who do not use AI themselves often struggle to accurately judge frontier capabilities. Some remain constrained by legacy systems, while others assess advanced models through free consumer tiers or ask them to perform tasks that were already possible several years ago.
He also pointed to an authenticity gap. When senior leaders visibly use AI in their own work, it signals that experimentation is acceptable across the organization. He said that dynamic played out within his own team: after colleagues noticed that he had been using Gemini heavily, their own use of AI tools rose sharply across engineering, product, assistant and strategy roles.
Looking ahead, AI is expected to become embedded in daily work at every level of the organization. Rather than treating it as a specialized capability, companies will need to integrate it into core operations and decision-making processes.
Ayoub’s closing advice reflected this urgency. Leaders, he said, should begin by applying AI directly to their own work—not as an experiment, but as a routine part of how they operate.
The next phase of AI adoption will not be defined by technological breakthroughs alone, but by how effectively organizations adapt their structures, incentives, and leadership practices to harness its full potential.