AI Investment Bubble Tests Whether Enterprise Adoption Can Deliver Returns
Heavy spending on chips, data centers, and models is colliding with boardroom pressure to prove durable returns beyond experimentation

A surge of global investment has pushed artificial intelligence to the center of boardroom strategy, triggering comparisons with earlier technology booms and busts. Executives and investors are increasingly questioning whether today’s spending reflects sustainable value creation or another bubble that will deflate once expectations collide with operational reality.
Unlike earlier cycles driven primarily by consumer adoption, the current AI wave is being judged inside large institutions that must justify multibillion‑dollar commitments. Banks, utilities, and multinational firms are asking whether AI can deliver measurable gains once deployed across complex, regulated operations rather than remaining confined to pilots and demonstrations.
“We are in an AI bubble, in the sense that there’s investment across the whole ecosystem,” said Sophia Velastegui, board director and chair of the technology and cybersecurity committee at BlackLine Inc., a US-based cloud accounting software provider. “But only a small percentage will become long-term leaders, just as we saw after the dot-com bubble.”
Velastegui said the historical parallel lies not in exuberance alone, but in the uneven survival rate that follows periods of rapid capital inflow. She said many firms will disappear after heavy spending fails to translate into defensible advantage, while a small cohort will use the shakeout to entrench their positions.
“There’s a lot of experimentation going on, with a lot of money being spent,” said David Sully, chief executive of Advai, a UK-based AI assurance and risk management firm. “What everyone is looking for is what’s actually going to stick.”
Sully said early deployments often struggle to demonstrate return on investment because they are layered onto legacy processes without rethinking how decisions are made or risks are managed. He said this gap between ambition and execution is becoming more visible as budgets rise.
Sarah Hammer, managing director and head of trading, digital assets, and artificial intelligence compliance at Charles Schwab Corporation, a major US financial services firm, said the scale of infrastructure buildout has sharpened scrutiny at the executive and board level.
“Last year alone, there was about $37 billion of investment in AI infrastructure, and more than $5 trillion of data-center buildout is taking place right now,” Hammer said. “The challenge is distinguishing durable value from speculation and making sure those investments are aligned with real structural demand.”
Hammer said studies show that many large enterprises are still failing to measure meaningful returns from AI programs, even as spending accelerates. She said this mismatch is forcing organizations to reassess where and how they deploy the technology.
Taken together, the speakers’ comments suggested that the coming phase of AI adoption will be defined less by enthusiasm and more by attrition. The distinction between speculation and durability, they said, will determine which companies survive a correction and which are left behind.
AI Governance Limits
These issues were debated at The Global Boardroom event organized by FT Live on December 9, under the theme “The AI Arms Race: How the Technology Is Disrupting Economies.” The discussion, moderated by Murad Ahmed, technology news editor at the Financial Times, brought together executives from finance, energy, and technology to examine whether the current AI surge resembles a transient bubble or a structural shift in how economies operate.
Hammer said many large financial institutions already run hundreds of AI applications, even if few are visible to consumers.
“They’re embedded in operations, compliance, and risk management,” she said. “They’re not flashy, but they’re where value is being created.”
She said this quiet deployment reflects a deliberate strategy to apply AI where it can improve efficiency and decision-making without introducing unacceptable risk.
Sully said the gap between internal deployment and external-facing AI products reflects unresolved concerns around reliability, security, and accountability.
“If a malicious actor can influence outcomes where money is at stake, you’re in a very different risk category,” he said. “The industry hasn’t fully solved those problems yet.”
Velastegui said governance is becoming the dividing line between pilots that impress and systems that can operate at scale. She said organizations that fail to establish controls early risk amplifying errors as AI systems are replicated across the enterprise.
She also pointed to the accelerating pace of AI capability as a complicating factor. She said that when she joined Microsoft in 2017, early large language models (LLMs) operated at a rudimentary level compared with today’s systems, and that the next phase of development will be even faster.
Regulation Bites
Regulated industries illustrate why governance has moved to the forefront of AI strategy.
That regulatory pressure is intensifying on both sides of the Atlantic. In Europe, the European Parliament passed the EU AI Act in March 2024, creating the world’s first comprehensive, legally binding framework for artificial intelligence. The law applies a tiered, risk-based approach, with stricter obligations for systems that pose greater risks to safety and fundamental rights, and it applies to any AI system whose outputs are used in the EU, regardless of where it is developed.
In the United States, the regulatory path is more fragmented but moving in a similar direction. In December 2025, President Donald Trump signed an executive order aimed at creating a federal AI framework and pre-empting conflicting state-level rules, arguing that regulatory fragmentation could hinder innovation. Together, these moves underline how regulation is becoming a binding constraint on AI deployment, particularly in finance, healthcare, and energy.
Hammer said lending decisions provide a clear example of how regulation shapes AI deployment.
“In the United States, there are rules around fair lending and explainability,” she said. “If a system denies credit, institutions need to explain why, and that changes how AI can be deployed.”
She said these constraints are channeling adoption toward data-heavy internal work such as compliance monitoring, document analysis, and reporting, where human oversight remains central and regulatory risk is lower.
Sully said regulators themselves are becoming part of the experimentation process. In the UK, he noted, the Financial Conduct Authority has launched live AI testing environments to help banks explore deployment under regulatory supervision.
“If that cohesion emerges, adoption accelerates,” he said. “If it doesn’t, governance becomes the bottleneck.”
Sully said similar tensions are emerging globally as regulators balance innovation with systemic risk, particularly as AI systems begin to influence financial decisions and customer interactions.
Power And Compute
The AI investment boom is becoming inseparable from questions of energy, infrastructure, and resilience. Rapid expansion of data centers, compute clusters, and network capacity is placing unprecedented demands on power systems, turning electricity supply into a strategic constraint on AI growth.
“AI is increasing pressure on grids, data centers, and energy supply,” said João Nascimento, chief information officer of EDP Group, a Portuguese electric utility. “At the same time, it helps us manage complexity and accelerate the energy transition.”
At EDP, AI is being treated both as a new source of electricity demand and as a tool to manage the shift toward cleaner, more resilient energy systems. Internal forecasts suggest that power consumption from data centers could double by the end of the decade, intensifying pressure on grids while accelerating investment in smarter infrastructure.
Rather than deploying AI indiscriminately, the company has concentrated on a limited number of high-impact initiatives. Around 50 projects have already been delivered, ranging from incremental process improvements to more advanced systems supporting renewable generation and operations.
One example is an AI-based copilot designed to help engineers diagnose operational alarms by drawing on manuals and historical fixes. The system has reduced resolution times and improved asset availability, offering a practical illustration of how AI can support reliability in critical infrastructure.
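Nascimento did not describe the copilot’s internals. As a rough, purely illustrative sketch of the retrieval step such a tool implies, the hypothetical Python snippet below ranks manual excerpts and historical fixes by simple word overlap with an alarm description; the function names and sample knowledge base are invented for illustration, and a production system would rely on richer plant data, embeddings, and a language model.

```python
# Illustrative only: a toy retrieval step for an alarm-diagnosis assistant.
# It ranks past fixes and manual excerpts by word overlap with the alarm text.

def score(query: str, passage: str) -> int:
    """Count words shared between the alarm text and a knowledge passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def suggest_fixes(alarm: str, knowledge_base: list[dict], top_k: int = 3) -> list[dict]:
    """Return the top-k passages most relevant to the alarm description."""
    ranked = sorted(knowledge_base, key=lambda doc: score(alarm, doc["text"]), reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    kb = [
        {"source": "manual", "text": "High winding temperature alarm: check cooling fan and oil level."},
        {"source": "history", "text": "Transformer temperature alarm cleared after replacing cooling fan relay."},
        {"source": "manual", "text": "Low oil pressure alarm: inspect pump and pressure sensor wiring."},
    ]
    for hit in suggest_fixes("winding temperature alarm on transformer T2", kb):
        print(hit["source"], "->", hit["text"])
```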
“We are not just throwing AI at problems,” Nascimento said. “We are rewriting processes so they are controllable, explainable, and scalable.”
Scaling those systems remains more difficult than building prototypes, he added, requiring sustained investment in data quality, workforce training, and organizational change.
Strategic Outlook
Workforce readiness is emerging as the clearest fault line in enterprise AI adoption. Panelists said the speed of deployment is now outpacing organizations’ ability to reskill employees, creating risks not just to productivity but to trust and institutional stability.
“AI should help people do their jobs better,” Hammer said. “Human oversight and accountability remain essential.”
Hammer said firms need to approach AI adoption as an organizational transformation rather than a technology rollout. That requires clear principles from leadership, defined responsibilities, and investment in training before systems are scaled across departments.
At EDP, workforce preparation has been treated as a prerequisite for deployment. Nascimento said more than 65% of the company’s workforce has already been upskilled in AI-related capabilities, ranging from basic literacy to advanced technical training.
“This is not optional,” he said. “If people understand the tools, they can guide how they are used.”
Velastegui said previous technology cycles offer a cautionary lesson. She said the dot-com crash showed how quickly jobs can disappear when companies chase growth without preparing their workforce.
“Lives were disrupted, but governance and strategy determined which companies ultimately grew,” she said.
Sully said the coming year will test whether organizations can translate AI investment into new roles rather than simply reducing headcount. He said firms that combine assurance, training, and clear accountability are more likely to retain talent as AI systems scale.
As AI spending continues, panelists said human capital may prove as decisive as compute or capital. Organizations that fail to bring employees along risk turning technological ambition into operational fragility, even as the tools themselves continue to improve.


