Softwire CTO: AI FinOps hurdles complicate enterprise AI scaling
Enterprises struggle to control unpredictable costs while moving beyond pilots, even as real-world use cases gain executive buy-in
Artificial intelligence (AI) is moving from experimentation to execution—and the constraint is no longer capability but cost control.
At the center of that shift is AI financial operations, or AI FinOps: the discipline of managing and optimizing spending on AI systems, from GPUs and TPUs to model training and inference, to ensure deployments deliver measurable business value rather than runaway costs.
“What model are you using? How many tokens is it burning through? Is that a linear thing based on the number of users? Are some users doing something that causes a massive increase in token consumption? It’s much harder to predict the costs,” said Tim Benjamin, chief technology officer of Softwire.
“The FinOps around deploying AI at scale is like cloud FinOps, just maybe an order of magnitude harder,” he said.
Unlike traditional cloud workloads, where usage patterns can be forecast with reasonable accuracy, AI introduces volatile, behavior-driven consumption that can change rapidly across users and use cases—turning scaling decisions into an ongoing financial risk management challenge.
Traditional cloud systems allow organizations to estimate costs with reasonable confidence, Benjamin said. Over the years, enterprises have built financial operations practices that link usage metrics with predictable spending patterns. That foundation is now being challenged.
He said AI systems, particularly those built on large language models (LLMs), introduce variability at multiple levels. Token consumption can fluctuate depending on how users interact with the system, how complex their requests are and how workflows are structured. A small subset of users may generate disproportionately high costs, distorting overall projections.
This non-linear behavior creates a new layer of financial risk, he said. Unlike deterministic software, AI systems do not always behave consistently or predictably. The same input patterns can lead to different outputs and resource demands, complicating both budgeting and performance management.
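The distortion Benjamin describes can be made concrete with a small sketch. All of the numbers below are hypothetical (the per-token price, user counts, and usage figures are assumptions for illustration, not figures from Softwire): a forecast that treats every user as average can be overrun several times over by a small group of heavy users.

```python
# Illustrative sketch with hypothetical numbers: how a small subset of
# heavy users can distort a linear cost projection for an LLM-backed service.

PRICE_PER_1K_TOKENS = 0.002  # assumed flat inference price in USD

def monthly_cost(tokens_per_user: list[int]) -> float:
    """Total inference spend for one month, given each user's token usage."""
    return sum(tokens_per_user) / 1000 * PRICE_PER_1K_TOKENS

# Naive linear forecast: 1,000 users, each assumed to burn 50k tokens.
forecast = monthly_cost([50_000] * 1_000)

# Actual usage: 950 typical users, plus 50 power users whose long, complex
# conversations consume 2M tokens each.
actual = monthly_cost([50_000] * 950 + [2_000_000] * 50)

print(f"forecast: ${forecast:,.2f}")          # $100.00
print(f"actual:   ${actual:,.2f}")            # $295.00
print(f"overrun:  {actual / forecast:.2f}x")  # roughly 3x
```

Here just 5% of users nearly triple the bill, which is why per-user consumption metering, not headcount, is the unit that matters in AI FinOps.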
For executives, this shifts the conversation from technical feasibility to economic viability, Benjamin said. It is no longer enough to demonstrate that an AI system works; companies must show they can scale it sustainably without eroding margins or taking on uncontrolled costs.
In practice, this means financial and technical teams must work far more closely together. Cost modeling, system design and user behavior analysis are becoming tightly interlinked disciplines, he said. The result is a more complex operating environment where scaling decisions carry higher stakes.
Softwire is a UK-based technology consultancy that specializes in software engineering, data, and AI systems for enterprise clients. The firm works across sectors, including transport, finance and media, helping organizations design, build and scale complex digital platforms.
Rail AI use
The discussion took place at the AI and Business Innovation Summit, held in London on March 25 and organized by Economist Impact. The session, moderated by John Ferguson, global lead for new globalization at Economist Impact, focused on how enterprises can move from experimentation to sustainable value creation.
Benjamin illustrated the opportunity with a practical example from the rail industry. He described how travelers often have highly specific and personal requirements when planning journeys, ranging from preferred times to avoiding crowded conditions or coordinating with other passengers.
“People who want to travel by train have lots of strange conditions in their mind about their particular journey,” he said.
“You start to construct your particular set of conditions for your journey. It can factor in all of your requests and find the perfect ticket just for you, or tell you if it’s impossible.”
The system, built as a conversational interface connected to booking platforms, was designed to interpret these nuanced inputs and deliver tailored results, Benjamin said. It effectively replicated the role of human agents, but at a scale that traditional customer service models cannot match.
“This was built as a proof of concept (POC) and taken to the board of a train company, and within literally five minutes, they said, ‘Yeah, we want this. This is the future,’” Benjamin said.
Yet the enthusiasm at the board level did not resolve the core challenge. Moving from a successful demonstration to a fully operational system remains a complex and uncertain process.
“Do you just say that’s done and throw millions of passenger journeys at it? Will it scale? Well, who knows?” he said.
This reflects a broader pattern across industries. Many organizations have run AI POCs, but only a small proportion have successfully transitioned them into production environments, he said. The gap between experimentation and deployment remains significant.
There is also an unresolved tension around failed pilots. Some view them as a necessary cost of innovation, while others see them as wasted investment. As enterprises move beyond the initial wave of experimentation, the focus is shifting toward demonstrating measurable returns and operational reliability.
Fragmentation risks
One reason for this gap is the fragmented nature of AI adoption within organizations, Benjamin said. Companies are increasingly adopting point solutions, often introduced by startups that can now sell directly to enterprises more easily.
“There are lots of point solutions coming into the enterprise, largely brought in by startups,” Benjamin said. “This is a time that is pretty much unprecedented for startups arriving and selling directly into the enterprise.”
At the same time, employees are using AI tools outside official systems, creating what he described as “shadow AI.” These unmanaged applications, often running on personal devices, raise concerns about data security, compliance and consistency.
Beyond technology, organizational readiness is emerging as a critical factor. Companies are no longer navigating a one-off transformation, but a continuous cycle of change driven by rapid advances in AI.
“The pace is so fast that we need to structure organizations to be ready for constant change,” Benjamin said. “We may always be in that storming phase, and that can be quite uncomfortable.”
This requires closer alignment between technical teams and business leaders, as well as new operating models that can absorb ongoing disruption.
Governance frameworks must evolve in parallel. As AI systems take on more decision-making roles, organizations need clear accountability structures and mechanisms to explain outcomes.
“You need to know who is responsible for the whole system. You need transparency. You need to be able to explain what the system is doing and how it reached a decision,” he said.
He highlighted the importance of guardrails at both the output level and within system permissions, ensuring that AI systems operate within defined boundaries.
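The two guardrail layers he mentions can be sketched minimally. The tool whitelist and the blocked-term filter below are hypothetical names invented for illustration; the point is only that one check constrains what the system is permitted to do, and the other constrains what it is allowed to say.

```python
# Minimal sketch (hypothetical names and rules) of two guardrail layers:
# a permission check before the system acts, and an output check before
# a response reaches the user.

ALLOWED_TOOLS = {"search_timetable", "quote_fare"}    # assumed tool whitelist
BLOCKED_TERMS = {"internal_only", "staff_password"}   # assumed output filter

def permitted(tool_name: str) -> bool:
    """Permission guardrail: the system may only call whitelisted tools."""
    return tool_name in ALLOWED_TOOLS

def safe_output(text: str) -> bool:
    """Output guardrail: block responses containing disallowed content."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

assert permitted("quote_fare")
assert not permitted("issue_refund")      # not on the whitelist
assert safe_output("Your ticket costs £42.")
assert not safe_output("The staff_password is hunter2")
```

In production these checks would typically sit in middleware around the model, logged for the accountability and transparency requirements Benjamin describes.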
As enterprises push toward large-scale deployment, the challenge is no longer whether AI can deliver value, but whether organizations can manage the complexity that comes with it. The next phase of adoption will depend on aligning economics, operations and governance into a coherent strategy.