Enterprise AI stalls as fragmented data, silos and legacy systems collide
Executives say poor data alignment and weak use cases derail AI value while governance and culture shape scalable deployment

Data fragmentation, legacy systems and siloed architectures are emerging as the biggest barriers to enterprise artificial intelligence (AI) adoption. Organizations may hold vast amounts of data, yet much of it remains disconnected, inconsistent and difficult to interpret, weakening the foundation required for reliable AI systems.
The issue is not scale but coherence. Disparate datasets—spread across formats, systems and ownership boundaries—often fail to align with real business outcomes. Without that alignment, even advanced AI models struggle to generate meaningful value.
This challenge is particularly acute in large enterprises, where data ownership is fragmented across multiple business units with competing priorities.
“A lot of it is how, in the legacy of the organization, the data has previously been managed, and where all of it is. So fragmentation, silos, incompleteness—if you couple that with the need for a semantic layer and to apply it to the right outcome, you get a bit of a perfect storm,” said James Morgan, chief data officer at The Crown Estate, a property manager for the British monarch.
“It’s not just data. It’s structured, unstructured, semi-structured, geospatial and physical. You have to make sure that’s all working together and can be brought together and made sense of,” he said. “You’ve got to do something very powerful with it to be worth the effort.”
Enterprise data environments have evolved over decades, shaped by legacy IT decisions rather than a unified strategy. Morgan said this has resulted in fragmented datasets that are often incomplete or poorly aligned.
Integration becomes not just a technical exercise but a coordination challenge across systems, teams and governance structures.
He said the absence of a consistent semantic layer makes it difficult for organizations to interpret data accurately. Without this foundation, AI systems may produce outputs that are technically correct but contextually irrelevant, limiting their usefulness in decision-making.
As a result, many AI initiatives remain trapped at the pilot stage and fail to scale into production.
Use case drives ROI
The discussion took place during a panel at the AI and Business Innovation Summit in London, organized by Economist Impact. Moderated by Dexter Thillien, lead analyst for technology and telecoms at the Economist Intelligence Unit, the panel focused on the challenges of building AI-ready data and aligning data strategy with AI adoption.
The session brought together senior data leaders from global enterprises. It reflected a broader shift toward treating data as a strategic asset rather than a technical byproduct.
Speakers included Olga Smirnova, head of business data global products at AXA, a global insurance group; Jessica Lachs, chief analytics officer at DoorDash, a US-based food delivery platform; Emily Xue, head of enterprise AI at Scale AI, a provider of AI training data and infrastructure; and Morgan from The Crown Estate. The panel explored how business priorities, data architecture and model design intersect in enterprise AI.
“The challenge is in the use case—what are you going to use the data for? All of this is not relevant unless we really know what business problems we are trying to resolve,” Smirnova said.
“How can a model learn from the data? A model would not learn in the same way that a human would. How would success or patterns make sense? That depends entirely on what you are trying to solve,” she said.
She said organizations often focus on technical challenges such as data quality or model performance before defining the underlying business objective. This can lead to misaligned investments and limited returns.
She added that data preparation, model selection and evaluation criteria should be driven by the intended use case. This helps ensure AI investments are aligned with measurable business outcomes.
Different applications require different data structures and levels of precision. Without clarity on the problem being addressed, it becomes difficult to determine what data is relevant or how it should be processed.
Xue highlighted a mismatch between enterprise data and the data used to train AI models. She said most enterprise data comes from a decade of digital transformation, while model training data has largely been sourced from the internet, human feedback, and, more recently, code and mathematical datasets.
She said this creates a structural gap between what models learn and the data they encounter in enterprise environments. Privacy and regulatory constraints limit access to enterprise data for training, thereby reducing the model's exposure to domain-specific information.
As a result, she said models may struggle with context and nuance in real-world business scenarios, a constraint that becomes more pronounced as enterprises deploy AI in mission-critical workflows.
Governance is part of the solution
“For me, governance is not the problem—it is part of the solution,” Smirnova said.
“Governance should lie with the business, with the people who manipulate the data. No one knows better how to manage data quality than the people who actually use it,” she said.
She said governance should not be treated as a centralized function or an afterthought. Instead, it should be embedded within business processes, with employees acting as data stewards responsible for data quality and integrity.
This decentralized approach can improve accountability and responsiveness across the organization.
This requires a shift in organizational culture. Employees must be trained and incentivized to manage data effectively, ensuring that governance becomes a driver of reliability rather than a constraint on innovation. Organizations that fail to build this culture risk undermining their AI investments.
“The most important thing I found in organizations is the relationship with the person owning the outcome,” Morgan said. “You have to have business process ownership, the tech, the tools, and top-down support. That’s when you really hit the sweet spot.”
“It’s about getting people to start using it—creating an environment where exploration and experimentation are encouraged,” Lachs said.
“We created dedicated time for teams to experiment, fail, and try again in a safe space,” she said. “AI is like a summer intern—overconfident and sometimes wrong. You need to review how it gets to its answers.”
Morgan said close collaboration between technical teams and business stakeholders is essential to ensure AI systems produce relevant and accurate results.
Lachs added that adoption depends on creating an environment where employees feel comfortable experimenting with AI tools and learning from failures. Over time, this can accelerate innovation and improve organizational agility.
Autonomy raises risk
As organizations move toward more advanced systems, including agentic AI, the importance of data quality becomes even more pronounced.
“Agentic AI amplifies the problems you already have,” Lachs said. “If data quality isn’t there, you don’t have a human in the loop anymore to catch those errors.”
“You’ve got to be really sure what you’re feeding in and what the output will be,” Morgan said. “If you don’t know where the information comes from, you can’t trust it.”
“It depends on your risk appetite and how critical those decisions are,” Smirnova said, adding that in regulated industries, human oversight is likely to remain necessary.
As enterprises push toward greater automation, the ability to balance autonomy with control will define how effectively AI systems can be deployed at scale. In the near term, hybrid models combining human oversight with automated decision-making are likely to dominate enterprise deployments.


