Culture, Not Models, Is Blocking AI From Scaling Inside Companies
As enterprises push AI into production, adoption and organizational change are proving more decisive than model performance

Enterprise artificial intelligence (AI) has entered a more uncomfortable phase, one defined less by experimentation and more by accountability.
After two years of pilots and proof-of-concept deployments, companies are no longer rewarded for novelty. Boards and executives now want evidence that AI delivers operational value at scale, integrates into existing workflows, and produces outcomes that can be trusted and repeated.
That shift is forcing a recalibration of priorities. Rather than debating which model performs best, enterprises are confronting harder questions around adoption, explainability, and cultural readiness. The technology is advancing rapidly, but organizational systems are struggling to keep pace, turning AI deployment into a management challenge as much as a technical one.
“What happens is actually the adoption and the mindset turns out to be probably one of the most difficult aspects, because the people who are using AI now are probably using it in their home life already. But then how do you bring along the other people on that journey as well?” said Sammy Newman, AI business partner at Vinci Construction.
The comment captures a growing realization across industries.
The limiting factor in enterprise AI is no longer access to models or computing power, but the ability to align people, processes, and incentives around new ways of working.
Benjamin Hickey, director of portfolio management and AI networking at IBM, framed the challenge bluntly.
“If you’ve really got whatever great idea you have, and it doesn’t get adopted, you’ve got to go back and ask yourself the question, how great was it?” he said.
Adoption First
These themes emerged during a panel discussion titled “Future Businesses: AI Real Use Cases Emerging from the Noise” at the AI Summit London earlier this year.
The session was moderated by Andrew Grill, chief futurist at The Actionable Futurist, and brought together executives working directly on AI deployment across construction, enterprise software, marketing, and blockchain.
Rather than showcasing futuristic demos, the panel focused on what happens after pilots end. Speakers described the friction that appears when AI systems are pushed into day-to-day operations, particularly in large organizations with established processes and risk controls.
For Newman, whose role sits at the intersection of technology and frontline construction teams, the most significant obstacles often appear after the technology is ready. Compliance-heavy industries such as construction generate vast amounts of documentation, much of it manually reviewed. AI can accelerate these processes, but only if engineers trust the outputs and understand how to use them.
“An industry average that surprises a lot of people is that around 10% of a project’s value is a cost due to defects,” Newman said. “Quality management processes are often inefficient and cumbersome, and that’s where AI can help, but only if people actually use it.”
He said Vinci Construction’s approach has been to involve domain experts directly in building AI tools, turning them into early advocates rather than reluctant users. That approach also improves the company’s ability to evaluate startups pitching AI solutions.
“We’re now able to ask deeper technical questions instead of taking everything at face value,” he said.
Where Value Shifts
As enterprises move beyond pilots, the center of gravity in AI investment is shifting.
Hickey said the industry has moved past an early assumption that most value would accrue to large language models (LLMs) themselves.
“What we have with LLMs right now is great creative thinking,” he said. “But what we don’t have is strong analytical capability, particularly around time series and operational data.”
That limitation is pushing companies to focus on applications that combine generative models with domain-specific analytics, telemetry, and context.
“The value is no longer accruing to the models themselves,” he said.
He added that competitive advantage is increasingly being created at the application layer, where enterprises embed context, governance, and operational guardrails into AI systems.
This shift has implications for enterprise strategy. Rather than chasing the latest model release, organizations are investing in integration, workflow redesign, and data pipelines that allow AI systems to operate reliably within specific business constraints.
Trust and Risk
As AI systems become embedded in decision-making, trust has emerged as a central concern. Hickey said enterprises are grappling with the reality that modern AI systems are non-deterministic: the same input can yield different outputs.
Hickey said explainability means being able to understand how an AI system arrives at an answer and whether that output can be trusted, particularly in cases where hallucination risk is present.
One practical response has been the adoption of retrieval-augmented generation (RAG), which grounds AI outputs in up-to-date, domain-specific information. By constraining what models can draw from, organizations can reduce hallucinations and produce more consistent behavior.
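The pattern can be illustrated with a minimal sketch. All names here are hypothetical: a toy keyword retriever stands in for a production vector store, and the model call is a stub that only shows how retrieved context constrains the prompt.

```python
import re

# Toy domain corpus; in practice this would be indexed in a vector store.
DOCUMENTS = [
    "Defects typically cost around 10% of a construction project's value.",
    "Quality inspections must be documented and signed off by a site engineer.",
    "Concrete curing times vary with temperature and mix design.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase and strip punctuation so 'project?' matches 'project'."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Stub for the generation step: the model is told to answer
    only from the retrieved context, which is what reduces hallucination."""
    return ("Answer ONLY from the context below.\n\n"
            + "\n".join(context)
            + f"\n\nQuestion: {query}")

prompt = build_prompt(
    "What do defects cost on a project?",
    retrieve("What do defects cost on a project?", DOCUMENTS, k=1),
)
print(prompt)
```

In a real deployment the prompt would be sent to an LLM; the design point is that the model's working set is limited to retrieved, current documents rather than whatever it memorized at training time.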
The emphasis on explainability is reinforced by regulation. Panelists pointed to the EU AI Act as an early blueprint for how governments expect companies to manage risk, transparency, and accountability in AI deployments. Even for firms outside the EU, the regulatory direction is influencing internal governance standards.
Staci Warden, chief executive of the Algorand Foundation, said trust challenges extend beyond enterprise workflows into society more broadly.
“The doomsday scenario is that you don’t know what’s real and what’s not,” she said.
Warden said blockchain technology can help establish integrity rather than truth: it can prove a record has not been tampered with, not that the record was accurate when created.
“If something happened at a certain time and it’s on the blockchain, it cannot be changed. If it’s altered later, there’s a record of that alteration,” she said.
Agents in Practice
Much of the discussion also touched on so-called agentic AI, systems designed to pursue goals autonomously within defined constraints. While the term has become fashionable, panelists stressed that most enterprises are still at an early stage.
Hickey described agents as systems in which code interacts with models via function calls, retrieval mechanisms, and task-specific instructions.
“The challenge is making sure those agents stay within scope,” he said. “You don’t want an infrastructure agent answering questions about baking a cake.”
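That scoping guardrail can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the LLM and its function-calling layer are stubbed out, and the point is the check that refuses requests outside an agent's declared domain.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ScopedAgent:
    """An agent that only acts within its declared topic boundary."""
    name: str
    allowed_topics: set[str]
    # Tools the agent may invoke; in a real system an LLM would pick one
    # via function calling rather than this keyword match.
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def handle(self, request: str) -> str:
        words = set(request.lower().split())
        # Guardrail: refuse anything outside the agent's scope up front.
        if not words & self.allowed_topics:
            return f"[{self.name}] out of scope: refusing request"
        for topic, tool in self.tools.items():
            if topic in words:
                return tool(request)
        return f"[{self.name}] in scope, but no tool matched"

infra_agent = ScopedAgent(
    name="infra",
    allowed_topics={"server", "disk", "network", "latency"},
    tools={"disk": lambda req: "[infra] disk usage report generated"},
)

print(infra_agent.handle("check disk usage"))       # routed to the disk tool
print(infra_agent.handle("how do I bake a cake?"))  # refused as out of scope
```

The scope check here is a crude keyword filter for brevity; production systems typically enforce boundaries through system prompts, tool allowlists, and policy layers, but the principle is the same: the infrastructure agent never answers the cake question.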
For Newman, the priority is pragmatic value.
“You can get tons of value without necessarily going full-fledged agentic,” he said.
Newman said simpler RAG approaches can already deliver measurable business value without requiring fully autonomous systems.
Warden took a longer-term view, arguing that truly autonomous agents will eventually need native digital money to operate independently.
“To give agents real agency, they need wallets,” she said, adding that such systems are unlikely to run on traditional banking rails.
Culture Matters
From a marketing and commercial perspective, Miguel Avalos, senior director of global marketing at LinkedIn, said AI’s most immediate impact has been efficiency. Translation, content creation, sales analysis, and customer insights have all accelerated.
“Marketing has been highly affected because it’s about speed,” he said. “But people were reluctant to adopt because they didn’t know how it worked, and they felt they lost control.”
Avalos said successful teams focus on mapping use cases by impact and ease of adoption, rather than chasing every new capability. Over time, evidence of performance helps overcome resistance, but early engagement is critical.
The panelists largely agreed that cultural change, not technical sophistication, will determine which organizations benefit most from AI. Training, internal champions, and clear communication about limits and safeguards all play a role.
As enterprises push AI deeper into operations, the technology itself is no longer the most challenging part. The real work lies in redesigning processes, aligning incentives, and building trust in systems that behave in ways unlike anything organizations have deployed before.


