Banks Rethink AI Strategy as Data, Culture, and Sovereignty Take Center Stage
Senior banking executives say true AI adoption requires rebuilding institutions, confronting data fragmentation, and overcoming cultural resistance

Large financial institutions are beginning to accept that artificial intelligence will not deliver lasting advantage if it is merely layered onto legacy systems. Executives across Europe’s banking sector now frame AI not as an efficiency tool, but as a catalyst for institutional reinvention, forcing banks to rethink how they organize data, make decisions, and govern technology at scale.
Rather than experimenting at the margins, several banks are moving toward a ground-up redesign of processes, infrastructure, and operating models. That shift reflects mounting pressure from AI-native financial institutions, regulatory scrutiny, and the growing realization that fragmented data and entrenched cultures pose greater barriers than model performance.
Astitva Karunesh, data science and AI business lead at Lloyds Banking Group, said reinvention through AI means redesigning how a bank operates end-to-end, rather than adding AI tools onto existing processes. In his view, banks need to rethink their operating models, workflows, and decision-making structures as a single system, rather than deploying AI in isolated teams or functions.
“Unless we start to reimagine the financial organizations in that way, there is no way we can compete with AI-native financial institutions that are coming up,” Karunesh said.
For banks that have accumulated decades of systems and processes, that ambition immediately runs into a harder constraint: data.
“We realized that all this data was completely scattered. You don’t have normalized data, so you had to ingest, then curate, refine, and enhance before building algorithms on top of this,” said Romain Braud, head of digital assets at Arab Bank Switzerland.
Data before models
At an AI Rush panel in London earlier this year, moderated by Peter Lane, co‑founder and chief executive of Jacobi Asset Management, executives from banks across the UK, Switzerland, France, and Ireland discussed the topic "Reinventing Large Financial Institutions with AI: Opportunities and Challenges." Speakers described AI adoption as a long‑term transformation effort rather than a discrete technology rollout.
Braud said Arab Bank Switzerland’s early AI initiatives focused on pragmatic automation, including summarizing market information and boosting internal efficiency. As deployments expanded, however, the bank was forced to confront deep structural weaknesses in its data architecture.
“We realized that we had technical debt on multiple solutions,” Braud said. “So we had to step back and look at what we want to achieve.”
Without control over data quality, Braud warned, AI outputs quickly become unreliable, particularly in financial advisory contexts where poor data can lead to flawed decisions.
“You need to control the data input to make sure that you can have tailor‑made output that fits your needs and your clients’ needs,” he said. “So we had to rethink the whole infrastructure and build the data hub.”
The challenge extends beyond internal systems. Managing external data feeds, application programming interfaces, and licensing costs has become a strategic concern in its own right.
“The more you dig in, the more you realize the data‑procurement challenge, how you struggle with API inputs, what you can transform, and how much it costs,” Braud said. “It’s a real fuel of the economy of today and not tomorrow.”
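The ingest‑curate‑refine sequence Braud describes follows a familiar data pipeline pattern. The sketch below is a deliberately simplified, hypothetical illustration of that flow; the sources, field names, and exchange rates are invented and do not reflect Arab Bank Switzerland's actual systems.

```python
# A minimal, hypothetical sketch of an ingest -> curate -> refine flow feeding a
# central data hub. All sources, fields, and FX rates below are illustrative.
RAW_SOURCES = {
    "core_banking": [{"id": "A1", "ccy": "CHF", "amount": "1200.50"},
                     {"id": "A1", "ccy": "CHF", "amount": "1200.50"},   # duplicate record
                     {"ccy": "USD", "amount": "300"}],                  # missing mandatory id
    "market_feed":  [{"id": "M9", "ccy": "USD", "amount": "87.10"}],
}
FX_TO_EUR = {"CHF": 1.06, "USD": 0.92, "EUR": 1.0}                      # illustrative rates only

def ingest(sources):
    """Pull raw, unnormalized records from scattered systems into one list."""
    return [dict(rec, source=name) for name, recs in sources.items() for rec in recs]

def curate(records):
    """Drop duplicates and records that lack mandatory fields."""
    seen, clean = set(), []
    for rec in records:
        key = (rec.get("source"), rec.get("id"))
        if rec.get("id") and key not in seen:
            seen.add(key)
            clean.append(rec)
    return clean

def refine(records):
    """Normalize currencies and field names into a single schema for the hub."""
    return [{"id": r["id"], "source": r["source"],
             "amount_eur": round(float(r["amount"]) * FX_TO_EUR[r["ccy"]], 2)}
            for r in records]

if __name__ == "__main__":
    hub = refine(curate(ingest(RAW_SOURCES)))
    print(hub)   # two clean, normalized records that models can be built on
```

The point of the sketch is the ordering: only after ingestion, deduplication, and normalization does data become a reliable base for the algorithms Braud mentions.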
Culture and fear
While infrastructure dominates technical discussions, several panelists argued that organizational culture now represents the largest execution risk. Fear of job displacement, uncertainty over accountability, and deeply ingrained risk‑averse mindsets can slow AI adoption even where technology is available.
Nadia Filali, head of innovation and development at Caisse des Dépôts, said anxiety about AI’s impact on employment is widespread across European financial institutions.
“In France, 60% of people think that AI is dangerous for their job,” Filali said. “Half of the room is really afraid, and half of the room is really excited.”
Rather than forcing adoption, Filali said her organization is focusing on education and engagement—asking staff how AI might change their work and which tasks could be augmented rather than replaced.
“What we try to do is educate people,” she said. “We want to make them understand what generative AI is and where the risks are.”
That effort extends to governance and accountability. As AI systems take on more operational tasks, banks must decide who remains responsible for outcomes and how decisions are audited.
“If people are doing tasks with AI, who is responsible for the task?” Filali said. “We work on governance and are developing a doctrine on that.”
Human in the loop
Across the panel, speakers agreed that fully autonomous decision‑making remains unrealistic in regulated financial environments. While AI can accelerate analysis and automate routine tasks, human judgment remains the final safeguard.
Braud said this was particularly critical when AI systems generate insights for clients rather than internal users.
“What you need to make sure of is that you always have a last line of defense for the source of truth, with people who have the knowledge to control the information that will be released,” Braud said. “You need to make sure that the AI is not hallucinating.”
Karunesh echoed that view, comparing AI‑assisted decision‑making to semi‑autonomous driving systems and stressing that strategic judgment still rests with human decision‑makers.
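In code terms, the "last line of defense" the panelists describe is essentially a review gate: nothing a model drafts reaches a client until a named human signs it off. The following sketch is a hypothetical illustration of that pattern, assuming a simple in‑memory workflow; names, statuses, and the model label are invented and not drawn from any bank's systems.

```python
# A minimal, hypothetical human-in-the-loop review gate: AI-drafted client
# content is held until a named reviewer approves it. All names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Draft:
    content: str
    model: str
    status: str = "pending_review"          # never released straight from the model
    reviewer: str | None = None
    reviewed_at: datetime | None = None

def submit_draft(content: str, model: str) -> Draft:
    """AI output enters the queue; it cannot reach clients in this state."""
    return Draft(content=content, model=model)

def review(draft: Draft, reviewer: str, approved: bool, note: str = "") -> Draft:
    """A human with domain knowledge approves or rejects before release."""
    draft.reviewer = reviewer
    draft.reviewed_at = datetime.now(timezone.utc)
    draft.status = "approved" if approved else f"rejected: {note}"
    return draft

def release(draft: Draft) -> str:
    """Only reviewer-approved content is ever sent to a client."""
    if draft.status != "approved":
        raise PermissionError("Draft has not passed human review")
    return draft.content

if __name__ == "__main__":
    d = submit_draft("Market summary for client X ...", model="example-llm")
    d = review(d, reviewer="advisor.jane", approved=True)
    print(release(d))   # raises PermissionError if the human gate is skipped
```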
Beyond operations and culture, the panel highlighted growing concern over data sovereignty and geopolitical exposure. As banks rely more heavily on external cloud platforms and foundation models, control over sensitive financial data becomes a strategic issue.
“Owning your data is your future,” Braud said. “If you deploy your data somewhere else, when you don’t have any control, and someone can dig into it, then your business and your clients are at risk.”
Several speakers noted that while Europe has strong financial innovation and regulatory frameworks, it lags the United States in large‑scale AI model development—raising questions about dependency and long‑term resilience.
From IT project to boardroom agenda
One clear shift, panelists agreed, is where AI discussions now take place. What once sat within technology departments has moved decisively into executive committees and boardrooms.
“We need to think of AI as a strategic lever, and not just another IT initiative,” Karunesh said. “It needs to be a boardroom discussion, not just a discussion that happens within different pockets of the business.”
For banks at an earlier stage, such as AIB, the immediate priority is to build awareness, experiment, and simplify processes before automating them.
“If your process is broken, AI will give you a broken solution,” said Enrica Laprocina, UK AI lead at AIB.
As banks navigate this transition, the panel’s message was consistent: successful AI adoption will depend less on chasing the latest models and more on rebuilding foundations—data, culture, governance, and strategy—fit for an AI-driven era.
Several speakers also cautioned against underestimating the pace at which competitive pressure could accelerate these changes.
As AI capabilities mature and costs decline, institutions that delay foundational work may be forced into rushed deployments, increasing operational and regulatory risk.
In that sense, panelists suggested, the real advantage lies not in early experimentation, but in disciplined preparation that allows banks to scale AI responsibly when the inflection point arrives.