AI accountability rules tighten as boards face agentic decision shift
Executives must retain liability while redesigning governance, oversight and organizational structures as intelligent systems scale beyond human control
Corporate decision-making is being reshaped at a pace that few leadership teams are structurally equipped to manage. As artificial intelligence (AI) systems take on more operational tasks, authority is increasingly distributed across algorithms that act faster, scale further and operate beyond traditional human oversight.
This shift is exposing a fundamental tension at the heart of enterprise governance. While machines can now execute decisions with unprecedented speed and complexity, accountability remains firmly anchored to individuals, forcing executives to reconcile automated action with human responsibility.
“AI is not accountable for anything. It’s fundamentally the individual who signs off who’s accountable. That accountability can never be minimized by saying it’s through AI,” said Niresh Rajah, chief data and AI officer and fellow at Imperial College London.
He added that in financial services, the ultimate accountability to regulators and customers sits with the individual responsible. This principle applies across regulated industries, where decision-making authority can be augmented but not transferred.
In professional services, he noted, responsibility still lies with the partner signing off on audits or legal work, regardless of how much of the underlying process is handled by AI.
The implication is a widening structural gap. Organizations are delegating tasks to systems that can operate autonomously, yet the legal and regulatory frameworks governing those decisions remain tied to human actors. Rajah warned that this disconnect could become acute as AI systems take on more consequential roles.
“We’re going to see potentially cataclysmic events caused by generative AI making decisions instead of humans. Then it comes down to accountability.”
At the same time, the increasing complexity of business environments is driving AI adoption.
Hanna Hennig, chief information officer at Siemens, said many enterprise processes—such as financial forecasting, predictive maintenance and dynamic pricing—have already shifted toward machine-led execution, albeit with human oversight.
She said the rationale is straightforward: the volume of variables and data points involved in modern operations has exceeded what human decision-makers can realistically process. AI systems are therefore being deployed to manage optimization problems that are no longer tractable through manual analysis.
However, she emphasized that this does not equate to full autonomy. Organizations must still define the boundaries within which algorithms operate, including what decisions are permitted and what data is used to train them. Human oversight remains embedded at the top of the decision chain, even as operational control is distributed across systems.
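The boundary-setting Hennig describes can be pictured as a policy layer that sits between an AI system and execution: the system proposes actions, but only those within a defined envelope run autonomously; everything else is escalated to a human. The sketch below is a hypothetical illustration of that idea, not any vendor's implementation; the action names and threshold are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch: a policy layer that constrains which decisions an
# AI system may execute autonomously, escalating everything else.

@dataclass
class DecisionPolicy:
    permitted_actions: set   # decision types the system may take alone
    max_value: float         # monetary threshold for autonomous action

    def authorize(self, action: str, value: float) -> str:
        """Return 'execute' if within bounds, else 'escalate' to a human."""
        if action in self.permitted_actions and value <= self.max_value:
            return "execute"
        return "escalate"

# Invented example values: the system may reprice or reorder up to $10,000.
policy = DecisionPolicy(permitted_actions={"reprice", "reorder"},
                        max_value=10_000)

print(policy.authorize("reorder", 2_500))    # within bounds -> execute
print(policy.authorize("reprice", 50_000))   # over threshold -> escalate
print(policy.authorize("terminate", 100))    # not permitted -> escalate
```

The point of the pattern is the one Hennig makes: operational control can be distributed to the machine, but the envelope itself is defined and owned by humans.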
Oversight gap
The discussion took place at the AI and Business Innovation Summit in London on March 25, organized by Economist Impact. The session, moderated by Jonathan Birdwell, global head of policy and insights at Economist Impact, focused on how leadership models must evolve as AI agents take on greater responsibility within organizations.
One of the most immediate challenges identified by the panel was the collapse of traditional oversight mechanisms. Gaharwar Milind, principal AI scientist at Mercedes-Benz, said the issue is not that executives do not understand delegation, but that they are unprepared for the scale and speed at which AI systems operate.
“They don’t know how to trust the quality of what they get back, how to interpret it, or how to handle the scale of output that has been unlocked.”
He said the shift is stark. Tasks that once took days to complete can now be executed in seconds, and multiple tasks can be run in parallel without additional resource constraints.
“The human used to come back after two days. AI comes back in 10 seconds. You can delegate 10 things in parallel and it will still come back in 10 seconds.”
David Pool, data and AI development director at QA Ltd, a UK-based tech training and talent development company, said this acceleration is creating systems characterized by significant information asymmetry, where AI models process and act on volumes of data that humans cannot fully track.
“These models learn pretty much at the speed of light. We’re building systems of information asymmetry,” he said.
The result is a growing risk that oversight becomes superficial. Leaders may retain formal authority, but lack the capacity to meaningfully review or challenge decisions made by AI systems operating at machine speed.
This dynamic is forcing organizations to rethink their structural design. Milind said traditional hierarchies are built around human limitations, particularly the number of direct reports a manager can effectively oversee.
“An organization is designed as a tree or a pyramid because humans can handle only a limited number of direct reports. But AI can have hundreds of direct reports talking to it constantly in near real time.”
Amitabh Apte, chief digital and technology officer at Barilla, said the more relevant question is not what AI can do, but how organizations should be structured in response.
“The conversation is not about what technology or AI agents can do. It is about the target state organization architecture for the future.”
He said leaders must determine where AI should be embedded within workflows, whether in operational processes, decision support or higher-level planning functions, rather than simply layering it onto existing systems.
Leadership shift
As AI systems become more capable, leadership roles are evolving from direct decision-making toward designing the frameworks within which decisions are made. Apte said this requires a focus on context and judgment rather than pure optimization.
“The function of leadership is sense-making, understanding how humans work, how factories operate and what the geopolitical risks are,” he said.
Milind said this shift is structural and unavoidable.
“The question is how we design the conditions under which decisions are made. This is a fundamentally different skill.”
He added that boards are likely to become more focused on governance architecture than operational execution.
“Boards will be reduced to designing how decisions are taken rather than taking many of the decisions themselves.”
The panel also highlighted the risks associated with poorly designed systems. Rajah said unreliable data and insufficient testing can lead to significant errors, particularly in high-stakes environments.
“If we don’t have the right data sets and test systems with human intervention before deployment, we see significant hallucinations.”
Pool cited real-world examples of unintended consequences, including instances in which AI agents triggered actions that exposed sensitive internal data.
“I’ve seen examples where an agent built an API and published the organizational payroll structure to everyone.”
At the governance level, speakers raised concerns about the capability gap within boards. Pool said that many senior leaders lack direct experience with AI systems, which limits their ability to challenge technical decisions.
“You would be terrified at how few board-level individuals have actually deployed these systems personally.”
Rajah warned that boards must avoid reacting impulsively to new technologies.
“Boards must not rush just because they’ve seen a compelling AI application. That is the most dangerous thing they can do.”
In parallel, several speakers said organizations are beginning to formalize governance through model ownership frameworks, where senior executives are assigned responsibility for specific AI systems, including their outputs, guardrails and risk thresholds. This approach is intended to create clearer lines of accountability as AI usage scales, while also embedding oversight into operational structures rather than treating it as an afterthought.
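A model ownership framework of the kind described could, in its simplest form, be a register that refuses to record any AI system without a named accountable executive, alongside its guardrails and risk threshold. The sketch below is an illustrative assumption of how such a register might look; the class names, fields, and example entries are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a model-ownership register: every AI system must
# carry a named accountable owner, its guardrails, and a risk threshold.

@dataclass
class ModelRecord:
    name: str
    owner: str                 # accountable senior executive
    risk_threshold: str        # e.g. "low", "medium", "high"
    guardrails: list = field(default_factory=list)

class ModelRegistry:
    def __init__(self):
        self._models = {}

    def register(self, record: ModelRecord):
        # Enforce the accountability principle: no owner, no registration.
        if not record.owner:
            raise ValueError("every model must have an accountable owner")
        self._models[record.name] = record

    def owner_of(self, model_name: str) -> str:
        return self._models[model_name].owner

registry = ModelRegistry()
registry.register(ModelRecord(
    name="credit-scoring-v2",          # invented example system
    owner="Chief Risk Officer",
    risk_threshold="high",
    guardrails=["human review above $50k", "quarterly bias audit"],
))
print(registry.owner_of("credit-scoring-v2"))  # Chief Risk Officer
```

Encoding ownership as a hard precondition, rather than documentation, is what turns the framework into the "clearer lines of accountability" the speakers describe.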
As organizations navigate the transition to agentic systems, the challenge for leadership is to balance speed with control, ensuring that governance frameworks evolve in step with technological capability while preserving accountability at the highest levels.