AI governance becomes board-level risk as enterprises deploy AI agents
As autonomous systems scale across enterprises, boards face rising accountability, regulatory exposure, and trust challenges tied to AI deployment
As artificial intelligence systems move from experimental tools to core operational infrastructure, governance is becoming a board-level concern rather than a back-office compliance task. The rise of autonomous and semi-autonomous AI agents is forcing companies to rethink how they manage risk, accountability, and trust at enterprise scale.
For many organizations, the challenge is no longer whether to adopt AI, but how to ensure increasingly complex systems operate within acceptable legal, ethical, and security boundaries. Regulatory pressure is intensifying as deployment cycles accelerate, leaving boards exposed to new operational and reputational risks if governance frameworks fail to keep pace.
“Humans in the loop are even more pertinent than they were yesterday, because now we have hallucinations in the model,” Khushboo Kashyap, senior manager in the information security team at Vanta, told TechJourna.uk in an interview in London. “We have so many things that we have to deal with when AI models run on their own.”
Kashyap began her career as a software engineer before finding her professional focus in cybersecurity. She spent six years advising PwC's Fortune 500 technology and financial services clients across the United States, Canada, and Australia on cloud security, GRC (governance, risk, and compliance) programs, enterprise risk management, and data governance.
She later joined Rubrik, where she built teams and stood up six GRC functions spanning security governance, risk management, security culture, data governance, business resiliency, and supply chain risk management.
Kashyap said traditional governance and compliance approaches often rely on periodic assessments and evidence collection, a model that breaks down as systems change more frequently and operate continuously.
She said this shift is driving a broader change in how boards view governance. Instead of asking whether a company is compliant at a specific point in time, directors increasingly seek assurance that risks are continuously monitored and that controls remain effective as systems evolve.
She added that customers are asking for greater transparency into how their data is used. At the same time, regulators increasingly expect organizations to explain how controls operate in practice, not simply document them.
Agents reshape oversight
At the heart of the governance challenge is the growing use of AI agents that automate tasks, coordinate workflows, and generate outputs across organizations. While these systems can significantly boost productivity, Kashyap said they also expand the risk surface in ways that are not always obvious.
That risk surface widens, she said, as systems begin acting across workflows with less direct human involvement, and organizations need clearer guardrails around data access, decision boundaries, and human oversight as autonomy increases.
She said organizations often underestimate how quickly agent-based systems can multiply once deployed at scale. A single AI-enabled workflow may involve multiple models, third-party vendors, and data sources, each with distinct risk profiles.
The growing complexity of AI systems, she said, is pushing governance issues to the board level: executives need clearer visibility into where AI is deployed, what data it touches, and how incidents would be handled.
Founded in 2018, Vanta is an AI-powered trust management platform that helps companies automate security compliance and manage risk as they scale. The company is headquartered in San Francisco and has expanded internationally, with major hubs in London and Dublin supporting its growing customer base across Europe, the Middle East, and Africa, as well as offices in New York and Sydney.
Kashyap said Vanta’s differentiation is not based on adding isolated AI features, but on embedding AI agents across a unified GRC platform. She said many newer AI firms address narrow problems in areas such as questionnaires or policy reviews, while large consultancies tend to rely on people-intensive service models.
“Their tools and technologies are more around how to make the lives of the consultants easier,” Kashyap said. “It’s not about providing them services and having a revenue stream attached with adding more people to the problem.”
She added that this approach reflects a deliberate contrast with traditional advisory firms, whose technology investments are often designed to support internal consultants rather than customers. By prioritizing continuous monitoring, automation, and integrated risk management, she said Vanta aims to compete on execution speed, transparency, and operational scale rather than headcount.
Standards and regulations
The regulatory environment is adding urgency to those concerns. In Europe, the EU AI Act introduces a risk-based framework that places new obligations on companies developing or deploying AI systems. While the United Kingdom is pursuing its own approach, the overall direction of travel is clear.
Kashyap noted that emerging regulatory frameworks, including the EU AI Act and the UK General Data Protection Regulation (UK GDPR), are raising expectations that organizations can demonstrate responsible AI use. In her view, this places greater emphasis on auditable, explainable, and adaptable governance structures.
ISO/IEC 42001, the international standard for AI management systems, is increasingly seen as a practical foundation for enterprise AI governance.
Kashyap pointed out that aligning internal governance with recognized standards can help organizations prepare for future regulation while providing a common language for boards and regulators.
She explained that the standard encourages organizations to approach AI risk holistically, covering governance, risk assessment, controls, and continuous improvement. It is also proving useful, she added, as a reference point for board-level discussions about AI risk and accountability.
Kashyap emphasized that, as AI systems assume more complex roles, governance functions are becoming more dependent on human judgment. AI can accelerate analysis and surface insights, but it cannot replace accountability or context.
Boards are therefore seeking clearer, more integrated explanations of AI risk, including impact, exposure, and mitigation, rather than fragmented updates across security, legal, and compliance teams. This shift, she said, is pushing AI governance firmly onto board agendas, with directors looking for joined-up views that connect governance, third-party exposure, privacy obligations, and operational resilience.
She warned that organizations treating AI governance as a narrow compliance exercise risk being caught off guard by regulators or customer demands, as governance increasingly becomes about demonstrating control, transparency, and trust.
She also observed that competition in AI governance is intensifying, driven by fast-moving AI startups and by traditional consulting and accounting firms investing heavily in automation and advisory services.
Looking ahead
Kashyap stressed that the next phase of AI adoption will further test existing governance models as agent-based systems become more autonomous and interconnected. She argued that companies that invest early in robust governance frameworks will be better positioned to scale AI responsibly.
She added that platforms such as Vanta are increasingly positioned to support that shift by helping organizations operationalize AI governance, rather than treating it as a theoretical or compliance-only exercise.
That approach, she noted, is closely tied to the company's origins and leadership. Vanta was co-founded by Christina Cacioppo, its chief executive officer, after she experienced firsthand the manual, time-consuming nature of security compliance while leading Dropbox Paper.
Cacioppo’s experience, Kashyap said, has shaped how Vanta approaches trust, automation, and governance design, with an emphasis on building these capabilities directly into core workflows rather than layering tools on top of legacy compliance processes.
She also pointed to diversity in leadership as an essential factor in how governance decisions are made. Working with women in senior roles, she said, has been a meaningful influence on internal culture and discussion, particularly as AI governance moves into the boardroom.
As AI systems scale across industries and societies, Kashyap said that diversity of thought is becoming increasingly important in determining how responsibly those systems are built, deployed, and governed.