Employee-built AI agents force a rethink of enterprise governance
As enterprises scale autonomous systems, employee-led deployment is reshaping control, risk management, and internal AI governance models
Enterprise AI adoption is no longer being driven by central IT teams or carefully sequenced pilot programs. Instead, employees are building and deploying their own AI agents, quietly reshaping how large organizations govern, control, and scale artificial intelligence across everyday work.
That shift is forcing companies to rethink how they introduce, govern, and measure AI systems. Rather than treating agents as another layer of enterprise software, early adopters increasingly see agentic AI as an operating model that spreads horizontally across teams and challenges traditional top-down control.
“There’s no way we can manage this top down. It has to be a process of collective discovery,” said Euro Beinat, global head of AI and executive vice president at Prosus Group. “You need to put these tools in the hands of everybody, with the proper security, safety, and privacy, but this has to come from the bottom.”
Employees are best positioned to identify where agents create value because they understand their own workflows and constraints.
“People know their own workflows and the jobs to be done. I can’t come in and do it for you. You have to do it yourself,” Beinat said.
The result is a model in which AI adoption accelerates through experimentation rather than prescription. Instead of rolling out narrowly defined use cases, companies are enabling employees to assemble agents that connect to internal systems, automate repetitive tasks, and surface insights that would otherwise go unnoticed.
According to Beinat, the bottom-up momentum has already reached a meaningful scale.
“At this moment, we have about 4,000 of these agents roaming around our group,” he said. “About 10% are created by us because they are complex, and 90% are created by people.”
That distribution highlights how quickly agent creation shifts away from centralized teams once the underlying tools are made accessible. Employees are not only using agents but also building, sharing, and adapting them as new needs emerge.
To support that dynamic, Prosus has created lightweight internal mechanisms for discovery and reuse. Employees can publish agents or share build instructions, allowing others to replicate workflows using their own permissions and data access.
In practice, this has created a quiet internal marketplace where successful agent designs spread organically across teams, avoiding the bottlenecks that often emerge when automation requests must pass through a central queue.
Event context
The comments were made during Momentum AI 2025, a two-day enterprise technology conference organized by Reuters in London. The discussion, titled “How will Agentic AI shake out in the enterprise?”, was moderated by Atif Rafiq, chief executive officer and co-founder of Ritual.
Beinat, who sits on the Prosus Group management team, oversees AI strategy across the global consumer internet group. His remit spans both the deployment of AI systems that power consumer-facing platforms and the internal transformation required to make the organization “AI first.”
Founded in 1997 as Myriad International Holdings, Prosus later emerged as a standalone group backed by Naspers and established its global headquarters in Amsterdam in 2019.
The company operates as a global consumer internet group and technology investor, with a portfolio spanning ecommerce, food delivery, payments, classifieds, and education, serving more than two billion customers worldwide across dozens of businesses and markets.
At that level of organizational and geographic complexity, centralized AI control becomes impractical, Beinat said.
From tools to agents
Early internal experiments at Prosus focused on large language models (LLMs) as productivity tools. Employees were given access to conversational interfaces and asked to explore how the technology might support their work.
That approach quickly revealed limitations.
“The system didn’t just need to give answers anymore. It needed to actually do things,” Beinat said.
Agentic systems differ from simple chat-based tools because they can break tasks into steps, reason over them, use external tools, retrieve internal data, and reflect on outcomes before responding.
“Agentic means that you ask something, it breaks the task into components, reasons about it, uses tools, gets data from your systems, reflects on the result, and then gives you the answer you want,” Beinat said.
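The loop Beinat describes can be sketched in a few lines of Python. This is a hypothetical illustration only, not Prosus's implementation: the `plan`, `TOOLS`, and `reflect` names are stand-ins for what would, in a real system, be LLM calls and connectors to internal data.

```python
# Minimal agentic loop: decompose a request, run each step through a tool,
# reflect on the outcome, and only then return an answer.
# Every name here is an illustrative stand-in, not a real framework.

def plan(request: str) -> list[str]:
    """Break the request into ordered sub-tasks (an LLM would do this)."""
    return ["fetch_orders", "count_late"]

TOOLS = {
    # Each "tool" is a stub standing in for a call into an internal system.
    "fetch_orders": lambda ctx: {
        **ctx,
        "orders": [{"late": True}, {"late": False}, {"late": True}],
    },
    "count_late": lambda ctx: {
        **ctx,
        "answer": sum(o["late"] for o in ctx["orders"]),
    },
}

def reflect(ctx: dict) -> bool:
    """Sanity-check the result before answering (a critique step in practice)."""
    return "answer" in ctx and ctx["answer"] >= 0

def run_agent(request: str) -> int:
    ctx: dict = {"request": request}
    for step in plan(request):     # 1. break the task into components
        ctx = TOOLS[step](ctx)     # 2. use tools, get data from systems
    if not reflect(ctx):           # 3. reflect on the result
        raise RuntimeError("result failed reflection; a real agent would re-plan")
    return ctx["answer"]           # 4. give the answer

print(run_agent("How many food delivery orders were late last week?"))  # prints 2
```

In a production agent, `plan` and `reflect` would be model calls and the tools would hit real APIs; the point of the sketch is only the control flow that distinguishes an agent from a single question-and-answer exchange.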
As those capabilities matured, employees began creating agents that automated personal and team-level workflows. In some cases, agents were shared directly. In others, employees shared the underlying recipes, allowing colleagues to recreate agents using their own data access.
This peer-driven diffusion proved far more effective than centrally defined rollout plans and also shifted how success was measured.
Traditional enterprise AI programs often focus on efficiency gains, such as reducing the time required to complete existing tasks, but that metric alone misses the broader impact of agentic systems, Beinat said.
“If you only measure efficiency, you just get a horse a little bit faster. You never get a car,” he said.
Instead, Prosus tracks whether agents remove bottlenecks that previously constrained work. By eliminating friction points, agents enable employees to operate in entirely new ways rather than simply accelerating old processes.
A further dimension, which Beinat described as agility, concerns whether employees can work outside their traditional domains. Lawyers, for example, are being trained to build simple applications to manage document workflows, while engineers are encouraged to draft policy or compliance materials before formal review.
That cross-functional flexibility, he said, is increasingly critical as business environments change faster than long-term planning cycles can accommodate.
Data limits
Despite rapid adoption, agentic AI also exposes structural weaknesses in enterprise data environments. Many internal systems were designed for human interpretation rather than autonomous reasoning.
“None of the data sets and none of the knowledge bases we’ve encountered so far are designed for agents,” Beinat said.
Seemingly straightforward questions can surface hidden ambiguity.
“If you ask a simple question like how many food delivery orders in Berlin were late last week, the system immediately runs into ambiguity about what ‘Berlin’ actually means,” he said.
Addressing those gaps requires surfacing implicit assumptions and metadata that have long lived only in employees’ heads. Beinat said the process is already underway, and that the absence of perfect data is no reason to postpone experimentation.
Incentives and risk
One factor accelerating adoption has been explicit incentives. At Prosus, AI proficiency is incorporated into performance goals for many employees, directly influencing compensation.
“Behavior changes when people can see the advantage and are rewarded for using it,” Beinat said.
At the same time, agentic systems introduce new risk categories, from data leakage to unexpected interactions between autonomous agents. Some risks only emerge through use.
“There are new risks we haven’t even imagined yet, and the best we can do is detect them early and react very fast,” he said. “You need intrinsic agility to respond to risks you don’t yet understand.”
Looking ahead, he said, organizations that succeed with agentic AI will be those that balance openness with discipline. Rather than locking down experimentation, they will focus on building adaptive guardrails and cultivating a workforce comfortable with learning by doing.
That mindset may ultimately matter more than any specific technical architecture as agentic AI reshapes how enterprises operate.