Enterprises Deploy Generative AI to Accelerate Work and Manage Risk
From multilingual customer service to automated IT resolution and on-prem AI grading, generative models are already solving high-impact enterprise problems

As generative AI moves from labs into live production, global companies report tangible gains in speed, productivity, and automation. Whether streamlining product development cycles, scaling multilingual customer support, or automating IT incident response, enterprise leaders are no longer asking if generative AI can help but how fast they can deploy it responsibly.
“Most organizations are very early in the kicking-the-tires phase,” said Christian Reilly, Field CTO for EMEA at Cloudflare. “And what's come before the technology is an understanding of the guardrails that need to be put in place to actually enable that.”
Coming from one of the world's largest internet infrastructure providers, Reilly's words reflect a growing enterprise mindset: adopt AI quickly, but not carelessly. Across sectors, the conversation is shifting from theoretical use cases to operational efficiency, customer value, and managed risk.
At Revolut, one of the earliest benefits was speed.
“Using the Gen AI, we can decrease the iteration cycle when we develop our products,” said Nikolay Donets, Head of ML Engineering at Revolut. “As of now, you don't need a data scientist right in the beginning. You just send a request to the API, the model behind in the cloud provides your responses, and then you have this portfolio concept.”
That speed has extended beyond development. “We provide 24/7 support for our customers around the world,” Donets added. “You can ask questions in any language you want and receive a relevant and reliable answer.”
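The prototyping loop Donets describes — skip the in-house data scientist, send a request to a hosted model, iterate on the response — can be sketched in a few lines. The endpoint shape and field names below are illustrative (loosely modeled on common chat-completion APIs), not Revolut's actual integration:

```python
import json

def build_support_request(prompt: str, language: str = "auto") -> dict:
    """Assemble a chat-style request payload for a hosted LLM endpoint.

    Model name and message schema are hypothetical, for illustration only.
    """
    return {
        "model": "support-assistant",  # placeholder model identifier
        "messages": [
            {"role": "system",
             "content": f"Answer in the customer's language ({language})."},
            {"role": "user", "content": prompt},
        ],
    }

# A multilingual support query, serialized as it would be POSTed to the API.
payload = build_support_request("¿Dónde está mi tarjeta?", language="es")
body = json.dumps(payload, ensure_ascii=False)
```

In this pattern the only product-team artifact is the prompt and payload; the model, serving, and scaling all live behind the provider's API, which is what shortens the iteration cycle.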
Alexandre Pereira, CEO and co-founder of 2501.ai, said generative AI is increasingly used to automate IT operations, particularly for hybrid cloud customers.
“We try to avoid that someone will wake up at 2 am on a Saturday to fix a server,” he said. “That still happens often, and we try to make that more seamless with AI.”
Revenue-Generating Products
During the AI Rush conference on May 16 in London, the “Unlocking the Future with Cloud and Generative AI” panel focused on enterprise use of generative AI. Alongside Reilly, Donets, and Pereira, the panel included Yu Xiong, Chief Scientist at Datasection Inc. and Associate Vice President at the University of Surrey.
Xiong showcased how AI research from top universities is already powering revenue-generating startups. One company he co-founded with Oxford University uses generative AI to perform investment due diligence.
“We use AI to collect the information online, and also use AI to provide the analytics and the recommendations,” Xiong said. “We generated about half a million pounds in revenue in the past year, and now our valuation is £20 million.”
Another project, built with the University of Surrey, automates academic assessment.
“We work with 40 universities globally,” Xiong said. “We provide the new, invented system, plus generative AI capability, and generate about £200,000 income.”
Meanwhile, Reilly explained how Cloudflare’s Workers AI platform is being positioned to support global developers.
“We recognized very early that training AI models is very GPU-intensive. We chose a slightly different path to go for inferencing as a service,” he said. “At all our Cloudflare locations worldwide, you will be able to use our inferencing as a service by the end of this year.”
The Infrastructure Shift
As deployment accelerates, the infrastructure behind generative AI is evolving fast. Pereira said the early mindset—maxing out on GPUs—has given way to smarter resource allocation.
“Now, there is a different usage that requires a different type of GPU,” he said. “If you are just doing auto-complete, for example, you don’t need a 6 billion parameter engine.”
This diversification is pushing enterprises to rethink hardware and cloud deployment.
“We are seeing a shift with different usage, different needs, and the workload in the cloud; the workload for customers will be different based on what they will do,” Pereira added.
Reilly noted that rising compute demands are reshaping the physical footprint of data centers.
“The next generation of data centers that are going to be heavily GPU-enabled are going to require a significant amount more power and cooling than anything we've ever seen before,” he said.
Reilly predicts small modular nuclear reactors may soon become a viable power source. “They’re incredibly safe and efficient.”
Measuring and Mitigating AI Risk
As generative AI becomes embedded in more workflows, organizations are shifting their approach to risk.
“Now we are more focused on the risks. We start with the risk,” Donets said. “What could go wrong?”
Revolut structures its AI risk framework using both timeline and source.
“The first group is about when it could happen—before or after deployment,” he said. “The second group is about human mistake… or should we call it AI mistake?”
Pereira explained how his company runs intensive model benchmarking to ensure performance and reliability.
“We run them thousands of times during two months with different models,” he said. “Each model has a difference in performance, pollution rate, and destruction rate.”
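Running a model thousands of times against the same cases and comparing the score distributions is the core of this kind of benchmarking. A toy sketch — the candidate "model" below is a stand-in function and its scores are synthetic, not real benchmark data:

```python
import random
import statistics

def benchmark(model_fn, cases, runs=1000, seed=0):
    """Score a model repeatedly over a test set and summarize the results.

    model_fn: callable mapping a test case to a score in [0, 1].
    Repeated seeded sampling makes run-to-run comparisons reproducible.
    """
    rng = random.Random(seed)
    scores = [model_fn(rng.choice(cases)) for _ in range(runs)]
    return {"mean": statistics.mean(scores),
            "stdev": statistics.pstdev(scores)}

# Toy stand-in: a model that does well on easy cases, worse on hard ones.
result = benchmark(lambda case: 0.9 if case == "easy" else 0.6,
                   ["easy", "hard"], runs=200)
```

Repeating this across every candidate model over weeks, as Pereira describes, turns anecdotal impressions of reliability into comparable per-model statistics.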
But even the best benchmarking can’t fully address one of AI's biggest unanswered questions: who is accountable when something goes wrong?
“Who's responsible for the outcome of an AI agent's action?” Reilly asked. “There's no reporting structure for the agent in your HR system. And then, worse still, what happens when the agent does something deliberately?”
Guardrails for an Agentic Future
Looking forward, Reilly warned that the mistakes of the early internet era, such as building key protocols without authentication, are now being repeated in the agentic AI era.
“There’s a big attempt now to retrofit basic authentication and authorizations,” he said. “That’s the danger of going as fast as we’re going.”
Reilly pointed to protocols like MCP, which connect AI agents to external tools and data, as examples of technology launched without fully baked security layers.
“You’re kind of asking for trouble,” he said. Without authorization, agent systems risk unpredictable outcomes and unclear responsibility.
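The authorization being retrofitted here amounts to verifying, before any agent action runs, that the request actually came from an agent entitled to make it. A minimal sketch using an HMAC-signed token — the shared key and agent/action names are illustrative, and real deployments would use a managed secret store and a full identity protocol rather than a hardcoded key:

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # illustrative only; never hardcode real secrets

def sign(agent_id: str, action: str) -> str:
    """Issue a token binding a specific agent to a specific action."""
    msg = f"{agent_id}:{action}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def authorize(agent_id: str, action: str, token: str) -> bool:
    """Reject any agent request whose token does not verify.

    compare_digest avoids timing side channels on the comparison.
    """
    return hmac.compare_digest(sign(agent_id, action), token)

granted = authorize("agent-7", "restart-server", sign("agent-7", "restart-server"))
forged = authorize("agent-7", "drop-database", "not-a-valid-token")
```

Because the token is bound to both the agent and the action, a verified request also leaves an answer to Reilly's accountability question: the log shows which agent was authorized to do what.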
As enterprise AI scales globally, the need for robust safety frameworks, private deployments, and transparent governance is growing as fast as the models themselves.
Reilly said, “You would be really shocked at the statistics of how big [the insider threat] is. Now multiply that by thousands of software agents operating without transparency.”