Forcepoint warns AI-driven data exposure now defines enterprise cyber risk
Enterprises shift from blocking attacks to limiting breach impact as fragmented data systems and weak governance increase exposure risk
Enterprises are increasingly misjudging cybersecurity priorities as artificial intelligence (AI) accelerates data flows, focusing on blocking intrusions at the perimeter rather than limiting how far attackers can move once inside corporate systems.
As corporate data spreads across cloud platforms, SaaS applications, endpoints, and legacy infrastructure, the critical risk is no longer initial access but how much sensitive information can ultimately be exposed, moved, and exploited after a breach.
“If an attacker gets in, how much damage can they actually do? That’s really the question,” Stuart Wilson, director of sales engineering at Forcepoint, told TechJournal.uk in an interview.
“A lot of organizations have created their own security problems over time,” he said. “They’ve deployed technology in silos, and they don’t have a consistent way of identifying or protecting data.”
The shift toward data exposure risk reflects a broader structural weakness in enterprise security. Many organizations lack consistent classification, access controls, and visibility across fragmented systems, increasing the likelihood of large-scale breaches.
In practice, sensitive information is often duplicated across environments without clear ownership, while access rights accumulate over time without regular review. As a result, a single compromised account can expose far more data than security teams anticipate, particularly in hybrid cloud environments where policies are not consistently enforced.
Wilson said this lack of control is compounded by the speed at which organizations have adopted cloud and SaaS platforms over the past decade, often without redesigning security frameworks to match the new operating model.
Security teams often focus on external threats such as phishing or malware, but internal gaps—such as excessive permissions or poorly managed data flows—can amplify the impact of any intrusion.
Data-first strategy
Wilson said these challenges are driving a transition toward data-centric security models, where protection follows information rather than infrastructure.
Forcepoint, a US-based cybersecurity company specializing in data protection, is positioning its platform around this approach as organizations scale AI adoption.
“We’ve been focused on data security for a long time—it’s part of our heritage,” he said. “Our approach is about data security everywhere—across endpoints, networks, cloud and SaaS.”
The company integrates capabilities across endpoints, networks, cloud environments, and SaaS platforms, combining data security posture management (DSPM) and data detection and response (DDR).
DSPM provides continuous discovery and classification of sensitive data, helping organizations understand where critical information resides and how it is exposed. DDR adds a response layer, enabling security teams to detect suspicious activity and take action before data is exfiltrated.
Together, these capabilities are designed to shift security operations from reactive incident response to proactive risk reduction, particularly in environments where data is constantly moving across systems.
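To illustrate the kind of signal a DDR layer acts on, the sketch below flags accounts whose daily volume of sensitive-file reads spikes far above their own historical baseline. This is a simplified, hypothetical example of baseline-deviation detection, not Forcepoint's implementation; the function name, threshold, and data shape are assumptions for illustration.

```python
from statistics import mean, stdev

def anomalous_accounts(access_log, threshold=3.0):
    """Illustrative DDR-style check (not a vendor implementation).

    access_log maps account -> list of daily sensitive-read counts,
    most recent day last. Flags accounts whose latest count exceeds
    their baseline mean by more than `threshold` standard deviations.
    """
    flagged = []
    for account, counts in access_log.items():
        baseline, latest = counts[:-1], counts[-1]
        if len(baseline) < 2:
            continue  # not enough history to form a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        # Floor sigma so a perfectly flat history doesn't flag tiny changes
        if latest > mu + threshold * max(sigma, 1.0):
            flagged.append(account)
    return flagged

log = {
    "alice": [12, 15, 11, 14, 13, 410],  # sudden bulk read -> flagged
    "bob":   [30, 28, 33, 31, 29, 32],   # steady usage -> not flagged
}
print(anomalous_accounts(log))  # -> ['alice']
```

A real system would correlate many more signals (file sensitivity, time of day, destination), but the principle is the same: the response trigger is a deviation in data behavior, not a known attack signature.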
“It’s not just about visibility—it’s about control and response,” Wilson said.
Platform expansion
Forcepoint said recent platform updates focus on automating policy creation, improving incident response, and strengthening protection across AI-driven workflows.
These include:
• AI-assisted policy generation using natural-language inputs
• Automated detection of anomalous data access patterns
• Faster incident response through integrated analytics
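The first capability, policy generation from natural-language inputs, can be pictured with the toy sketch below. It maps a plain-English request onto a structured policy using simple keyword matching; a production system would use a language model rather than keywords, and the function name, policy fields, and vocabulary here are all assumptions. The point is the input/output shape: free text in, enforceable policy object out.

```python
def policy_from_text(request: str) -> dict:
    """Toy illustration (not the vendor's AI) of natural-language
    policy generation: turn a plain-English request into a
    structured data-protection policy via keyword matching."""
    text = request.lower()
    policy = {"action": "alert", "data_types": [], "channels": []}
    if "block" in text:
        policy["action"] = "block"
    # Hypothetical vocabularies of sensitive data types and egress channels
    for data_type in ("credit card", "ssn", "source code"):
        if data_type in text:
            policy["data_types"].append(data_type)
    for channel in ("email", "usb", "cloud upload"):
        if channel in text:
            policy["channels"].append(channel)
    return policy

print(policy_from_text("Block credit card numbers sent over email"))
# -> {'action': 'block', 'data_types': ['credit card'], 'channels': ['email']}
```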
The company said these capabilities aim to reduce response times and improve consistency as organizations manage growing volumes of sensitive data.
Forcepoint’s recent announcements also highlight a broader push to embed AI more deeply into security workflows. The platform’s AI layer is designed to assist analysts by prioritizing alerts, summarizing incidents, and recommending remediation actions, thereby reducing the operational burden on security teams as data and alert volumes increase.
Regulatory pressure
The regulatory landscape is also shaping enterprise security strategies, particularly for companies operating across multiple jurisdictions.
At the UK Cybersecurity Expo, part of Tech Show London, held on March 5 in London, Wilson discussed how global compliance requirements are converging around stricter data governance standards.
“If you operate across multiple regions, you’re going to have to align with the strictest regulations anyway,” he said.
The trend mirrors earlier shifts driven by the General Data Protection Regulation (GDPR), which forced companies worldwide to adopt higher data protection standards.
Forcepoint's Chief Data Strategy Officer, Ronan Murphy, said in a byline article that the European Union’s AI Act is expected to extend this approach by imposing risk-based requirements on how AI systems are developed and deployed.
The EU AI Act introduces obligations around data quality, transparency, and accountability, particularly for high-risk applications, reinforcing the need for stronger data governance.
Murphy noted that organizations deploying AI systems will need to demonstrate how data is sourced, processed, and protected throughout the application lifecycle. This includes maintaining audit trails, ensuring data integrity, and implementing controls to prevent misuse or unintended exposure.
Organizations that fail to meet these standards may face operational restrictions or financial penalties, making early compliance a strategic priority.
Wilson said companies should not wait for domestic regulation to take effect before strengthening their data controls.
“We saw that with GDPR, and we’re likely to see the same pattern with AI,” he said.
AI and data trust
Beyond compliance, Wilson said the effectiveness of AI depends on the reliability of the underlying data.
“AI is only as good as the data you put into it,” he said. “If you don’t trust your data, you can’t trust your AI.”
Many organizations remain at an early stage of what he described as a data maturity journey, lacking clear oversight of data ownership, classification, and usage.
“This is really a data maturity challenge,” he said.
Without strong governance, AI systems risk amplifying existing weaknesses, including data leakage and compliance failures.
In particular, generative AI tools introduce new pathways for sensitive data to be accessed, processed, and potentially exposed, especially when employees interact with external models or integrate AI into business workflows without clear policies.
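One common control for this pathway is a pre-flight check between employees and an external model. The sketch below is a hypothetical, minimal version: it redacts two obvious sensitive patterns (credit-card-like numbers and email addresses) before a prompt leaves the corporate boundary. The patterns and function name are illustrative assumptions, not any vendor's product.

```python
import re

# Hypothetical pre-flight redaction applied to prompts bound for an
# external generative AI model. Real deployments use far richer
# classifiers; this shows only the basic idea.
PATTERNS = {
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_prompt(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact_prompt(
    "Summarize the dispute for jane.doe@example.com, card 4111 1111 1111 1111"
))
```

Whether data is scrubbed, blocked, or merely logged at this boundary is itself a policy decision, which is why Wilson frames AI usage as a governance problem rather than purely a tooling one.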
Wilson said organizations need to treat AI not just as a productivity tool but as a new layer of data risk that must be governed with the same rigor as traditional systems.
Forcepoint is incorporating AI into its own platform to address these challenges, particularly in automating security operations.
“We’re using AI to help automate policy creation and response,” Wilson said.
The company said its strategy is to enable real-time protection as enterprise environments become more distributed and complex.
As organizations expand digital infrastructure and deploy AI at scale, Wilson said the focus must shift from preventing breaches to controlling their impact.
That requires a clear understanding of data, consistent governance, and the ability to respond quickly when incidents occur.