Elastic CISO: Agentic AI risks demand stronger governance
Security leaders face rising risks from identity sprawl, phishing, hallucinations, prompt injection and alert overload as enterprises deploy agents
Enterprises deploying autonomous AI agents are entering a new security era shaped by machine identities, evolving phishing, hallucinated outputs, prompt‑injection attacks, and rapidly growing alert volumes.
Security leaders say these risks are no longer theoretical. As organisations embed agents into workflows, the technology is beginning to behave in ways traditional security models were never designed to handle.
“Identities have been and continue to be the focus for threat actors, and the access permissions an agent has are the key guardrail that defines what it can and can’t do within an environment,” Mandy Andress, chief information security officer (CISO) at Elastic, told TechJournal.uk in an interview during her recent trip to London. “With agents being non‑deterministic, you don’t always know exactly what path they will decide to take.”
“In tests at leading organisations, agents tried to hack an agent to force it to do what they wanted,” she said. “They can get very creative in trying to achieve the objectives they’ve been assigned, so limiting their ability to take action that would have a significant impact on the organization is key.”
Security teams now face a governance challenge that combines autonomy, scale, and unpredictability. Unlike traditional software, agents can independently choose pathways to achieve objectives. This expands the internal attack surface and forces organisations to rethink long‑standing security assumptions.
Elastic is a search, observability, and cybersecurity software company founded in the Netherlands in 2012 and listed on the New York Stock Exchange. Its Search AI Platform helps organisations turn large volumes of data into insights, actions, and outcomes. It is used by thousands of companies, including more than half of the Fortune 500.
Identity becomes control plane
Identity and access control are becoming the primary guardrails for agent behaviour.
Andress said attackers increasingly rely on leaked credentials and exposed API keys rather than traditional break‑ins, often harvesting them from code repositories and collaboration platforms.
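Defenders can mirror that harvesting step by scanning their own repositories for exposed credentials before attackers find them. A minimal sketch of such a scanner, using illustrative patterns that are assumptions for this example rather than any vendor's actual rule set:

```python
import re

# Illustrative credential patterns (assumptions, not exhaustive):
# an AWS-style access key ID, a generic quoted API key, and a PEM header.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(name, text):
    """Return (file, pattern-label) hits for credential-like strings."""
    hits = []
    for label, pattern in SECRET_PATTERNS.items():
        if pattern.search(text):
            hits.append((name, label))
    return hits

# Example: a config file accidentally committed with a key inside.
sample = 'api_key = "sk_live_abcdefghij1234567890"'
print(scan_text("config.py", sample))
```

Real pre-commit hooks and repository scanners work the same way at larger scale, with far richer detectors and entropy checks.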
Agent adoption magnifies the problem.
“We haven’t solved identity yet, and we’re seeing an exponential explosion of agent identities. We need to rethink how we approach identity, use analytics to understand what has access to what and how it’s used, and apply that insight to finally implement true least‑privilege access,” she said.
Security teams are revisiting least‑privilege strategies. Analytics can help teams understand which identities access resources, how they use that access, and where permissions can be reduced.
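In code, that analysis can start as a simple diff between what an identity is granted and what audit logs show it actually using. The data model below is a hypothetical sketch; a real system would pull grants from an IAM API and usage from audit events:

```python
from collections import defaultdict

# Hypothetical inputs: permissions granted to each identity (human or
# agent), and the permissions the audit log shows it actually exercised.
granted = {
    "agent-billing": {"invoices:read", "invoices:write", "customers:read"},
    "agent-support": {"tickets:read", "tickets:write",
                      "customers:read", "invoices:read"},
}

used_events = [
    ("agent-billing", "invoices:read"),
    ("agent-billing", "invoices:write"),
    ("agent-support", "tickets:read"),
    ("agent-support", "tickets:write"),
]

def excess_permissions(granted, used_events):
    """Permissions granted but never observed in use --
    candidates for removal under least privilege."""
    used = defaultdict(set)
    for identity, perm in used_events:
        used[identity].add(perm)
    return {identity: perms - used[identity]
            for identity, perms in granted.items()}

print(excess_permissions(granted, used_events))
```

Here the billing agent's unused `customers:read` grant, and the support agent's unused customer and invoice access, would surface as candidates for revocation.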
Penetration testing is also evolving to include agent workflows.
Andress said penetration testers and red teams now use agents themselves to probe applications and even other agents. Beyond identity and access, generative AI introduces technical vulnerabilities of its own.
“Hallucinations are a significant risk in that GenAI will be very confident with the wrong answer sometimes,” she said.
“Prompt injection is being able to seed incorrect information into a model,” she said. “With large language models (LLMs), our response is always to have a human in the loop check the output.”
Elastic's approach is to ground AI responses in organisational data and to mask sensitive information before prompts are sent to external models, reducing both data exposure and hallucinations.
Elastic’s platform spans three core product areas:
• Search: Elasticsearch enables organisations to store, search, and analyse structured, unstructured, and vector data for AI and analytics use cases.
• Observability: Elastic Observability provides log analytics, application performance monitoring, and infrastructure monitoring across cloud and on‑premises systems.
• Security: Elastic Security unifies SIEM (security information and event management), XDR (extended detection and response), and cloud security to detect, investigate, and respond to threats using AI‑driven analytics.
Phishing evolves rapidly
AI is already reshaping social engineering. Andress said threat actors were early adopters, using it to scrape public data and generate highly targeted phishing messages in any language.
“That has made phishing messages more challenging to identify. None of the traditional phishing message identifiers are there. The grammar within the languages is accurate. It’s very targeted to specific things,” she said.
“From a security perspective, the focus is to make the potential damage extremely small and encourage employees to verify requests through a second channel before taking action,” she said.
Another growing concern is shadow AI, where employees independently deploy tools or agents without formal approval.
“It’s trying to get visibility into what’s happening across your environment and data flows,” Andress said.
She said many organisations already restrict software installation on employee devices, but the spread of AI tools embedded into everyday applications has created a new wave of shadow IT.
Endpoint security plays an important role in this environment. If employees download public or open‑source agents that turn out to be malicious, endpoint protection can detect suspicious behaviour and stop the activity before it spreads.
Elastic also focuses on how internal agents interact with APIs and enterprise systems. When agents connect to external models, sensitive information can be masked automatically before being sent outside the organisation, and responses can be enriched with internal context.
Access controls are then applied so both human users and AI agents can only retrieve data appropriate to their roles, helping organisations experiment with AI while maintaining oversight and reducing risk.
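The masking step described above can be sketched as a pre-processing pass over the prompt before it leaves the organisation. The patterns and placeholder tokens below are illustrative assumptions, not Elastic's implementation:

```python
import re

# Illustrative sensitive-data patterns (an assumption for this sketch;
# a production masker would use a much richer detector).
MASK_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD>"),
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "<API_KEY>"),
]

def mask_prompt(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the
    prompt is transmitted to an external model."""
    for pattern, placeholder in MASK_RULES:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(mask_prompt("Refund jane.doe@example.com on card 4111 1111 1111 1111"))
# -> Refund <EMAIL> on card <CARD>
```

The same pipeline stage is a natural place to enforce role-based retrieval, so the context added to a prompt never exceeds what the requesting user or agent is entitled to see.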
Security teams are also overwhelmed by alert volumes, a challenge that agent activity may accelerate. Elastic’s Attack Discovery uses LLMs to analyse alerts in an organisation’s environment and identify threats, helping prioritise the most relevant investigations.
“The complexity of environments has been creating an increasing number of alerts and noise for security teams. Agents will just continue to exponentially increase that as there’s just more activity running faster,” she said.
“Out of these 5,000 alerts, our system can identify the 10 that are indicative of this type of attack,” she said.
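As a toy illustration of that filtering, alerts can be grouped by entity and surfaced only when they chain across multiple attack stages. This is a simplified stand-in for what Attack Discovery does with LLMs, not its actual logic, and the alert data is invented:

```python
from collections import defaultdict

# Toy alert stream: (host, stage) pairs, where stages loosely follow
# an attack-chain ordering (an assumption for illustration).
alerts = [
    ("host-a", "initial_access"),
    ("host-b", "port_scan"),
    ("host-a", "privilege_escalation"),
    ("host-c", "port_scan"),
    ("host-a", "data_exfiltration"),
    ("host-b", "port_scan"),
]

def prioritise(alerts, min_stages=3):
    """Surface only hosts whose alerts span several distinct attack
    stages -- a crude proxy for 'indicative of this type of attack'."""
    stages_by_host = defaultdict(set)
    for host, stage in alerts:
        stages_by_host[host].add(stage)
    return [host for host, stages in stages_by_host.items()
            if len(stages) >= min_stages]

print(prioritise(alerts))
# -> ['host-a']
```

Only `host-a`, with alerts spanning initial access through exfiltration, surfaces for investigation; the repeated but isolated port scans stay in the noise.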
Regulation and AI compliance
European regulation is adding another layer of complexity for organisations deploying AI, particularly as the EU AI Act takes shape.
On data residency, she pointed to Elastic’s regional hosting: “Customers choose which country they want that instance to reside in. The data stays there in that country.”
Andress said organisations must understand how the EU AI Act applies to their specific use cases, noting that many details remain unclear despite implementation timelines being delayed. She compared the process to the early days of the General Data Protection Regulation (GDPR), saying regulation and technology will evolve together as businesses learn how to use AI responsibly while continuing to innovate.
Elastic’s competitive positioning also reflects a broader shift in how security platforms are evolving for the AI era.
Andress said the ability to ingest, process, and analyse massive volumes of data quickly is becoming a defining capability for security teams. Elastic’s roots in search technology, she said, give the company a natural advantage as security workflows become more data‑intensive.
She explained that modern security increasingly depends on machine learning and AI‑driven analytics, because humans cannot anticipate every tactic used by autonomous threat actors or AI agents.
Rather than relying solely on predefined detection rules, organisations are moving toward behavioural analytics that can identify unusual activity across vast datasets. Elastic’s platform was originally built for large‑scale data analysis, she said, and security has become one of its most important applications.
Andress noted that many organisations previously built their own internal security and observability tools on top of Elasticsearch before Elastic formally productised those capabilities. The company now packages those capabilities into integrated solutions designed for teams that lack the time or expertise to build their own platforms.
Elastic positions its platform as part of the core infrastructure supporting enterprise AI. In early February, the company introduced Elastic Inference Service via Cloud Connect, enabling self‑managed deployments to access cloud‑hosted GPU inference without moving data or managing hardware.
Customers can offload embedding generation and search inference to Elastic’s managed GPU infrastructure while keeping core data and architecture in place.
Security for agents
Looking ahead, Andress said enterprise security strategies must evolve alongside the rise of autonomous systems.
She expects organisations to continue expanding the use of AI agents to automate workflows, analyse data, and accelerate decision‑making across departments. This growth will bring productivity gains, but it will also require stronger governance and visibility.
Security leaders will need to treat agents as a new class of digital actor that must be monitored, controlled, and continuously evaluated.
The next phase of enterprise security, she suggested, will focus on building systems that assume constant change, increasing automation, and rapidly evolving threats.
In that environment, the goal is not to slow adoption, but to ensure that guardrails, identity controls, and analytics evolve quickly enough to keep pace with the growing autonomy of AI systems.