Endor Labs embeds AI security in code generation amid rising risks
As machine-generated code expands across enterprise systems, security leaders are shifting controls to prevent vulnerabilities at the point of creation rather than after deployment
A new wave of artificial intelligence (AI)-driven software development is forcing enterprises to rethink how and when security is applied. Code generation is shifting from human engineers to autonomous agents operating at scale.
AI coding assistants are generating increasing volumes of production code. Traditional application security models, designed for slower, human-led workflows, are struggling to keep pace. This creates a growing gap between development speed and the ability to detect and mitigate vulnerabilities.
“We shift the security controls, the security tests, to the earliest possible point in time, which is when the agent generates the code. Without such additional help, the agents do not always produce functional code, and they do not produce secure code,” said Henrik Plate, head of security research at Endor Labs.
“Rather than checking security after development, we perform those checks when the agent generates the code. That is where we want to intervene,” he said. “If you wait until later, the volume of code is already too large to review manually.”
This shift toward earlier intervention reflects a broader structural change in software engineering. AI systems are no longer simply assisting developers; they are generating large portions of application logic. As a result, security must operate at machine speed and integrate directly into the code generation process.
The risks associated with AI-generated code stem in part from training data. Large language models are trained on a mix of secure and insecure examples drawn from open-source repositories and developer forums.
“There is good source code, and it’s great that the models have been trained on the good and secure code, but there is also a lot of insecure code,” Plate said. “When it generates code, you don’t really know whether it will be inspired by the secure code or the insecure code.”
This uncertainty introduces systemic risk. Organizations increasingly rely on AI-generated outputs without fully understanding their limitations. In many cases, vulnerabilities are embedded at the point of creation and propagate through software systems before being detected.
From coding to architecture
Endor Labs was founded in 2021 and is headquartered in Palo Alto. It provides an AI-native application security platform that secures both open-source dependencies and AI-generated code. The company combines program analysis, vulnerability intelligence, and automation to identify risks across complex software supply chains.
Its platform spans multiple layers of application security, including software composition analysis to inspect open-source dependencies, static application security testing for first-party code, container scanning, and secret detection. These capabilities are embedded into CI/CD workflows so security checks run continuously as code is written and updated.
Plate spoke to TechJournal.uk in an interview on the sidelines of DevOpsLive, part of Tech Show London, on March 5. The discussion focused on how AI is reshaping software development practices and security strategies.
He said the company is also extending these capabilities directly into AI agents through integration mechanisms such as model context protocol (MCP) servers, allowing agents to access security intelligence while generating code rather than after the fact.
The rise of AI agents is fundamentally changing how developers build software, shifting the focus from writing code to defining system architecture.
“Rather than writing code line by line, we will develop by giving instructions, by expressing an architecture, explaining the design of the software that we want to be implemented by the agent,” Plate said. “You leave the implementation of that whole system to the agent.”
“You take a step back. You have a much bigger picture from the start, and try to describe a whole system,” he said.
This transition is redefining engineering roles. Developers are increasingly expected to operate as system architects, while AI agents handle execution. At the same time, the volume of generated code is expanding rapidly, making manual review impractical.
In this environment, automated security tools are becoming essential not only for detecting vulnerabilities but also for guiding AI systems toward more secure outputs during development.
He said effective detection requires a deep analysis of how code behaves, including tracking data flow through applications and across dependencies. By linking vulnerability databases, such as the National Vulnerability Database (NVD), with code-level insights, the platform can identify the exact functions and execution paths where risks occur. This reduces false positives and improves prioritization.
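Endor Labs’ analysis engine is proprietary, but the core idea of reachability analysis can be illustrated with a toy sketch. All function names below are hypothetical, and real tools derive the call graph through static analysis rather than a hand-written table:

```python
from collections import deque

# Toy call graph: application functions mapped to the functions they call.
# All names are hypothetical; real tools build this via static analysis.
CALL_GRAPH = {
    "app.main": ["app.parse_input", "libfoo.render"],
    "app.parse_input": ["libbar.unsafe_deserialize"],  # vulnerable path
    "libfoo.render": ["libfoo.escape"],
}

def is_reachable(entry: str, target: str, graph: dict) -> bool:
    """Breadth-first search: can `target` be called starting from `entry`?"""
    seen, queue = set(), deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(graph.get(fn, []))
    return False

# A CVE in a dependency function matters far more if the app can reach it.
print(is_reachable("app.main", "libbar.unsafe_deserialize", CALL_GRAPH))  # True
print(is_reachable("app.main", "libbar.never_called", CALL_GRAPH))        # False
```

The practical effect is prioritization: a vulnerability in a dependency function the application never reaches is far less urgent than one sitting on a live execution path.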
He added that scale is becoming a defining challenge, with large enterprise customers operating thousands of repositories and rapidly expanding codebases driven by AI-assisted development.
New attack vectors
One of the most immediate threats emerging from AI-driven development is the exploitation of hallucinated components. These are non-existent packages or dependencies suggested by models.
“The model would propose a package that does not actually exist,” Plate said. “Attackers realized this and started to actually create those packages that didn’t exist before in order to deliver malware.”
“You saw patterns in the names that were proposed in the hallucinations,” he said. “The attackers jumped on this occasion and started to create those packages.”
These incidents illustrate how vulnerabilities can arise not only from flawed code but from the interaction between AI systems and open ecosystems. As developers increasingly accept AI-generated suggestions, the risk of introducing malicious dependencies grows significantly.
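One defensive pattern this suggests is verifying that an AI-suggested dependency actually exists on the registry before installing it. The sketch below is a minimal illustration, not Endor Labs’ product. It uses PyPI’s public JSON API, which returns HTTP 200 for registered projects and 404 otherwise, and takes an injectable `fetch` so the logic can be tested offline:

```python
import urllib.request
import urllib.error

def pypi_exists(package: str, fetch=None) -> bool:
    """Return True if `package` is registered on PyPI.

    PyPI's JSON API answers 200 for known projects and 404 otherwise.
    `fetch` is injectable so the check can be exercised without network access.
    """
    url = f"https://pypi.org/pypi/{package}/json"
    if fetch is None:
        def fetch(u):
            try:
                with urllib.request.urlopen(u, timeout=10) as resp:
                    return resp.status
            except urllib.error.HTTPError as e:
                return e.code
    return fetch(url) == 200

def vet_suggestions(packages, fetch=None):
    """Keep only AI-suggested dependencies that exist on the registry."""
    return [p for p in packages if pypi_exists(p, fetch)]
```

An existence check only closes the window before attackers register the hallucinated name. Once a malicious package has been published, as Plate describes, defenses also need signals such as package age, maintainer reputation, and behavioral scanning.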
The rapid emergence of new AI-related ecosystems is expanding the attack surface and creating additional opportunities for malicious actors.
“As soon as there is a new ecosystem, attackers jump more and more quickly on that opportunity in order to also deploy malicious components,” Plate said.
This pattern has been observed across emerging technologies, including new registries and integration frameworks that allow users to extend AI agents with additional capabilities. In many cases, attackers move faster than security teams can respond.
To counter this, Endor Labs has developed agentic workflows that scan newly published software packages at high frequency. He said public registries, such as NPM (Node Package Manager) and PyPI (Python Package Index), see around 40,000 new packages daily. Automated systems are required to analyze them within minutes to detect malicious behavior before widespread adoption.
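A back-of-the-envelope calculation, using only the 40,000-per-day figure cited above, shows why manual triage cannot keep pace:

```python
NEW_PACKAGES_PER_DAY = 40_000   # figure cited in the article, across registries
SECONDS_PER_DAY = 24 * 60 * 60

rate_per_minute = NEW_PACKAGES_PER_DAY / (SECONDS_PER_DAY / 60)
seconds_between = SECONDS_PER_DAY / NEW_PACKAGES_PER_DAY

print(f"~{rate_per_minute:.0f} new packages per minute")  # ~28
print(f"one every ~{seconds_between:.1f} seconds")        # ~2.2
```

At roughly one new package every two seconds, only automated pipelines can plausibly analyze each one within minutes of publication.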
The changing user base also contributes to rising risk. While traditional open-source attacks primarily targeted developers, newer ecosystems are being adopted by a broader audience with less security expertise.
Talent reshaped
The rise of AI-driven development is reshaping the software engineering labor market, particularly for entry-level roles.
Plate said the increasing reliance on AI agents is raising the bar for developers, who must now focus on system-level design rather than implementation details.
“It requires a certain seniority in order to describe a system in all its complexity, in relatively simple words, so it can be implemented by an agent,” he said.
As companies adjust to this new model, many are slowing hiring and reassessing the skills required for engineering roles. Junior developers face greater challenges entering the workforce as demand shifts toward more experienced talent.
At the same time, AI is lowering barriers to software creation, enabling a wider range of users to build applications without deep technical expertise.
“It’s a huge enabler as well,” Plate said. “Many more people can create software, and so that is a good thing. We just need to make sure that this is all secure and not subject to vulnerabilities.”
Looking ahead, the balance between speed and security will define the next phase of software development. As AI continues to accelerate code production, enterprises will need to embed security directly into development workflows. This ensures innovation does not come at the cost of resilience.



