Agentic AI systems — tools that can browse the web, write code, install software, and take actions on your behalf — are moving fast. Their security, unfortunately, is not keeping pace. A major incident in late March 2026 made this painfully concrete: AI coding agents autonomously installed a trojanized version of one of the most popular JavaScript libraries in the world, and most of them had no idea.
The Short Answer: No, Most AI Agents Don’t Verify Security
When a developer asks an AI coding assistant to “add a dependency” or “set up a project,” the agent typically does exactly what a human would — it calls npm install or pip install and trusts the package registry to return something safe. There is no built-in step where the agent checks:
- Is this the expected version of the package?
- Has this version been tampered with since last release?
- Does it contain unexpected new dependencies?
- Was it published through the project’s normal CI/CD process?
In short, AI agents automatically extend the same trust that a careful human developer might pause to question.
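The missing checks above can be sketched as a pre-install gate. This is a minimal illustration, not any real agent's API: the function name and manifest shapes are assumptions, and the two arguments are plain objects in package.json form (the previous known-good release and the candidate the registry just returned).

```javascript
// Minimal sketch of a pre-install gate an agent could run before trusting
// whatever the registry returns. All names here are illustrative; the
// manifests are plain objects in package.json shape.
function preinstallChecks(expected, candidate) {
  const warnings = [];

  // Is this the version we asked for?
  if (expected.version && candidate.version !== expected.version) {
    warnings.push(
      `version mismatch: wanted ${expected.version}, got ${candidate.version}`
    );
  }

  // Does it declare dependencies the previous release did not have?
  const oldDeps = new Set(Object.keys(expected.dependencies ?? {}));
  for (const dep of Object.keys(candidate.dependencies ?? {})) {
    if (!oldDeps.has(dep)) warnings.push(`new dependency introduced: ${dep}`);
  }

  // Does it add install-time hooks? These run arbitrary code on install.
  for (const hook of ['preinstall', 'install', 'postinstall']) {
    if (candidate.scripts?.[hook] && !expected.scripts?.[hook]) {
      warnings.push(`new ${hook} script: "${candidate.scripts[hook]}"`);
    }
  }

  return warnings; // empty array means nothing suspicious was found
}
```

None of this requires executing the package; it only compares metadata, which is exactly the kind of low-friction check agents skip today.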
The Axios Attack: A Real-World Case Study
On March 31, 2026, attackers compromised the npm account of the lead maintainer of Axios — the JavaScript HTTP client with over 100 million weekly downloads. Within a 39-minute window, they published two backdoored versions (1.14.1 and 0.30.4) that silently installed a Remote Access Trojan (RAT) on any system that ran npm install.
The attack was attributed to North Korean state actors (UNC1069 / Sapphire Sleet), and the initial compromise was achieved through social engineering: the maintainer was lured into a fake Microsoft Teams meeting and tricked into installing malware that stole his npm publishing credentials.
Attack Timeline
| Time (UTC) | Event |
|---|---|
| March 30 · 05:57 | Attacker publishes clean decoy package plain-crypto-js@4.2.0 to establish legitimacy |
| March 30 · 23:59 | Malicious payload added as plain-crypto-js@4.2.1 |
| March 31 | Maintainer’s npm credentials stolen via fake Teams call |
| March 31 (39-min window) | Poisoned axios@1.14.1 and axios@0.30.4 published to npm |
| Within 6 minutes | Socket’s automated scanner flagged the malicious dependency |
| 179 minutes later | Packages removed from npm — but damage already done |
What Made This Especially Dangerous for AI Agents
When a human developer runs npm install in a terminal, an alert security analyst might notice unusual process activity or outbound network requests. When an AI coding agent does it — as part of Claude Code, Cursor, or a CI/CD pipeline — the developer sees a summary in the chat interface, not a terminal. The malicious postinstall hook fired, delivered the RAT, and deleted itself before the agent’s next turn began. By the time anyone ran npm audit, it returned clean.
“This was not opportunistic. The malicious dependency was staged 18 hours in advance. Three separate payloads were pre-built for three operating systems. Both release branches were hit within 39 minutes. Every trace was designed to self-destruct.”
— Ashish Kurmi, StepSecurity
Why Agentic AI Makes Supply Chain Attacks Worse
Traditional software supply chain attacks relied on human developers making installation decisions. Agentic AI removes the human from that loop — and with it, the natural friction that sometimes catches attacks. Here’s how the threat model changes:
| Factor | Human Developer | AI Agent |
|---|---|---|
| Package installation | Manual, visible in terminal | Automated, hidden in tool call |
| Anomaly detection | May notice unusual output or prompts | Sees only success/failure summary |
| Speed of action | Minutes to hours | Seconds, at machine scale |
| Trust model | Skepticism possible | Extends trust by default |
| Attack window exposure | Limited to human work hours | 24/7 CI/CD pipelines |
| Post-compromise detection | Terminal logs, process monitors | Often no equivalent visibility |
The Axios attack window was 179 minutes. In that window, any AI agent performing autonomous dependency management — in any CI/CD pipeline, any developer workspace with auto-updates enabled — was exposed.
The Broader Threat Landscape
The Axios incident wasn’t isolated. In the weeks before it, a threat actor tracked as TeamPCP compromised four widely used open-source projects in rapid succession:
- March 19 — Trivy vulnerability scanner
- March 23 — KICS infrastructure-as-code scanner
- March 24 — LiteLLM AI proxy library (PyPI)
- March 27 — Telnyx communications library (PyPI)
In each case, cloud credentials, SSH keys, Kubernetes configuration files, and CI/CD secrets were harvested. Security researchers believe TeamPCP may be operating as an Initial Access Broker — collecting stolen credentials and selling them to other threat actors.
The OWASP Agentic AI Top 10, published in December 2025, formally named Agentic Supply Chain Vulnerabilities as one of the top risks facing autonomous AI systems. The industry now has a name for the problem. It doesn’t yet have a universal solution.
What the Industry Is Doing About It
The security community has responded quickly — but the tools are still maturing and far from universally adopted.
AI-Powered Package Monitoring
An Elastic security engineer caught the Axios attack using a proof-of-concept tool he built that monitors changes pushed to package registries, diffs what changed between versions, and uses an LLM to determine if the changes look malicious — all without executing any code. The tool flagged the attack; human investigators confirmed it. This pattern — AI reviewing AI package installations — is likely to become standard practice.
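The diff-before-install pattern is straightforward to sketch. Assuming a tool can obtain file listings with content hashes for both versions (for example, by unpacking the two registry tarballs), a minimal comparison might look like this; the function name and Map-based inputs are illustrative, not the real tool's API:

```javascript
// Sketch of diff-before-install: compare the file manifests of two published
// versions and surface what changed. Each argument is a Map of file path to
// content hash (e.g. built from the two registry tarballs).
function diffVersions(oldFiles, newFiles) {
  const report = { added: [], removed: [], modified: [] };
  for (const [path, hash] of newFiles) {
    if (!oldFiles.has(path)) report.added.push(path);
    else if (oldFiles.get(path) !== hash) report.modified.push(path);
  }
  for (const path of oldFiles.keys()) {
    if (!newFiles.has(path)) report.removed.push(path);
  }
  return report;
}
```

The added and modified entries are exactly what a human or LLM reviewer would then inspect, without ever executing the package.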
Governance Frameworks
Microsoft released the Agent Governance Toolkit in April 2026, an open-source project that addresses all 10 OWASP Agentic AI risks, including supply chain verification through plugin signing and manifest checks. Cisco introduced DefenseClaw, which integrates MCP scanners, AI Bills of Materials, and sandboxed skill execution. These tools treat AI agents as first-class identities that must be authenticated, authorized, and audited — just like human users.
Practical Mitigations Available Now
| Mitigation | What It Does | Protects Against |
|---|---|---|
| Lockfile pinning | Fixes exact dependency versions; prevents floating ranges like ^1.14.0 | Auto-resolution of poisoned versions |
| Disable postinstall scripts | npm install --ignore-scripts | Malicious install hooks |
| SBOM scanning | Tracks all components and versions in use | Unknown dependency introduction |
| Cryptographic provenance (SLSA/OIDC) | Verifies packages were built by official CI | Manually published backdoored versions |
| Behavioral diff analysis | Compares new package versions for code changes | Subtle payload injection |
| Network egress monitoring in CI/CD | Flags unexpected outbound connections during builds | C2 beacon traffic from install hooks |
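The lockfile-pinning row can be enforced mechanically. A minimal sketch that flags any non-exact range in a package.json object, suitable as a CI gate; the function name and the exact-version regex are assumptions, not a standard tool:

```javascript
// Sketch: flag anything that is not an exact semver version in a
// package.json dependency map, so CI can fail before a poisoned patch
// release can be auto-resolved. Floating ranges (^, ~, x, >=), tags, and
// git URLs are all treated as non-exact and flagged.
const EXACT = /^\d+\.\d+\.\d+(-[\w.]+)?$/; // exact semver, optional prerelease

function floatingRanges(pkg) {
  const out = [];
  for (const field of ['dependencies', 'devDependencies']) {
    for (const [name, range] of Object.entries(pkg[field] ?? {})) {
      if (!EXACT.test(range)) out.push(`${field}.${name}: "${range}"`);
    }
  }
  return out;
}
```

An exit-nonzero wrapper around this check in CI makes the first mitigation in the table a hard guarantee rather than a convention.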
The Core Problem: Speed vs. Verification
The fundamental tension is this: AI agents are valuable precisely because they move fast and reduce friction. Security verification adds friction. Without deliberate architectural choices — lockfiles, provenance checks, sandboxing, behavioral monitoring — that friction gets removed along with everything else.
The attack that hit Axios succeeded partly because the malicious payload ran, deleted itself, and cleaned up its metadata before any audit tool could inspect it. The only reliable detection came from monitoring the network (unexpected C2 beacon traffic) and from proactive diff-based analysis of the package before installation.
Neither of those defenses is built into any mainstream AI coding agent by default today.
What You Should Do Right Now
If your team uses AI coding agents or automated CI/CD pipelines, these steps are practical and immediately actionable:
- Pin your dependencies. Use exact versions or commit lockfiles. Remove caret (^) and tilde (~) ranges that allow silent upgrades.
- Disable auto-update bots for critical packages. Dependabot and Renovate are useful, but high-value packages like HTTP clients, auth libraries, and crypto tools should require human review before any version bump.
- Run npm install --ignore-scripts in CI. Most legitimate packages don’t need postinstall hooks. Blocking them eliminates the primary delivery vector used in the Axios attack.
- Monitor network egress from your build environment. A build pipeline that suddenly makes outbound connections to unknown hosts is a red flag that static analysis will miss.
- Adopt SLSA provenance verification. Packages published via official GitHub Actions CI have cryptographic provenance. The Axios attack was immediately identifiable because the poisoned versions lacked this provenance.
- Treat your AI agents as privileged identities. Apply least-privilege access controls. An agent writing frontend code should not have credentials to your production secrets manager.
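Several of the steps above compose naturally in a single CI job. A sketch assuming GitHub Actions and npm 9.5 or later, where npm audit signatures verifies registry signatures and provenance attestations for packages whose publishers provide them:

```yaml
# Sketch of a hardened install job (GitHub Actions syntax; npm >= 9.5).
# `npm ci` refuses to deviate from the committed lockfile;
# --ignore-scripts blocks install hooks; `npm audit signatures` checks
# registry signatures and provenance attestations where available.
jobs:
  install:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci --ignore-scripts
      - run: npm audit signatures
```

A job like this would have refused the poisoned Axios versions twice over: the lockfile would not have resolved to them, and they lacked provenance.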
The Bottom Line
Agentic AI is genuinely powerful, and the benefits are real. But the security infrastructure around these systems hasn’t kept pace with how quickly they’ve been deployed. The Axios incident is a preview of what happens when nation-state attackers target the intersection of trusted open-source packages and AI agents that automatically install them.
The Axios attack window was under three hours. The payload deleted itself. The agent saw a clean audit. That’s the threat model we’re in now — and the defenses need to match it.
Further Reading
- Elastic Security Labs — How We Caught the Axios Supply Chain Attack
- SANS Institute — Axios NPM Supply Chain Compromise Emergency Briefing
- Microsoft Security Blog — Mitigating the Axios npm Supply Chain Compromise
- Cloud Security Alliance — MAESTRO Agentic AI Threat Modeling Framework
- Microsoft Open Source — Agent Governance Toolkit
