Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124


Powered by 1Password
Adding agent capabilities to enterprise environments fundamentally reshapes the threat model by introducing a new class of actors to identity systems. Problem: AI agents move within sensitive enterprise systems, access, retrieve data, call LLM tools, and execute workflows, often without the visibility or control that traditional identity and access systems are designed to provide.
AI tools and autonomous agents are proliferating across enterprises faster than security teams can tool or control. At the same time, most identity systems still accept static users, long-lived service accounts, and crude role assignments. They are not intended to represent delegated human authorities, short-term execution contexts, or agents operating in tight decision loops.
As a result, IT leaders need to step back and rethink the trust layer itself. This change is not theoretical. of NIST Zero Trust Architecture (SP 800-207) clearly states that “all entities, including applications and non-human entities, are invalid until approved and authorized.”
In an agency world, this means that AI systems must have their own public, verifiable identities, rather than operating through legacy or shared credentials.
"Enterprise IAM architectures are built to assume that all system identities are human, which means they rely on consistent behavior, clear intent, and direct human accountability to foster trust." says Nancy Wang, CTO at 1Password and Venture Partner at Felicis. “Agent systems break these assumptions. An AI agent isn’t a user you can train or periodically review. It’s software that can be copied, forked, scaled, and left running in tight execution loops across multiple systems. If we continue to treat agents like people or static service accounts, we lose the ability to clearly express who they are, how much authority they have, and what authority they have.”
One of the first places where these identity assumptions break down is in the modern development environment. An integrated development environment (IDE) has evolved beyond a simple editor into an orchestrator capable of reading, writing, executing, loading, and configuring systems. With an AI agent at the heart of this process, rapid injection transitions are not just an abstract possibility; they become a concrete risk.
Because traditional IDEs aren’t designed with AI agents as a core component, adding aftermarket AI capabilities introduces new types of risk that traditional security models aren’t built to account for.
For example, AI agents accidentally violate trust boundaries. A seemingly innocuous README may contain hidden directives that trick the helper into revealing its credentials during standard parsing. Draft content from untrusted sources can change agent behavior in unexpected ways, even if the content does not clearly resemble the instruction.
Input sources now go beyond managed files. Documentation, configuration files, file names, and tool metadata are accepted by agents as part of their decision-making processes and influence how they interpret the project.
The threat increases when we add highly autonomous, deterministic agents running with elevated privileges, capable of reading, writing, executing, or reconfiguring systems. These agents have no way to determine the context, whether an authentication request is legitimate, who delegated that request, or what boundaries should be placed around that activity.
"With agents, you can’t assume they have the ability to make good judgments, and they certainly don’t have a moral code." Wang says. "Their every move should be properly restricted and access to sensitive systems and what they can do within them should be more clearly defined. The tricky part is that they are constantly taking action, so you have to constantly limit them."
Traditional identity and access management systems operate on several key assumptions that agent AI violates:
Static privilege models fail with autonomous agent workflows: Conventional IAM grants role-based permissions that remain relatively stable over time. But agents perform chains of actions that require different privilege levels at different points. Least privilege can no longer be a set-and-forget configuration. It should now be wrapped dynamically with each move, with auto-expiration and update mechanisms.
Human responsibility for software agents is divided into: Legacy systems assume that each identity belongs to a specific person who can be held accountable for the actions taken, but agents completely blur that line. Now it is not clear when the agent is operating, under whose authority, this is already a big weakness. But the risk increases when that agent reproduces, mutates, or remains inactive after its original purpose has been fulfilled.
Behavior-based detection fails with persistent agent activity: While human users follow familiar patterns of logging in during business hours, accessing familiar systems, and performing actions relevant to their job functions, agents operate continuously across multiple systems simultaneously. This not only increases the potential for system damage, but also causes legitimate workflows to be flagged as suspicious by traditional anomaly detection systems.
Agent identities are often invisible to traditional IAM systems: Traditionally, IT teams can more or less configure and manage the identities that operate in their environment. But agents can dynamically create new identities, act through existing service accounts, or make credentials invisible to conventional IAM tools.
"It’s all about context, the intent behind the agent, and traditional IAM systems don’t have the ability to handle that." Wang says. "This convergence of disparate systems makes the problem broader than a single identity, requiring context and observation to understand not just who is acting, but why and how."
Enabling agency AI requires rethinking the enterprise’s security architecture from the ground up. A few key changes are needed:
Identity as a control plane for AI agents: Rather than treating identity as a security component among many organizations, organizations should recognize it as the primary control plane for AI agents. Major security vendors are already moving in this direction, with identity integrated into every security solution and stack.
Context-aware input as a requirement for agent AI: Policies should be more precise and specific, defining not only what an agent can access, but under what conditions they can access it. This means considering who is calling the agent, what device it’s running on, what time limits apply, and what specific actions are allowed on each system.
Zero-knowledge credentialing for autonomous agents: One promising approach is to keep credentials completely out of sight of agents. Using techniques such as agent autofill, credentials can be embedded into authentication flows without agents ever seeing them in plain text, similar to how password managers work for humans, but applied to application agents.
Audit requirements for AI agents: Traditional audit logs that track API calls and authentication events are not enough. Agent auditability requires capturing who the agent is, under whose authority it operates, what authorizations are granted, and the entire chain of actions taken to complete the workflow. It mirrors the detailed activity logging used for human workers, but must scale for software objects that perform hundreds of actions per minute.
Applying trust boundaries between people, agents and systems: Organizations need clear, enforceable boundaries that define what an agent can do when invoked by a specific person on a specific device. This requires separating intent from execution: understanding what the user wants the agent to achieve and understanding what the agent actually does.
As agentic AI is embedded into everyday enterprise workflows, the security challenge is not whether organizations adopt agents; is whether access control systems have evolved to keep pace.
Blocking AI at the perimeter is unlikely to scale, but it won’t extend legacy identity models. What is required is a transition to identity systems that can account for real-time context, delegation and accountability between humans, machines and AI agents alike.
“The step function for agents in production will not only come from smarter models,” Wang said. “This will come from predictable authority and enforceable trust boundaries. Enterprises need identity systems that can clearly articulate who an agent is acting for, what they’re allowed to do, and when that authority ends. Without that, autonomy becomes an unmanageable risk. With that, agents can be controlled.”
Sponsored articles are content produced by a company that paid for the post or has a business relationship with VentureBeat and is always clearly marked. Contact for more information sales@venturebeat.com.