
When the Bots Run the Incident Response: What AI Agents Mean for Enterprise Security
March 31, 2026

An invoice-processing agent with broad financial system access gets manipulated through prompt injection (malicious instructions embedded in the data it reads) and begins initiating fraudulent payments, all under a shared service account with no clear audit trail. A customer-support agent with access to CRM, knowledge-base, and email tools starts combining records across those systems in ways no one anticipated, synthesizing sensitive insights that no individual data source would have revealed on its own. A DevOps agent, operating on reused human credentials, deploys unauthorized changes to production infrastructure and leaves logs that implicate a person rather than the agent responsible.
These are not edge cases. They are the predictable consequences of applying identity and access management systems designed for human users to autonomous AI agents — and they are the categories of failure that enterprises are already confronting.
Today, CoSAI is releasing Agentic Identity and Access Management, a practical framework for securing AI agents as they take on increasingly autonomous roles across the enterprise. It’s the latest output from our Workstream 4 on Secure Design Patterns for Agentic Systems, and it addresses a gap that is becoming impossible to ignore.
KEY TAKEAWAYS
- Every AI agent operating in your enterprise needs its own distinct, verifiable identity with short-lived, task-scoped credentials that follow a Zero Standing Privilege model. Not a shared account, not repurposed human credentials.
- When agents act on behalf of humans, the chain of authority, including who authorized what and when and through which intermediaries, must be traceable and preserved in immutable logs.
- Security controls must follow agents at every hop. Every API, every data system, and every tool the agent touches should enforce access independently. Organizations can and should build on their existing IAM infrastructure: this is an extension problem, not a rip-and-replace problem.
The Problem With Treating Agents Like Employees (or Like Software)
Agents break the assumptions traditional IAM was built on. They operate continuously, often without a human initiating a specific session. They are created and destroyed dynamically. They can act on behalf of users while also taking independent actions. They chain together with other agents in ways that can be difficult to track. And they do all of this across sensitive data, financial systems, and critical APIs.
Many organizations start by treating agents like service accounts. This is a mistake. Shared accounts obscure accountability and make it nearly impossible to answer the most basic security question after something goes wrong: Which agent did that, and under whose authority?
The paper identifies seven recurring threat themes that emerge from this mismatch, including over-permissioning, loss of actor clarity, shadow agents operating outside any registry, broken delegation chains, and agent collusion where two agents together perform actions neither could perform alone. These are not merely theoretical risks — they are structural consequences of applying static identity models to dynamic, non-deterministic systems.
What Good Looks Like
The framework is organized around a simple principle: treat AI agents as first-class identities, on par with human users and service accounts, with their own lifecycle, governance, and accountability.
Not every agent needs the same level of control. The paper introduces a capability–risk classification that maps agent use cases across two axes: what the agent can do (from simple lookups to multi-step, state-changing planning) and the sensitivity of the resources it touches (from public data to financial systems and PII). A FAQ bot and a payment-processing agent warrant fundamentally different control profiles — and the framework scales proportionally rather than imposing uniform overhead.
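As a rough sketch of how such a two-axis classification might be mechanized, the snippet below maps capability and sensitivity levels to a control profile. The level names and profile tiers are illustrative assumptions, not the paper's exact taxonomy:

```python
from enum import IntEnum

class Capability(IntEnum):
    """What the agent can do (illustrative levels)."""
    READ_ONLY = 1      # simple lookups
    SINGLE_ACTION = 2  # individual state-changing calls
    MULTI_STEP = 3     # autonomous, multi-step planning

class Sensitivity(IntEnum):
    """What the agent touches (illustrative levels)."""
    PUBLIC = 1         # public data
    INTERNAL = 2       # internal business data
    CRITICAL = 3       # financial systems, PII

def control_profile(cap: Capability, sens: Sensitivity) -> str:
    """Scale controls with combined risk rather than imposing uniform overhead."""
    risk = cap * sens  # coarse combined score, 1..9
    if risk <= 2:
        return "baseline"   # e.g. a FAQ bot over public docs
    if risk <= 4:
        return "standard"
    if risk <= 6:
        return "elevated"
    return "maximum"        # e.g. a payment-processing agent

assert control_profile(Capability.READ_ONLY, Sensitivity.PUBLIC) == "baseline"
assert control_profile(Capability.MULTI_STEP, Sensitivity.CRITICAL) == "maximum"
```

The point is proportionality: the classification decides how much of the machinery below a given agent actually needs.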
For agents that do warrant strong controls, the framework specifies several interlocking mechanisms:
Identity bound to code and model. Agent credentials are tied not just to a name in a registry but to the specific version of the code and model the agent is running, verified through a signed manifest. If the model is swapped or the code is altered, the attestation fails and the agent is blocked from high-impact actions. This binding is what distinguishes agentic IAM from conventional service-account management.
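A minimal sketch of this binding, with an HMAC standing in for a real PKI signature and hardware-backed attestation (all names here are hypothetical):

```python
import hashlib, hmac, json

SIGNING_KEY = b"registry-signing-key"  # stand-in for a real signing key

def manifest_for(code: bytes, model_version: str) -> dict:
    """Bind an agent identity to the exact code and model it runs."""
    return {
        "code_sha256": hashlib.sha256(code).hexdigest(),
        "model_version": model_version,
    }

def sign(manifest: dict) -> str:
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def attest(code: bytes, model_version: str, manifest: dict, signature: str) -> bool:
    """Fail closed: any change to code, model, or manifest blocks the agent."""
    if not hmac.compare_digest(sign(manifest), signature):
        return False  # manifest tampered with, or not signed by the registry
    return manifest == manifest_for(code, model_version)

# Register the agent, then verify it at runtime.
m = manifest_for(b"agent-v1 source", "model-2026-03")
sig = sign(m)
assert attest(b"agent-v1 source", "model-2026-03", m, sig)      # unchanged: passes
assert not attest(b"agent-v1 source", "model-2026-04", m, sig)  # model swapped: blocked
```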
Delegation with full lineage. When an agent acts on behalf of a human user, it carries an on-behalf-of (OBO) token containing both the agent’s identity and the user’s identity. At each hop in a multi-agent workflow, scope narrows — it never expands. If a delegation is revoked at any point in the chain, all downstream delegations are automatically invalidated. The result is an unbroken, auditable record of who authorized what, through which agents, at every step.
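A toy model of an OBO chain makes the two invariants concrete: scope only narrows, and revocation cascades. This assumes in-memory token objects rather than a real token service:

```python
from dataclasses import dataclass

@dataclass
class OboToken:
    """On-behalf-of token: agent identity + originating user + scopes."""
    agent: str
    user: str
    scopes: frozenset
    parent: "OboToken | None" = None
    revoked: bool = False

    def delegate(self, to_agent: str, requested: set) -> "OboToken":
        """Scope narrows at each hop -- it never expands."""
        return OboToken(to_agent, self.user,
                        self.scopes & frozenset(requested), parent=self)

    def is_valid(self) -> bool:
        """Revocation anywhere upstream invalidates the downstream chain."""
        tok = self
        while tok:
            if tok.revoked:
                return False
            tok = tok.parent
        return True

root = OboToken("invoice-agent", "alice",
                frozenset({"invoices:read", "payments:write"}))
child = root.delegate("ledger-agent", {"invoices:read", "ledger:admin"})
assert child.scopes == frozenset({"invoices:read"})  # extra requested scope dropped
root.revoked = True
assert not child.is_valid()                          # downstream delegation dies with it
```

Because every token keeps a pointer to its parent and the original user, walking the chain reproduces exactly the lineage the paper calls for: who authorized what, through which agents.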
Continuous authorization. Access is not a one-time decision made at login. For autonomous agents operating in dynamic environments, authorization is continuously re-evaluated based on current context: what the agent is doing, what risk signals the system is observing, and whether anything has changed since the last check.
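A sketch of per-request re-evaluation; the specific signals, threshold, and action names are illustrative assumptions, not the paper's policy language:

```python
def authorize(action: str, context: dict) -> bool:
    """Evaluated on every request, not once at login."""
    if context.get("attestation_ok") is not True:
        return False  # identity binding broken -> deny
    if context.get("risk_score", 1.0) > 0.7:
        return False  # anomaly signals observed -> deny
    if action == "payments:write" and not context.get("human_approval"):
        return False  # high-impact action requires a human in the loop
    return True

ctx = {"attestation_ok": True, "risk_score": 0.2}
assert authorize("invoices:read", ctx)
ctx["risk_score"] = 0.9            # conditions changed since the last check
assert not authorize("invoices:read", ctx)
```

The same call that succeeded a moment ago fails once the observed risk changes, which is the essential difference from a session-scoped grant.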
Enforcement at every hop. MCP servers, API gateways, and service meshes all serve as policy-enforcement boundaries. Each one terminates and validates agent tokens, evaluates policy per request, and forwards only scoped credentials downstream — never raw upstream tokens. An agent that clears your perimeter gateway and then operates unchecked across internal systems is not a secured agent.
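The per-hop pattern can be sketched as follows, with an in-memory token store standing in for a real gateway and token service:

```python
import secrets

TOKENS = {}  # token store shared by the enforcement points (in-memory stand-in)

def mint(principal: str, scopes: set) -> str:
    tok = secrets.token_hex(8)
    TOKENS[tok] = {"principal": principal, "scopes": frozenset(scopes)}
    return tok

def enforce_hop(token: str, required_scope: str, downstream_scopes: set) -> str:
    """Each boundary validates the incoming token, evaluates policy for this
    request, and forwards only a scoped credential -- never the raw token."""
    claims = TOKENS.get(token)
    if claims is None or required_scope not in claims["scopes"]:
        raise PermissionError(f"denied: {required_scope}")
    # Mint a new, narrower token for the next hop instead of passing ours on.
    return mint(claims["principal"], claims["scopes"] & frozenset(downstream_scopes))

edge = mint("invoice-agent", {"invoices:read", "vendors:read"})
inner = enforce_hop(edge, "invoices:read", {"invoices:read"})  # gateway -> service
assert inner != edge                                           # raw upstream token never forwarded
assert TOKENS[inner]["scopes"] == frozenset({"invoices:read"})
```

Even if the edge token leaks, each inner boundary still makes its own decision, which is the property the paragraph above is describing.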
The paper illustrates these mechanisms end-to-end through a detailed invoice-processing scenario (Section 4) that traces a single agent from deployment through identity assignment, delegation, authorization, logging, and incident response.
The Good News: You Don’t Need to Start Over
One of the most important findings in this paper is that organizations don't need to build separate, parallel security infrastructure for AI agents. The identity providers, OAuth/OIDC servers, policy engines, secrets-management platforms, and audit logging pipelines already in place can serve as the foundation, extended rather than replaced to handle non-human principals, delegation chains, and richer context. The paper lays out a three-phase adoption path:
Phase 1 – Visibility. Discover and register all agents as identities. Eliminate shared accounts. Establish immutable action logging. The outcome: no agent outside the identity perimeter, and clear actor attribution for every action.
Phase 2 – Contextual access. Introduce short-lived tokens and attribute-based policy for higher-risk agents, incorporating intent and context into authorization decisions. The outcome: no standing privilege for high-risk agents, and adaptive authorization that responds to changing conditions.
Phase 3 – Full Agentic IAM. Cross-domain delegation chains, continuous evaluation, human-in-the-loop for critical actions, and automated discovery of new or changed agents. The outcome: the ability to prove control on demand — no autonomous workload operating outside the control plane.
Each phase is cumulative, and organizations should begin Phase 1 as soon as agents are introduced. Delaying increases security and compliance exposure.
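The immutable action logging called for in Phase 1 can be approximated with a hash chain, where each entry commits to the one before it. This is a sketch of the idea, not the paper's prescribed mechanism:

```python
import hashlib, json

def append(log: list, agent: str, action: str, on_behalf_of: str) -> None:
    """Each entry hashes the previous one, so past records can't be
    silently edited without breaking every later hash."""
    prev = log[-1]["hash"] if log else "genesis"
    entry = {"agent": agent, "action": action,
             "on_behalf_of": on_behalf_of, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    prev = "genesis"
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

log = []
append(log, "invoice-agent", "payments:write", "alice")
append(log, "invoice-agent", "invoices:read", "alice")
assert verify(log)
log[0]["action"] = "nothing-to-see-here"  # tampering breaks the chain
assert not verify(log)
```

Recording the agent and the on-behalf-of user in every entry is what gives Phase 1 its promised outcome: clear actor attribution for every action.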
The Window Is Now
The organizations deploying autonomous agents most aggressively today are establishing patterns that will be difficult to change later. The efficiency gains from well-deployed agents are real. So are the risks from agents deployed without adequate identity and access controls. Retrofitting governance will be more costly than being thoughtful and building it now.
Read the full paper on the CoSAI website. CoSAI is an OASIS Open Project bringing together AI and security experts from organizations across the industry to develop practical, interoperable guidance for safe AI deployment.




