OAuth and OIDC Were Never Designed for AI Agents—Here's What We Built Instead

Abdel Fane
#oauth #oidc #ai-agents #identity #aim #security

OAuth 2.0 and OpenID Connect are two of the most important protocols in modern software. Together, they power authentication and authorization for billions of human users across the web. Every “Sign in with Google” button, every enterprise SSO flow, every third-party app integration—OAuth and OIDC make it work.

But here's the problem nobody in the industry is talking about: AI agents are not humans. And the protocols designed for humans don't just fall short for agents—they leave a dangerous identity gap that grows wider every time you deploy another autonomous agent.

The Protocols We Built for Humans

Let's give credit where it's due. OAuth 2.0 and OIDC solve real, important problems:

OAuth 2.0

Delegated authorization. A human says “I allow this app to access my calendar” without sharing their password. The app gets a token scoped to what the human approved.

Core question: “What did the human allow this app to do?”

OpenID Connect

Identity federation. A centralized identity provider tells a relying party “this is Jane Smith, here's her email, name, and profile photo.” Standard claims, standard flows, SSO everywhere.

Core question: “Who is this person?”

These protocols are elegant, battle-tested, and universally adopted. The problem isn't that they're bad. The problem is that they encode assumptions that don't hold for AI agents.

Five Assumptions That Break with AI Agents

1. “There's a human in the loop”

OAuth's authorization code flow redirects the user to a consent screen. OIDC's authentication flow requires a human to type credentials into a browser. These are core to how the protocols work.

AI agents don't have browsers. They don't click “Allow.” They run as background processes, often in containers or serverless functions, with no human present. The OAuth Client Credentials flow removes the human, but also removes per-agent identity—every agent sharing the same client ID is indistinguishable.
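To make the shared-identity problem concrete, here's a minimal sketch of what two different agents look like when they authenticate via Client Credentials. The client ID, secret, and agent names are invented for illustration:

```python
# Two different agents, one set of client credentials.
# From the authorization server's point of view, these requests are identical.
def client_credentials_request(agent_name: str) -> dict:
    # agent_name exists only in our own code; it never reaches the token endpoint
    return {
        "grant_type": "client_credentials",
        "client_id": "abc123",       # shared by every agent in the deployment
        "client_secret": "s3cr3t",   # shared by every agent in the deployment
        "scope": "read write",
    }

req_a = client_credentials_request("research-agent-7b")
req_b = client_credentials_request("customer-support-agent-3a")
assert req_a == req_b  # the server cannot tell the agents apart
```

Because the token endpoint only ever sees the shared `client_id`, every downstream audit log attributes both agents' actions to the same principal.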

2. “Identity belongs to the person, not the software”

OIDC ID tokens carry claims about humans: sub, email, name, picture. OAuth tokens identify which human granted access.

But when you deploy ten AI agents, each with different purposes and different risk profiles, the identity that matters is the agent's—not the developer who deployed it. You need to know that research-agent-7b is different from customer-support-agent-3a, with different capabilities, different trust levels, and different audit trails. Neither OAuth nor OIDC has a concept of per-agent identity.

3. “Permissions are set at grant time and don't need runtime enforcement”

OAuth scopes are decided when the human clicks “Allow.” After that, the token carries those scopes until it expires. The authorization server doesn't monitor what the app actually does with the access.

AI agents are dynamic. An agent approved for database:read might start attempting database:write if its prompt is manipulated. You need runtime capability enforcement that blocks unauthorized actions as they happen—not static scopes that assume the software will behave as expected.

4. “Whoever holds the token is authorized”

OAuth bearer tokens are exactly that—bearer credentials. Any process that holds the token can use it. If the token is stolen, the thief has the same access as the legitimate holder. There's no proof of who used the token.

For AI agents operating in production, this is a critical gap. If an agent's token is exfiltrated via prompt injection, the attacker has full access. You can't distinguish the attacker's requests from the agent's legitimate ones. What you need is cryptographic proof of identity tied to each agent—not transferable bearer tokens.

5. “Trust is binary: authenticated or not”

In OAuth and OIDC, you're either authenticated or you're not. The token is either valid or expired. There's no middle ground.

AI agents need continuous trust evaluation. An agent might be legitimate at deployment but drift over time as its behavior changes, its MCP servers are tampered with, or it starts accessing resources outside its normal pattern. Trust isn't binary for autonomous systems—it's a spectrum that changes in real time.

The Identity Gap in Practice

Here's what the gap looks like when you compare what OAuth/OIDC knows versus what you need to know about an AI agent:

What OAuth/OIDC tells you

  • A token was issued to client ID abc123
  • The human jane@corp.com authorized it
  • Scopes: read write
  • Token expires in 3600 seconds
  • Token is valid (or not)

What you actually need to know

  • Agent: research-agent-7b (unique identity)
  • Registered by: jane@corp.com
  • Capabilities: database:read, api:call (enforced)
  • Trust score: 0.94 (real-time, 8 factors)
  • MCP servers: 2 attested, 0 drifted
  • Behavior: normal (last 24h baseline match)
  • Every action cryptographically signed (non-repudiable)

The difference isn't incremental. It's a fundamentally different identity model for a fundamentally different type of software.

“But What About Client Credentials?”

This is the most common objection. OAuth 2.0's Client Credentials grant was designed for machine-to-machine communication. It removes the human from the flow. Problem solved, right?

Not for AI agents. Client Credentials gives you a shared secret for a service, not an identity for each agent. Consider:

Shared identity problem

All agents using the same client credentials look identical to the authorization server. You can't distinguish agent A from agent B, can't audit individual actions, can't revoke one without revoking all.

No capability enforcement

Client Credentials tokens carry scopes, but scopes are just strings—there's no runtime mechanism to block an agent from exceeding its declared capabilities.

No trust evaluation

The token is valid or it's not. There's no mechanism to say “this agent's trust score dropped because its behavior changed” and dynamically restrict access.

No non-repudiation

Client Credentials tokens are bearer tokens. If leaked, you can't prove which agent (or attacker) used them. There's no cryptographic proof of identity.

Client Credentials is fine for traditional service-to-service auth with static workloads. It was never designed for dynamic, autonomous AI agents that need individual identity, behavioral monitoring, and runtime enforcement.

What We Built: Agent Identity Management

We started AIM because we hit this gap ourselves. We were building agentic AI systems and realized that no existing protocol or platform answered the basic question: “How do you give an AI agent a verifiable, auditable, enforceable identity?”

AIM doesn't replace OAuth or OIDC. It provides the identity layer that these protocols were never designed to offer for non-human autonomous software. Here's what that looks like:

Cryptographic identity per agent

Every agent gets its own Ed25519 keypair. Actions are cryptographically signed. You can prove exactly which agent did what, and the signature can't be transferred or forged. This is fundamentally different from bearer tokens—it's proof of identity, not proof of possession.
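Here's a sketch of what per-agent signed actions look like using the widely available `cryptography` package. The action payload and agent name are illustrative, not AIM's actual wire format:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Each agent holds its own private key; only the public key is shared.
agent_key = Ed25519PrivateKey.generate()
public_key = agent_key.public_key()

action = b'{"agent": "research-agent-7b", "action": "database:read"}'
signature = agent_key.sign(action)

# Anyone with the public key can verify, but only the agent could have signed.
public_key.verify(signature, action)  # raises InvalidSignature on failure

tampered = action.replace(b"read", b"write")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("tampered action rejected")
```

Unlike a bearer token, the signature is bound to the exact action payload: stealing it gains an attacker nothing, because it doesn't verify against any other request.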

Runtime capability enforcement

Agents declare their capabilities at registration: database:read, api:call, file:write. Every action is checked against the agent's declared capabilities at execution time. If an agent tries to exceed its permissions—whether through prompt injection or behavioral drift—the action is blocked, not just logged.
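A toy model of that check, assuming nothing about the real AIM SDK beyond what's described above (the class and exception names are invented):

```python
class CapabilityError(PermissionError):
    pass

class Agent:
    """Toy model of runtime capability enforcement, not the AIM SDK API."""
    def __init__(self, name: str, capabilities: set[str]):
        self.name = name
        self.capabilities = frozenset(capabilities)  # fixed at registration

    def perform(self, action: str) -> str:
        # Checked at execution time, not at token-grant time.
        if action not in self.capabilities:
            raise CapabilityError(f"{self.name} is not allowed to {action}")
        return f"{self.name} performed {action}"

agent = Agent("research-agent-7b", {"database:read", "api:call"})
agent.perform("database:read")       # allowed
try:
    agent.perform("database:write")  # blocked, not just logged
except CapabilityError as err:
    print(err)
```

The key design point is that the gate sits in the execution path: a prompt-injected agent can ask for `database:write` all it wants, but the action fails before it reaches the database.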

Continuous trust scoring

An 8-factor trust score that updates in real time based on agent behavior, capability usage patterns, MCP server integrity, compliance status, and more. Trust isn't binary—it's a spectrum. An agent's access can be dynamically restricted as its trust score changes, without waiting for a token to expire.
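The shape of such a score can be sketched as a weighted average over normalized factors. The factor names, weights, and tier thresholds below are invented for illustration (and use fewer than AIM's eight factors):

```python
# Hypothetical factors, each a reading in [0, 1]; weights sum to 1.
FACTORS = {
    "behavior_baseline": 0.25,
    "capability_usage": 0.20,
    "mcp_integrity": 0.20,
    "compliance": 0.15,
    "auth_freshness": 0.10,
    "error_rate": 0.10,
}

def trust_score(readings: dict[str, float]) -> float:
    return sum(FACTORS[name] * readings.get(name, 0.0) for name in FACTORS)

def access_tier(score: float) -> str:
    # Trust as a spectrum: restrict dynamically instead of waiting for expiry.
    if score >= 0.9:
        return "full"
    if score >= 0.7:
        return "restricted"
    return "suspended"

readings = {name: 1.0 for name in FACTORS}
readings["mcp_integrity"] = 0.0  # e.g. an attested MCP server drifted
print(access_tier(trust_score(readings)))  # prints "restricted"
```

Note what happens in the example: the agent's OAuth-style token would still be perfectly valid, but its access is narrowed the moment one factor degrades.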

MCP server attestation

AI agents connect to MCP servers for tools, but those servers can be tampered with or change their tool surfaces without notice. AIM creates cryptographic attestation records for MCP servers and detects drift automatically. If a server's tools change, you know about it before your agents do.
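AIM's attestation records are cryptographic; the sketch below approximates the drift-detection idea with a plain content hash over a canonicalized tool manifest (the manifest shape is invented):

```python
import hashlib
import json

def attest(tool_manifest: list[dict]) -> str:
    # Canonical JSON so the same tool surface always hashes identically,
    # regardless of dict ordering or tool listing order.
    canonical = json.dumps(
        sorted(tool_manifest, key=lambda tool: tool["name"]),
        sort_keys=True,
    ).encode()
    return hashlib.sha256(canonical).hexdigest()

tools = [
    {"name": "search", "params": ["query"]},
    {"name": "fetch", "params": ["url"]},
]
baseline = attest(tools)

# Later: the server silently widens a tool's surface.
tools[1]["params"].append("headers")
assert attest(tools) != baseline  # drift detected before an agent trusts it
```

Comparing the current hash against the recorded baseline on every connection is what turns "the server changed its tools" from a silent event into an alert.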

One Line of Code

We didn't want AIM to be another heavyweight security platform that takes weeks to deploy. If securing an AI agent is harder than deploying one, nobody will do it.

from aim_sdk import secure

# That's it. Your agent now has:
# - Cryptographic Ed25519 identity
# - Declared capabilities (enforced at runtime)
# - Continuous trust scoring
# - Cryptographic audit trail
agent = secure("my-agent")

Behind that one line, the SDK registers the agent, generates the keypair, establishes capabilities, and begins monitoring. OAuth token management is handled internally (we use OAuth refresh token rotation for SDK session management—the protocol is excellent for that). But the agent's identity is cryptographic, not token-based.

Not Either/Or—And

We want to be clear: AIM does not replace OAuth or OIDC. The protocols are complementary. Most organizations will use all three:

| Layer | Protocol | Identity Subject | Use Case |
| --- | --- | --- | --- |
| Human authentication | OIDC | People | SSO, login, user profiles |
| Human authorization | OAuth 2.0 | Apps (on behalf of people) | Delegated access, scoped tokens |
| Agent identity | AIM | AI agents | Cryptographic identity, capabilities, trust |

In a typical enterprise architecture:

  • OIDC authenticates the developer logging into the AIM dashboard
  • OAuth 2.0 manages the API tokens that the AIM SDK uses internally
  • AIM provides the cryptographic identity, capabilities, and trust scoring for each AI agent

Different identity subjects need different identity protocols. Trying to force AI agent identity into OAuth/OIDC is like trying to authenticate a microservice with a password form—you can hack something together, but you're fighting the architecture.

The Industry Needs a New Identity Primitive

The AI agent market is projected to reach $47 billion by 2030. Every major enterprise is deploying agents. Every AI framework—LangChain, CrewAI, AutoGen, LangChain4j—makes it trivially easy to build agents that can read databases, call APIs, send emails, and modify infrastructure.

Yet the identity layer for these agents is still stuck in the OAuth/OIDC era—or worse, nonexistent. Most agents in production today have:

  • No individual identity (shared API keys or OAuth client credentials)
  • No capability enforcement (they can do anything their token allows)
  • No behavioral monitoring (no one knows if they're misbehaving)
  • No MCP attestation (the tools they connect to are unverified)
  • No audit non-repudiation (you can't prove which agent did what)

This is the identity gap. And closing it requires a new primitive—not a new profile on an existing protocol, not a new scope format, not a new OIDC claim. A purpose-built identity system for autonomous non-human software.

Close the Identity Gap

AIM is open source (Apache-2.0), self-hosted, and provides the identity layer that OAuth and OIDC were never designed to offer. Cryptographic agent identity, runtime capability enforcement, continuous trust scoring, and MCP attestation—in one line of code.