When an AI agent does something it shouldn't — approves a refund it wasn't authorised to approve, sends a message to the wrong recipient, makes a decision that affects a customer's rights — the first question an auditor asks is: who authorised this, and can you prove it? Most businesses deploying AI agents today cannot answer that question. Not because they weren't careful. Because the infrastructure they're using was never designed to answer it.
API keys, OAuth tokens, shared service accounts — these are the identity mechanisms most AI agents run on. They were built for a different era of software: predictable, deterministic systems that made the same call every time, in a controlled sequence, with a known scope. AI agents are none of those things. They reason contextually, delegate to other agents, spawn sub-tasks, and make decisions based on runtime judgement. The identity infrastructure hasn't caught up — and under the EU AI Act, UK GDPR, and emerging enforcement across the US and Canada, that gap is becoming a liability.
The assumption that breaks
OAuth — the dominant identity standard for APIs — is built on a single assumption: the client's request embodies the resource owner's intent. When a human clicks "Connect with Google," OAuth captures that intent in a token. The token is the proof.
That assumption doesn't hold for agents. An agent booking travel on your behalf might spawn a dozen parallel API calls across different services in the time it takes you to blink. Each call looks like it came from your application. None of them individually look unusual. But taken together, they might represent a chain of decisions the agent made autonomously — decisions that weren't explicitly authorised by you, just inferred from your original prompt.
OAuth has no mechanism to capture that delegation chain. It can tell you that your application made a call. It cannot tell you why, under whose authority, or whether the agent that made the call was acting within its sanctioned scope.
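To make that concrete, here is a minimal sketch of what a shared OAuth credential actually transmits. The agent names, endpoints, and token value are hypothetical; the point is what the request carries and what it omits.

```python
# Hypothetical fan-out: three sub-agents spawned from one travel prompt,
# each calling a different service with the application's single OAuth token.
APP_TOKEN = "ya29.application-wide-bearer-token"

def build_request(agent_name: str, endpoint: str, payload: dict) -> dict:
    # What actually goes over the wire. Note that agent_name never reaches
    # the request -- neither do the delegation chain or the agent's authority.
    return {
        "url": endpoint,
        "headers": {"Authorization": f"Bearer {APP_TOKEN}"},
        "json": payload,
    }

calls = [
    build_request("flight-agent", "https://api.example.com/flights", {"route": "LHR-JFK"}),
    build_request("hotel-agent", "https://api.example.com/hotels", {"city": "New York"}),
    build_request("payment-agent", "https://api.example.com/charges", {"amount": 1240}),
]

# Server-side, every call presents identical credentials:
for call in calls:
    print(call["headers"])  # {'Authorization': 'Bearer ya29.application-wide-bearer-token'}
```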
Why shared keys make this worse
Most businesses running AI agents use a shared API key — one key for the whole application, passed to every agent that needs to call an external service. It works fine until something goes wrong. Then you have a log entry that says "key XYZ-4891 made this call at 14:32." It doesn't say which agent. It doesn't say what it was trying to accomplish. It doesn't say whether it was operating within policy.
Shared keys also violate the principle of least privilege at scale. If one agent in your system is compromised, the blast radius extends to every agent that shares the key. And because the key has no identity of its own — it's borrowed from the application — there's no way to revoke access for one agent without affecting all of them.
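A short sketch of the revocation problem, with hypothetical agent names: when every agent borrows the same key, the only lever you have cuts them all off at once.

```python
# Hypothetical shared-key setup: every agent borrows the same application key,
# so revocation is all-or-nothing.
SHARED_KEY = "XYZ-4891"
credentials = {
    "support-agent": SHARED_KEY,
    "refund-agent": SHARED_KEY,     # suppose this one is compromised
    "escalation-agent": SHARED_KEY,
}
revoked: set[str] = set()

def is_authorised(agent: str) -> bool:
    return credentials[agent] not in revoked

revoked.add(SHARED_KEY)  # the only lever available, and it hits everyone

print({agent: is_authorised(agent) for agent in credentials})
# {'support-agent': False, 'refund-agent': False, 'escalation-agent': False}
```

Per-agent identities turn that into a one-line, one-agent revocation instead.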
Under Article 12 of the EU AI Act, high-risk AI systems must maintain logs that allow post-hoc traceability of decisions. Under UK GDPR Article 22, automated decisions affecting individuals must be explainable and contestable. A shared API key log satisfies neither.
The compliance question you can't currently answer
Here is the question regulators, auditors, and claimants will ask:
"This decision was made by an AI agent. Can you prove which agent made it, what authority it was operating under, that the authority hadn't been exceeded, and that the record of what happened hasn't been altered?"
This is not a hypothetical. The EU AI Act's obligations for high-risk systems become enforceable in August 2026. The ICO's AI Auditing Framework is already operational. Colorado's AI Act has been in force since February. Each of these frameworks assumes you can produce a traceable, verifiable record of what your AI systems did and why. If your agents run on shared keys and OAuth tokens, that record does not exist in a form that satisfies any of them.
What proper agent identity looks like
The answer isn't a new compliance checkbox. It's a different architecture — one where identity and accountability are built into how agents operate, not bolted on as logging after the fact.
The emerging standard for agent identity uses Decentralised Identifiers (DIDs) — W3C-standardised cryptographic identifiers. Each agent gets its own key pair. When an agent takes an action, it signs it with its private key. That signature is verifiable by anyone who holds the agent's public key, without calling back to any central server, without trusting any intermediary, and — critically — without the signature being forgeable after the fact.
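Here is a minimal sketch of that model using Ed25519 keys from the Python `cryptography` package. A real deployment would wrap the public key in a W3C identifier such as did:key and manage key distribution; both are omitted here.

```python
# Minimal sketch of per-agent signing keys (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Each agent holds its own key pair rather than a borrowed application credential.
agent_private_key = Ed25519PrivateKey.generate()
agent_public_key = agent_private_key.public_key()

action = b"refund:order-1042:amount=49.99"
signature = agent_private_key.sign(action)

# Anyone holding the public key can verify offline, with no callback to an
# identity server. A forged or altered action fails verification:
agent_public_key.verify(signature, action)  # passes silently
try:
    agent_public_key.verify(signature, b"refund:order-1042:amount=4999.99")
except InvalidSignature:
    print("tampered action detected")
```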
Paired with Verifiable Credentials, this model gives you something genuinely useful for compliance: a tamper-proof, cryptographically signed receipt for every decision an agent makes. Not a log entry that could be altered. A credential that proves: this agent, with this identity, under this authority, took this action, at this time. Verifiable offline. Independently auditable years later, with no cooperation required from the organisation that issued it.
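A sketch of what such a receipt could look like. The field names and DIDs are illustrative assumptions rather than the full W3C Verifiable Credentials data model; the mechanism that matters is signing a canonical serialisation, so that any later edit breaks verification.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()

receipt = {
    "agent": "did:example:customer-service-7",           # hypothetical DID
    "authority": "delegated-by:did:example:supervisor",  # hypothetical field
    "action": "refund-approved",
    "subject": "order-1042",
    "issued_at": "2026-03-14T14:32:00Z",
}

# Sign a canonical serialisation so the same payload always verifies.
payload = json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
proof = issuer_key.sign(payload)

# Years later, an auditor holding only the public key and the receipt can
# confirm nothing was altered -- no cooperation from the issuer required.
issuer_key.public_key().verify(proof, payload)  # raises InvalidSignature if edited
```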
Policy enforcement that isn't just prompts
There's a second problem with current agent governance that is less discussed but equally important: most "policy" for AI agents lives in the system prompt. "Only access data you're authorised to access." "Never share customer information with third parties." These are instructions, not constraints. An agent can follow them until it decides — or is tricked into deciding — that the situation warrants an exception.
Cryptographic policy enforcement works differently. Rather than telling an agent what it's allowed to do, the infrastructure enforces it at the call level. An agent tagged "customer-service" physically cannot call an endpoint tagged "finance" — not because it was told not to, but because the credential it holds doesn't satisfy the policy gate. That enforcement is verifiable. It cannot be reasoned around. It produces an audit trail regardless of what the agent was instructed to do.
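A minimal sketch of such a gate, with hypothetical tag names and a stand-in audit sink. The decision depends only on the credential the agent presents, never on its prompt or reasoning.

```python
class PolicyViolation(Exception):
    pass

AUDIT_LOG: list[dict] = []  # in practice: an append-only, signed store

def gate(agent_id: str, credential_tags: set[str], endpoint_tag: str) -> None:
    # Runs in the infrastructure, before the call is forwarded. The agent's
    # instructions never enter into the decision.
    allowed = endpoint_tag in credential_tags
    AUDIT_LOG.append({"agent": agent_id, "endpoint": endpoint_tag, "allowed": allowed})
    if not allowed:
        raise PolicyViolation(f"{agent_id} holds no '{endpoint_tag}' credential")

gate("cs-agent-7", {"customer-service"}, "customer-service")  # permitted

try:
    gate("cs-agent-7", {"customer-service"}, "finance")
except PolicyViolation as err:
    print(err)  # the denial is recorded in AUDIT_LOG either way
```

Note that the denial lands in the audit trail alongside every approval, which is exactly the record an auditor will ask for.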
This is the distinction between governance by instruction and governance by architecture. The EU AI Act, in requiring "appropriate human oversight measures" and "technical robustness," is pointing at the latter — even if most businesses haven't yet realised that their current setup doesn't qualify.
What this means for your AI compliance posture
If you are deploying AI agents — in customer service, in document processing, in any workflow where an agent takes actions on behalf of a person or business — your compliance posture needs to account for agent identity, not just data handling.
Specifically, you need to be able to demonstrate:
- Each agent has a distinct identity — not a shared key or borrowed application credential.
- Every action produces a verifiable, tamper-proof record — not a log that could be edited or silently omitted.
- Authority is delegated explicitly and attenuates with each hop — a sub-agent cannot exceed the permissions of the agent that called it (sketched after this list).
- Policy enforcement is architectural, not instructional — constraints are enforced by the infrastructure, not by hoping the prompt holds.
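Here is a sketch of the attenuation rule from the third point, with hypothetical scope names: each hop grants the intersection of what is requested and what the delegator actually holds.

```python
def delegate(parent_scope: frozenset[str], requested: frozenset[str]) -> frozenset[str]:
    # A child receives the intersection of what it asks for and what its
    # parent holds -- authority can only narrow, never widen.
    return parent_scope & requested

root = frozenset({"read:orders", "write:refunds", "read:customers"})
planner = delegate(root, frozenset({"read:orders", "write:refunds"}))
worker = delegate(planner, frozenset({"write:refunds", "read:customers"}))

print(sorted(worker))  # ['write:refunds'] -- 'read:customers' never reached
                       # the planner, so it cannot reappear downstream
```

Because the rule is a pure intersection, no prompt injection or runtime judgement can widen a sub-agent's scope back out.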
None of this requires rebuilding your entire AI stack overnight. But it does require acknowledging that shared API keys and OAuth tokens are not an adequate foundation for production AI agents operating in regulated contexts — and that the gap between where most businesses are today and where regulators expect them to be is closing faster than most people realise.
The practical starting point
The most immediate step is an audit of what your agents are currently authorised to do, how that authority is recorded, and what evidence exists if a decision is challenged. For most businesses, the answer to the third question is "not much" — and that's the finding that needs to land with whoever is responsible for compliance, before enforcement makes it land somewhere less comfortable.
AI compliance isn't only about data protection and transparency disclosures. It's about being able to account for what your AI systems did. The infrastructure question — how identity, authority, and auditability are built into your agent architecture — is where that accountability either holds or doesn't.
Ops Intel helps businesses understand and close the gap between their current AI deployment and what compliance frameworks actually require. If you're not sure whether your agent infrastructure would satisfy an audit, that's the right starting point.
Review your AI compliance posture →