The Rise of Autonomous AI Agents

AI agents are no longer limited to answering questions. Modern agents book appointments, make purchases, send emails, file documents, call APIs, and interact with external services on behalf of their owners. As agents gain access to more tools and higher-stakes actions, the question shifts from "can the agent do this?" to "should the agent do this?"

Traditional access control systems — role-based (RBAC), attribute-based (ABAC), or simple API key scoping — were designed for human users operating within well-defined application boundaries. They grant broad permissions upfront and assume the user exercises judgment about when to use them. AI agents don't have that judgment. They execute tool calls based on model outputs, which means every action needs independent evaluation.

Why AI Agents Need Authorization

Without authorization controls, an AI agent with access to a payment API could spend unlimited funds. An agent with file system access could overwrite critical data. An agent with email access could send messages to anyone. The risk is not that agents are malicious — it's that they lack the contextual judgment to know when an action is appropriate.

AI agent authorization addresses this by evaluating every action against a set of policies before it executes. The evaluation considers:

  • What the agent wants to do (action type, target resource)
  • How much it costs or what the risk level is
  • Who the counterparty is and their trust level
  • When and where the action is being attempted
  • Whether the owner has pre-approved this category of action

The result is a decision: allow the action, deny it, require human approval, or require additional authentication (step-up).
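
The evaluation factors above can be sketched as a toy decision function. The field names and thresholds here are illustrative assumptions, not the actual OpenLeash request schema:

```python
# Hypothetical sketch of the context an authorization check evaluates.
# Field names and thresholds are illustrative, not the OpenLeash schema.
from datetime import datetime, timezone

request = {
    "action": "purchase",                       # what the agent wants to do
    "resource": "vendor:acme/widgets",          # target resource
    "amount_usd": 42.00,                        # cost / risk input
    "counterparty_trust": "trusted",            # counterparty trust level
    "timestamp": datetime.now(timezone.utc).isoformat(),  # when
    "pre_approved_category": True,              # owner pre-approval
}

def decide(req: dict) -> str:
    """Toy evaluation producing one of the decision types."""
    if req["counterparty_trust"] == "blocked":
        return "DENY"
    if not req["pre_approved_category"]:
        return "REQUIRE_APPROVAL"
    if req["amount_usd"] > 100:                 # assumed step-up threshold
        return "REQUIRE_STEP_UP"
    return "ALLOW"

print(decide(request))  # ALLOW for this request
```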

The Policy-Based Approach

Policy-based authorization expresses rules as declarative documents rather than hardcoded logic. In OpenLeash, policies are YAML files that define conditions, constraints, and obligations for specific action types.

A policy might say: "Allow purchases under $50 from trusted vendors. Require approval for purchases between $50 and $500. Deny purchases over $500." Another policy could say: "Allow appointment bookings during business hours. Require step-up authentication for medical appointments."
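
The purchase policy described above could be captured as follows. This is a Python sketch of the evaluation logic only; real OpenLeash policies are YAML documents:

```python
def purchase_decision(amount_usd: float, vendor_trusted: bool) -> str:
    """Toy evaluation of the example purchase policy from the text.
    Illustrative only; real OpenLeash policies are declarative YAML."""
    if amount_usd > 500:
        return "DENY"                 # over $500: always denied
    if amount_usd >= 50:
        return "REQUIRE_APPROVAL"     # $50-$500: human approval
    return "ALLOW" if vendor_trusted else "REQUIRE_APPROVAL"
```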

Policies compose naturally. Multiple policies can apply to the same action, and the authorization engine evaluates all of them to produce a final decision. This makes it easy to layer organization-wide rules with agent-specific constraints without modifying application code.
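
One way to picture composition is a most-restrictive-wins merge. The precedence order below is an assumption for illustration, not a statement of how the OpenLeash engine actually combines decisions:

```python
# Sketch of combining decisions from multiple matching policies.
# Precedence is an assumption: the most restrictive decision wins.
SEVERITY = {"ALLOW": 0, "REQUIRE_APPROVAL": 1, "REQUIRE_STEP_UP": 2, "DENY": 3}

def combine(decisions: list[str]) -> str:
    # Fail closed if no policy matched the action.
    return max(decisions, key=SEVERITY.__getitem__) if decisions else "DENY"

# An org-wide hard cap layered with an agent-specific approval rule.
org_policy = lambda req: "ALLOW" if req["amount"] < 1000 else "DENY"
agent_policy = lambda req: "REQUIRE_APPROVAL" if req["amount"] >= 50 else "ALLOW"

req = {"amount": 200}
print(combine([org_policy(req), agent_policy(req)]))  # REQUIRE_APPROVAL
```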

Cryptographic Proof Tokens

Authorization decisions are only useful if they can be verified. When OpenLeash allows an action, it issues a PASETO v4.public proof token — a cryptographically signed document that contains the action details, the decision, and a timestamp. This token serves as proof that the agent was authorized by its owner at the time of the action.

Counterparties (the services receiving the agent's requests) can verify the token independently using the owner's public key. No callback to OpenLeash is required. This offline verification model means proof tokens work even when the authorization server is unreachable, and they don't create a centralized dependency.

PASETO (Platform-Agnostic Security Tokens) was chosen over JWT for its stronger security defaults: no algorithm negotiation, no "none" algorithm vulnerability, and mandatory authenticated encryption for local tokens.
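
The offline verification model rests on Ed25519 signatures, which is the scheme PASETO v4.public is built on. The sketch below shows the underlying sign-then-verify flow using the third-party `cryptography` package; the token framing is simplified (real PASETO adds a version header and pre-authentication encoding):

```python
# Sign a proof payload with Ed25519, then verify it offline.
# Simplified framing; not a full PASETO v4.public implementation.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Owner side: sign the proof payload.
owner_key = Ed25519PrivateKey.generate()
payload = json.dumps({
    "action": "purchase",
    "decision": "ALLOW",
    "iat": "2025-01-01T12:00:00Z",
}, sort_keys=True).encode()
signature = owner_key.sign(payload)

# Counterparty side: verify with only the owner's public key. No callback.
public_key = owner_key.public_key()
try:
    public_key.verify(signature, payload)
    print("proof valid")
except InvalidSignature:
    print("proof rejected")
```

Because verification needs nothing beyond the public key and the token itself, the counterparty can check proofs even when the authorization server is unreachable.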

Human-in-the-Loop Controls

Not every action should be automated. High-stakes decisions — large purchases, regulated filings, sensitive communications — benefit from human oversight. OpenLeash supports this through approval workflows where the agent pauses execution and waits for explicit owner authorization.

The approval flow works as follows:

  1. The agent requests authorization.
  2. The policy evaluates to REQUIRE_APPROVAL.
  3. OpenLeash creates an approval request.
  4. The owner is notified (via the web portal, webhooks, or polling).
  5. The owner approves or denies. If approved, OpenLeash issues a time-limited approval token that the agent uses to complete the action.

This pattern keeps humans in control of high-risk decisions while allowing routine actions to proceed automatically. The threshold between "auto-approve" and "require approval" is fully configurable through policies.
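
The create/notify/resolve/poll cycle can be simulated with an in-memory store. Everything here (the `ApprovalStore` class, the five-minute token window) is a hypothetical sketch, not the OpenLeash API:

```python
# Toy in-memory sketch of the approval workflow. Class and field names
# are illustrative; the 5-minute token lifetime is an assumption.
import secrets
import time

class ApprovalStore:
    def __init__(self):
        self.requests = {}

    def create(self, action: dict) -> str:
        req_id = secrets.token_hex(8)
        self.requests[req_id] = {"action": action, "status": "pending"}
        return req_id                      # owner is notified out of band

    def resolve(self, req_id: str, approved: bool) -> None:
        entry = self.requests[req_id]
        entry["status"] = "approved" if approved else "denied"
        if approved:                       # time-limited approval token
            entry["token"] = {
                "value": secrets.token_hex(16),
                "expires_at": time.time() + 300,
            }

    def poll(self, req_id: str) -> dict:
        return self.requests[req_id]

store = ApprovalStore()
req_id = store.create({"type": "purchase", "amount_usd": 250})
store.resolve(req_id, approved=True)       # owner approves via portal
result = store.poll(req_id)                # agent polls until resolved
assert result["status"] == "approved"
```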

How OpenLeash Implements It

OpenLeash is a local-first authorization sidecar — it runs alongside your agent as a lightweight HTTP server. The agent calls POST /v1/authorize before performing any risky action. OpenLeash evaluates the request against all applicable policies and returns a decision with an optional proof token.
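
An agent-side call might look like the sketch below, which builds and signs a request body before sending it to the sidecar. The body schema and the signature header name are assumptions for illustration; only the `/v1/authorize` path comes from the text above:

```python
# Hypothetical agent-side preparation of a signed authorization request.
# Body schema and header names are assumptions, not the OpenLeash API.
import base64
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

agent_key = Ed25519PrivateKey.generate()   # registered at agent setup

body = json.dumps({
    "agent_id": "agent-123",
    "action": {"type": "purchase", "amount_usd": 42.0},
}, sort_keys=True).encode()

signature = base64.b64encode(agent_key.sign(body)).decode()
headers = {
    "Content-Type": "application/json",
    "X-Agent-Signature": signature,        # hypothetical header name
}
# The agent would now POST body + headers to /v1/authorize and branch
# on the returned decision before executing the action.
```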

Key implementation details:

  • File-based state — policies, agents, owners, and keys are stored as files. No database required.
  • Ed25519 request signing — agents authenticate requests using Ed25519 keypairs, ensuring only registered agents can request authorization.
  • Deterministic evaluation — same input always produces the same output. No probabilistic logic, no model calls.
  • Append-only audit log — every decision is recorded in a JSONL log for compliance and debugging.
  • Multi-language SDKs — TypeScript, Python, and Go SDKs handle signing, authorization, and proof verification.
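
The append-only JSONL audit log from the list above is easy to picture: one JSON object per line, written in append mode and never rewritten. A minimal sketch, assuming a simple record shape:

```python
# Minimal sketch of an append-only JSONL audit log.
# The record fields are assumptions about what a decision entry contains.
import json
import os
import tempfile
from datetime import datetime, timezone

def append_decision(log_path: str, record: dict) -> None:
    record = {**record, "ts": datetime.now(timezone.utc).isoformat()}
    with open(log_path, "a") as f:          # append-only: never rewritten
        f.write(json.dumps(record, sort_keys=True) + "\n")

log_path = os.path.join(tempfile.mkdtemp(), "audit.jsonl")
append_decision(log_path, {"action": "purchase", "decision": "ALLOW"})
append_decision(log_path, {"action": "email.send", "decision": "DENY"})

with open(log_path) as f:
    entries = [json.loads(line) for line in f]
print(len(entries))  # 2
```

Because each line is an independent JSON object, the log can be tailed, grepped, and replayed for compliance review without any parsing state.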

The Authorization Lifecycle

A typical authorization flow follows these steps:

  1. Agent registration — the agent receives an invite, registers with an Ed25519 public key, and is bound to an owner.
  2. Policy definition — the owner writes YAML policies that define allowed actions, constraints, and escalation rules.
  3. Authorization request — the agent sends an action request to OpenLeash before executing.
  4. Policy evaluation — OpenLeash evaluates the request against all matching policies.
  5. Decision + proof — the engine returns a decision (ALLOW, DENY, REQUIRE_APPROVAL, REQUIRE_STEP_UP, or REQUIRE_DEPOSIT) and, if allowed, a signed proof token.
  6. Action execution — the agent proceeds (or waits for approval) and presents the proof token to the counterparty.
  7. Counterparty verification — the receiving service verifies the proof token offline using the public key.
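
Beyond checking the signature, a counterparty in step 7 would typically also check that the proof is fresh. The acceptance window below is an assumption, not an OpenLeash default:

```python
# Sketch of a counterparty freshness check on a verified proof token.
# The 60-second acceptance window is an assumption for illustration.
from datetime import datetime, timedelta, timezone

def is_fresh(issued_at: datetime, max_age_s: int = 60) -> bool:
    """Reject proofs issued too long ago, or claiming a future timestamp."""
    now = datetime.now(timezone.utc)
    return timedelta(0) <= (now - issued_at) <= timedelta(seconds=max_age_s)

recent = datetime.now(timezone.utc) - timedelta(seconds=10)
stale = datetime.now(timezone.utc) - timedelta(minutes=10)
print(is_fresh(recent), is_fresh(stale))  # True False
```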

When to Use AI Agent Authorization

AI agent authorization is valuable whenever an agent performs actions with real-world consequences: spending money, accessing sensitive data, communicating with external parties, or modifying system state. If you're building an agent that interacts with the world beyond simple text generation, authorization controls help ensure it acts within boundaries you define.

Explore specific scenarios on the use cases page, or dive into the documentation to start implementing.