What OpenFGA Is

OpenFGA is a relationship-based access control (ReBAC) service originally created at Auth0/Okta and now a CNCF Incubating project. It implements the model described in Google's Zanzibar paper: permissions are stored as relationship tuples between users and resources, and authorization decisions are answered by traversing those relationships at query time.

OpenFGA runs as a stateful service with HTTP and gRPC APIs, backed by PostgreSQL, MySQL, or SQLite. Apache-2.0, written in Go. SDKs for Java, Node.js, Go, Python, and .NET. A web playground for modeling, a CLI for tuple management, and a Terraform provider for declarative configuration. Deployed in production at Auth0 and a growing set of SaaS products with sharing semantics.

The ReBAC Model in Practice

ReBAC encodes permissions as a graph. You define an authorization model — the relation types that exist between objects — and then store concrete tuples that say specific users have specific relations to specific resources. A simplified example:

model
  schema 1.1

type user
type folder
  relations
    define owner: [user]
    define editor: [user] or owner
    define viewer: [user] or editor
type document
  relations
    define parent: [folder]
    define viewer: viewer from parent

With that model and tuples like (user:alice, owner, folder:projects), OpenFGA can answer "can Alice view document:design-doc?" by walking the relationship graph: the document's parent folder is folder:projects, Alice is the owner of that folder, owner implies editor, and editor implies viewer. The answer is yes.
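The full set of tuples behind that walk, and the check they satisfy, written informally in the same (user, relation, object) form:

(user:alice, owner, folder:projects)
(folder:projects, parent, document:design-doc)

check(user:alice, viewer, document:design-doc)
  -> document's viewer is defined as viewer from parent; the parent tuple points at folder:projects
  -> on folder:projects, user:alice is owner; owner implies editor, editor implies viewer
  => allowed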

That model is excellent for sharing semantics — Google Drive, Notion, Linear, multi-tenant SaaS where users have nested permissions on resources they own, are members of, or have been granted access to. ReBAC handles those questions efficiently and correctly.

Why ReBAC Doesn't Quite Fit AI Agent Governance

AI agent governance asks a different question. The typical agent decision isn't "does the agent have a relation to the resource?" — it's "given this specific action with these attributes, in this context, should it proceed?" The relevant inputs are usually:

  • Cost or quantity of the action (pay $X, send Y emails, allocate Z tokens)
  • Cumulative spend or rate-limit windows (less than $500 in the last 24 hours)
  • Counterparty trust level (LOW / MEDIUM / HIGH based on domain or attestation)
  • Time-of-day or business-hours rules
  • Action type (purchases allowed, fund transfers denied, legal filings requiring approval)
  • Policy obligations (require human approval, require step-up auth, require deposit)

You can express some of these as ReBAC relations — agent has the can_purchase_under_100 relation, for example — but each cost threshold becomes a different relation, and the model degrades fast. Constraint evaluation, policy obligations, and contextual decisions aren't what ReBAC is designed for. Trying to bend OpenFGA to do agent governance ends up rebuilding most of OpenLeash inside relationship tuples, and the result is harder to read, harder to maintain, and slower to query.
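To make the degradation concrete, a threshold-per-relation model would look roughly like this (a hypothetical sketch, not a recommended pattern):

model
  schema 1.1

type agent
type account
  relations
    define can_purchase_under_100: [agent]
    define can_purchase_under_500: [agent]
    define can_purchase_under_1000: [agent]

Every new limit means another relation and another round of tuple writes, and there is still nowhere to express a rolling window like "less than $500 in the last 24 hours" or an obligation like "require human approval above the threshold".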

The reverse is also true: OpenLeash isn't designed for relationship-based access decisions. If your real question is "can this user view this document because they're a member of the project that owns its parent folder," OpenFGA is the right tool. They answer different questions.

What OpenLeash Adds for AI Agents

OpenLeash is designed specifically for autonomous AI agents acting on a person's or organization's behalf:

  • Agent identity — agents register with Ed25519 keypairs and sign every authorization request. OpenLeash verifies the signature, the timestamp, and a single-use nonce before evaluating any policy.
  • Approval workflow — when a policy returns REQUIRE_APPROVAL, OpenLeash creates a pending request that an owner can review in a portal and approve or deny. Approval tokens are single-use, action-scoped, and time-limited.
  • Cryptographic proof tokens — every approved action returns a PASETO v4.public token (Ed25519-signed) that any counterparty can verify offline with the public key. The token binds the decision to the specific action.
  • Append-only audit log — every authorization attempt, approval, and policy change is recorded in a JSONL log scoped per user, organization, and agent.
  • Policy drafts — agents can propose new policies for owner review when they hit an action type the existing rules don't cover.
  • Action-level YAML — policies use constraints (cost <= 100) and obligations (HUMAN_APPROVAL, STEP_UP_AUTH) that map directly onto agent decisions; a sketch follows this list.
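A hypothetical sketch of what an action-level policy of that shape could look like. The decision values and obligation names come from the list above; the field names and structure are illustrative, not the exact OpenLeash schema — see the documentation for the real format.

# Illustrative only: field names and layout are assumptions,
# not the actual OpenLeash policy schema.
- action_type: purchase
  constraints:
    - cost <= 100
    - cumulative_spend_24h <= 500
  effect: ALLOW
- action_type: purchase
  constraints:
    - cost > 100
  effect: REQUIRE_APPROVAL
  obligations:
    - HUMAN_APPROVAL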

When to Use Which

Use OpenFGA when your authorization questions are relationship-shaped — file sharing, project membership, document hierarchies, group-based permissions, multi-tenant access. When users have permissions on resources because of who they are, who they're related to, or what they own, and you need fast queries over that graph.

Use OpenLeash when your authorization questions are action-shaped — agent governance, spending limits, approval thresholds, counterparty trust, time-based rules. When the inputs to the decision are attributes of the action itself, not relationships between actors and resources.

Use both when an AI agent operates inside a system with rich sharing semantics. OpenFGA decides whether the agent — acting as user X — can even see the resource. OpenLeash decides whether the agent should take a specific side-effectful action against it. The two complement each other directly: ReBAC for the access boundary, OpenLeash for what's allowed inside it.
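A sketch of that two-layer flow in Go, assuming OpenFGA's HTTP check endpoint and a placeholder for the OpenLeash call. The OpenLeash endpoint, fields, and the agent name are illustrative, not its actual API; in the real flow the agent signs the request with its Ed25519 key and the response carries a decision plus a proof token.

// Sketch of the "use both" flow: OpenFGA answers the access-boundary question,
// then OpenLeash decides whether the specific side-effectful action may proceed.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// checkAccess asks OpenFGA whether the user the agent acts for can view the resource,
// using the HTTP check endpoint (POST /stores/{store_id}/check).
func checkAccess(fgaURL, storeID, user, relation, object string) (bool, error) {
	body, err := json.Marshal(map[string]any{
		"tuple_key": map[string]string{"user": user, "relation": relation, "object": object},
	})
	if err != nil {
		return false, err
	}
	resp, err := http.Post(fmt.Sprintf("%s/stores/%s/check", fgaURL, storeID),
		"application/json", bytes.NewReader(body))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var out struct {
		Allowed bool `json:"allowed"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return false, err
	}
	return out.Allowed, nil
}

// authorizeAction is a placeholder for the OpenLeash authorization request.
// Hypothetical: wire this to the OpenLeash API per its documentation; the real
// request is signed by the agent and returns ALLOW / DENY / REQUIRE_APPROVAL /
// REQUIRE_STEP_UP / REQUIRE_DEPOSIT plus a proof token.
func authorizeAction(agentID, actionType string, cost float64) (string, error) {
	return "REQUIRE_APPROVAL", nil
}

func main() {
	// Layer 1: access boundary (ReBAC).
	allowed, err := checkAccess("http://localhost:8080", "store-id",
		"user:alice", "viewer", "document:design-doc")
	if err != nil || !allowed {
		fmt.Println("access boundary: denied")
		return
	}
	// Layer 2: action-level decision for the agent acting on Alice's behalf.
	decision, _ := authorizeAction("agent:procurement-bot", "purchase", 249.00)
	fmt.Println("action decision:", decision)
}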

Could You Build OpenLeash on Top of OpenFGA?

OpenFGA is a less natural foundation for agent governance than OPA or Cedar. ReBAC's data model is relationships, not attributes or constraints, so encoding "approve if cost ≤ 100" requires either pre-computing every threshold as a separate relation or embedding constraint logic outside OpenFGA — at which point OpenFGA isn't really making the decision. You'd also need to add agent identity, request signing, the approval state machine, proof token issuance, and an audit trail, none of which ReBAC provides.

Pragmatically, the right approach is to use each tool for the layer it's designed for. ReBAC for who can access what; OpenLeash for what agents can do once they're inside.

Key Differences Summary

  • Authorization model — ReBAC (relationship-based) versus action-level constraints + obligations.
  • Question answered — "Does this user have a relationship that grants access to this resource?" versus "Should this agent perform this specific action right now, given the context?"
  • Policy shape — Authorization model (relation types) plus relationship tuples versus YAML constraints (cost <= 100) and obligations (HUMAN_APPROVAL).
  • Identity model — OpenFGA models users as opaque identifiers; the host system handles authentication. OpenLeash registers agents with Ed25519 keypairs and verifies every request signature.
  • Decision output — OpenFGA returns true or false. OpenLeash returns ALLOW / DENY / REQUIRE_APPROVAL / REQUIRE_STEP_UP / REQUIRE_DEPOSIT plus a cryptographic proof token.
  • Verification model — OpenFGA decisions are consumed by the calling service. OpenLeash proof tokens can be verified offline by any third party with the public key.
  • Human approval — OpenFGA has no approval workflow. OpenLeash has a built-in state machine for owner-driven approvals.
  • Best fit — OpenFGA: SaaS sharing semantics, multi-tenant access, document/project hierarchies. OpenLeash: AI agent governance, spending controls, action-level approvals, counterparty trust.
  • Stewardship — OpenFGA is CNCF Incubating, originally from Auth0/Okta. OpenLeash is a single-vendor open-source project (Apache-2.0) focused on AI agent authorization.

Learn More

See how OpenLeash compares to other authorization tools: OpenLeash vs OPA covers the general-purpose policy engine, and OpenLeash vs Cedar covers AWS's embedded policy language. For foundational concepts, read about AI agent authorization, AI agent guardrails, human-in-the-loop controls, and PASETO proof tokens. Or dive into the documentation to start implementing.