What is MCP?

The Model Context Protocol (MCP) is an open standard for connecting AI agents to external tools and data sources. An MCP server exposes a set of tools — database queries, API calls, file operations, web searches — that an AI agent can invoke. The agent discovers available tools, selects the appropriate one, and calls it with parameters derived from the conversation context.
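
For concreteness, MCP messages travel as JSON-RPC 2.0; a tool invocation from the agent looks roughly like this (the tool name and arguments here are illustrative, not from any particular server):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "db_query",
    "arguments": { "operation": "SELECT", "table": "orders" }
  }
}
```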

MCP solves the tool integration problem: instead of building custom connectors for each AI platform, tool providers implement the MCP interface once and it works across any MCP-compatible agent. This has led to rapid growth in available MCP servers covering everything from GitHub operations to Slack messaging to database management.

The Governance Gap

MCP provides a clean protocol for tool discovery and invocation, but it does not include an authorization layer. When an MCP server exposes a tool, any connected agent can call it without restriction. There is no built-in mechanism to:

  • Limit which tools a specific agent can use
  • Set cost or rate limits on tool invocations
  • Require human approval for sensitive operations
  • Audit which tools were called and why
  • Provide cryptographic proof of authorization to external services

This means an agent with access to a payment MCP server could initiate unlimited transactions. An agent with database access could execute destructive queries. The tool server trusts the agent completely, which is a reasonable default for development but dangerous in production.

How OpenLeash Adds MCP Authorization

OpenLeash sits between the AI agent and the MCP server as an authorization sidecar. Before any tool call reaches the MCP server, OpenLeash evaluates it against your policies and returns a decision. The integration works through OpenClaw, which provides a hook point for intercepting tool executions.

The architecture is straightforward:

  1. The AI agent requests a tool call through the MCP protocol
  2. OpenClaw intercepts the call before execution
  3. OpenClaw sends an authorization request to OpenLeash with the tool name, parameters, and context
  4. OpenLeash evaluates the request against all applicable policies
  5. If allowed, the tool call proceeds and OpenLeash issues a proof token
  6. If denied or requiring approval, the tool call is blocked and the agent is informed

This is transparent to the AI agent — it continues using the standard MCP protocol. The authorization layer is invisible except when a policy blocks or escalates an action.
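
The six steps above can be sketched as a small interception function. The `check` callable stands in for a real call to OpenLeash, and the request/response shapes are assumptions for illustration, not the documented API:

```python
# Minimal sketch of the intercept-then-authorize flow.
# check() stands in for a real authorization call to OpenLeash;
# the dict shapes here are assumptions, not the documented API.

def authorize_tool_call(tool_name, params, check):
    """Return (allowed, info) for a proposed MCP tool call."""
    decision = check({"action_type": tool_name, "payload": params})
    if decision.get("decision") == "ALLOW":
        # Tool call may proceed; keep the proof token for downstream use.
        return True, {"proof_token": decision.get("proof_token")}
    # DENY or REQUIRE_APPROVAL: block the call and surface the reason.
    return False, {"reason": decision.get("decision", "DENY")}


# Stub check mimicking a read-only database policy:
def stub_check(request):
    if (request["action_type"] == "db_query"
            and request["payload"].get("operation") == "SELECT"):
        return {"decision": "ALLOW", "proof_token": "tok-demo"}
    return {"decision": "DENY"}
```

Because the decision point is a single function boundary, the agent itself never changes; only the hook that forwards calls to the MCP server does.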

Example Policies for MCP Tools

Policies for MCP tool authorization follow the same YAML format as any OpenLeash policy. Here are common patterns:

allow-read-block-write.yaml:

```yaml
# Allow read-only database queries, block writes
name: database-read-only
rules:
  - action_type: db_query
    decision: ALLOW
    conditions:
      - expression: "action.payload.operation == 'SELECT'"
  - action_type: db_query
    decision: DENY
    conditions:
      - expression: "action.payload.operation in ['INSERT', 'UPDATE', 'DELETE']"
```
require-approval-for-payments.yaml:

```yaml
# Require human approval for any payment tool call
name: payment-approval
rules:
  - action_type: process_payment
    decision: REQUIRE_APPROVAL
    obligations:
      - type: HUMAN_APPROVAL
        config:
          notify: true
          timeout_minutes: 60
```
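
On the caller's side, a REQUIRE_APPROVAL decision with a timeout might be handled like this. The `poll` callable stands in for whatever mechanism reports the human's verdict; none of these names come from the OpenLeash API:

```python
# Hypothetical sketch: honoring a REQUIRE_APPROVAL decision with a
# bounded approval window. poll() is a stand-in for the real mechanism
# that reports the human's verdict (illustrative, not the OpenLeash API).
import time

def wait_for_approval(poll, timeout_minutes=60, interval_s=1.0):
    """Poll until the approval resolves or the window expires."""
    deadline = time.monotonic() + timeout_minutes * 60
    while time.monotonic() < deadline:
        verdict = poll()  # expected: "PENDING", "APPROVED", or "DENIED"
        if verdict != "PENDING":
            return verdict
        time.sleep(interval_s)
    return "TIMED_OUT"  # treat an expired window as a denial
```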

Setting Up MCP Authorization

To add OpenLeash authorization to your MCP setup:

  1. Install and start OpenLeash: npx openleash start
  2. Run the setup wizard: npx openleash wizard
  3. Write policies targeting your MCP tool action types
  4. Configure OpenClaw to call OpenLeash before tool execution (see integration guide)
  5. Test with the policy playground before deploying

The entire setup runs locally — no cloud service, no external dependencies. Policies are YAML files you version control alongside your agent code.

Beyond Tool-Level Controls

MCP authorization with OpenLeash goes beyond simple allow/deny per tool. Because policies evaluate the full context of each invocation, you can implement:

  • Parameter-level controls — allow a tool but restrict specific parameter values (e.g., allow email but only to approved domains)
  • Cost-based escalation — auto-approve cheap operations, require approval for expensive ones
  • Trust-based decisions — different policies for different counterparty trust levels
  • Time-based restrictions — allow certain tools only during business hours
  • Audit trail — complete JSONL log of every tool call, decision, and proof token
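
As one illustration, a parameter-level policy for the approved-domains email case might look like the following. The action type, payload field, and domain list are assumptions made to match the format of the earlier examples:

```yaml
# Illustrative: allow send_email only to approved recipient domains
name: email-domain-allowlist
rules:
  - action_type: send_email
    decision: ALLOW
    conditions:
      - expression: "action.payload.to_domain in ['example.com', 'partner.example.org']"
  - action_type: send_email
    decision: DENY
```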

For a deeper understanding of how policy evaluation works, see What is AI Agent Authorization? or the documentation.