Interactive demo  ·  No backend required

The standard for
agent-to-agent communication

ROAR (Reliable Open Agent Relay) is an open, 5-layer protocol that gives every AI agent a common identity, a discoverable capability manifest, and a structured messaging format — over HTTP and WebSocket you already have.


Agents can't talk to each other — yet

When you want one AI agent to delegate a task to another today, you write glue code. You invent a request format. You write parsing logic for the response. If you need streaming output, you add WebSocket or SSE and invent those conventions too. Then you repeat the whole process for every new integration.

There is no standard. No common way for an agent to say "here is who I am and what I can do" — and no standard way for a caller to understand it. Every agent-to-agent integration is a custom contract maintained by hand. When an API changes, you find out in production.

ROAR solves this at the protocol level. A ROAR-compatible agent exposes four standard endpoints: a health probe, a machine-readable capability card, a message handler, and a WebSocket stream — all using the same JSON envelope, the same auth pattern, and the same streaming protocol. One integration pattern. Works everywhere.

Without ROAR
  • Custom JSON schema per agent
  • No standard way to discover capabilities
  • Different auth scheme for every service
  • Bespoke streaming implementation each time
  • No portable agent identity — trust the header
  • Every integration is one-off glue code
With ROAR
  • Standard {roar, from, to, intent, payload} envelope
  • GET /roar/card reveals full capabilities
  • Bearer auth + CORS defined by the spec
  • Built-in WebSocket streaming — {"type":"delta"} events
  • DID-based identity: every agent is verifiable
  • One pattern, any agent, any framework

Built for the agentic era

Six design choices that make ROAR the right foundation for multi-agent systems — whether you're building an orchestrator, a specialist worker, a research pipeline, or all three.

🪪
DID-based Identity
Every agent has a did:roar:agent:name — a Decentralized Identifier that is unique, portable, and requires no central registry. When your orchestrator receives a message, it can verify the sender's DID against their published agent card. No more trusting arbitrary header values like X-Agent-ID: something.
🔍
Agent Discovery
GET /roar/card returns a machine-readable manifest: the agent's DID, display name, supported capabilities (execute, delegate, monitor, stream), protocol version, and streaming endpoint. Any caller can qualify a ROAR agent in two HTTP calls — no documentation required.
⚡
Streaming First
Streaming is in the spec from day one, not bolted on later. WebSocket /roar/ws delivers {"type":"delta","delta":"..."} events as tokens arrive, then a {"type":"done"} signal when complete. An orchestrator can pipe that stream directly to a UI or another agent — no polling, no buffering hacks.
🔗
Transport Agnostic
The ROAR message envelope is transport-neutral. The same {roar, from, to, intent, payload} object works identically over HTTP/REST for request-response tasks and WebSocket for streaming. Your handler code doesn't change between transports. Future transports (gRPC, NATS) can adopt the same envelope without breaking existing agents.
🌐
Fully Open Standard
The spec, the Python SDK, the TypeScript reference implementation, and the docs are all MIT-licensed. No vendor lock-in. No SaaS dependency. No usage fees. Any agent framework can implement ROAR in an afternoon, and your existing agents will talk to it immediately.
📦
pip install roar-protocol
The roar-protocol package gives you the FastAPI router (mounts all four endpoints automatically), Pydantic message models, DID utilities, WebSocket streaming, and a RoarClient for sending messages to other agents. You can add full ROAR support to an existing FastAPI app in under 10 lines.

5 layers. One protocol.

ROAR is designed in layers — each with a single responsibility. Lower layers don't know about higher ones. This means you can implement partial support (L1–L3 only for a discovery service, L1–L5 for a full agent) and still be interoperable.

L5
Streaming
wss:// delta events
L4
Message Exchange
JSON envelope
L3
Transport
HTTP + WebSocket
L2
Discovery
health + card
L1
Identity
DIDs
L1
Identity
DIDs + agent cards

Every ROAR agent is identified by a Decentralized Identifier in the form did:roar:agent:<name>. DIDs are self-sovereign — no central authority issues them, and no central registry validates them. The agent card at GET /roar/card is the identity manifest: it binds the DID to a display name, capability list, and protocol version.

When an agent receives a message, it checks the from.did field. If it needs to verify the sender, it fetches their card and compares the DID. This is how trust is established in a decentralized multi-agent system — no shared secret database, no auth server.
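The verification step described above reduces to a simple comparison once the message and the fetched card are parsed. A minimal sketch, assuming both are plain dicts shaped like the ROAR envelope and agent card; the helper name is illustrative, not part of the SDK:

```python
def verify_sender(message: dict, fetched_card: dict) -> bool:
    """Compare the DID claimed in a message's `from` block against
    the DID published in the sender's own agent card."""
    claimed = message.get("from", {}).get("did")
    published = fetched_card.get("did")
    return claimed is not None and claimed == published

# Illustrative data — shapes follow the ROAR envelope and card
msg = {"from": {"did": "did:roar:agent:orchestrator"}, "intent": "execute"}
card = {"did": "did:roar:agent:orchestrator", "display_name": "Orchestrator"}
```

A mismatch (or a missing `from.did`) simply fails the check, and the receiver can reject the message before doing any work.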

GET /roar/card
L2
Discovery
Capability probing

Before sending a task, an orchestrator should check two things: is the agent alive, and can it do what I need? ROAR answers both in two unauthenticated GET requests.

GET /roar/health returns {"status":"ok"} within milliseconds. GET /roar/card returns the full capability manifest, including the capabilities array — e.g. ["execute","delegate","stream"]. An orchestrator that checks the card before delegating will never send a streaming task to an agent that doesn't support streaming.
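Once the two JSON responses are in hand, the qualification decision is a one-line predicate. A sketch, with response shapes mirroring the health and card payloads described in this section:

```python
def can_handle(health: dict, card: dict, needed: str) -> bool:
    """True if the agent reports healthy AND advertises the needed capability."""
    alive = health.get("status") == "ok"
    capable = needed in card.get("capabilities", [])
    return alive and capable

# Illustrative responses from GET /roar/health and GET /roar/card
health = {"status": "ok"}
card = {"did": "did:roar:agent:worker",
        "capabilities": ["execute", "delegate", "stream"]}
```

An orchestrator that runs this check before delegating never sends a streaming task to a non-streaming agent.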

GET /roar/health   GET /roar/card
L3
Transport
HTTP + WebSocket

ROAR doesn't invent a new wire format — it defines how to use HTTP and WebSocket correctly for agent communication. For request-response: POST /roar/message with an Authorization: Bearer <token> header and a JSON body. For streaming: connect via WebSocket to /roar/ws, authenticate, send the same JSON envelope.

The spec also defines CORS requirements (so browser-based agents can call ROAR endpoints directly), connection timeout handling, and graceful WebSocket reconnection. You don't have to figure any of this out — it's in the spec.
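Assembling the request-response call is therefore ordinary HTTP. A sketch of the pieces a caller puts together (the helper name is hypothetical; the URL, header, and envelope shape follow the spec as described above):

```python
import json

def build_message_request(base_url: str, token: str, envelope: dict):
    """Assemble the URL, headers, and body for POST /roar/message."""
    url = base_url.rstrip("/") + "/roar/message"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(envelope)

# Illustrative values — any HTTP client can send the result
url, headers, body = build_message_request(
    "https://agent.example.com", "my-token",
    {"roar": "1.0",
     "from": {"did": "did:roar:agent:caller"},
     "to": {"did": "did:roar:agent:worker"},
     "intent": "execute",
     "payload": {"task": "summarize this"}},
)
```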

POST /roar/message   WS /roar/ws
L4
Message Exchange
Structured envelopes

Every ROAR message — in both directions — uses the same JSON envelope: roar (protocol version), from (sender's DID, display name, capabilities), to (recipient's DID), intent (execute | delegate | monitor), and payload (task content).

Because every message is self-describing, you can log entire conversations as JSON arrays, replay them for debugging, route them based on intent, or audit who sent what without any out-of-band metadata. Response envelopes mirror the same structure with a status and result payload.
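For concreteness, a request envelope of this shape might look like the following (field values are illustrative; optional fields are defined by the spec):

```json
{
  "roar": "1.0",
  "from": {
    "did": "did:roar:agent:orchestrator",
    "display_name": "Orchestrator",
    "capabilities": ["execute", "delegate"]
  },
  "to": { "did": "did:roar:agent:summarizer" },
  "intent": "execute",
  "payload": { "task": "Summarize the attached report" }
}
```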

POST /roar/message
L5
Streaming
Real-time token delivery

WebSocket /roar/ws delivers partial responses as they are generated — no polling, no waiting for the full response to buffer. The stream uses three event types: {"type":"delta","delta":"..."} for each token chunk, {"type":"done"} when the response is complete, and {"type":"error","message":"..."} for failures.

An agent receiving a stream can pipe it directly into a UI (token-by-token rendering, just like ChatGPT), forward it to another agent as its own stream, or write it to a file. The streaming layer is composable — chains of streaming agents are natively supported by the protocol.
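A consumer of this stream only has to handle the three event types. A sketch of the accumulation logic, independent of any particular WebSocket library (the function name is illustrative):

```python
def consume_stream(events):
    """Fold a sequence of ROAR stream events into the full response text.
    Stops at the done signal; raises on an error event."""
    parts = []
    for event in events:
        if event["type"] == "delta":
            parts.append(event["delta"])
        elif event["type"] == "done":
            break
        elif event["type"] == "error":
            raise RuntimeError(event.get("message", "stream error"))
    return "".join(parts)

# Illustrative event sequence, as it would arrive over /roar/ws
events = [
    {"type": "delta", "delta": "Hel"},
    {"type": "delta", "delta": "lo"},
    {"type": "done"},
]
```

Because the fold is this simple, forwarding a stream to another agent is just re-emitting each delta event as it arrives.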

WS /roar/ws

What you can build with ROAR

ROAR is a general-purpose protocol. Here are common patterns — each one relies on ROAR's discovery, messaging, and streaming layers working together.

🎯
Orchestration
A coordinator agent fetches cards from available workers, checks their capabilities, picks the best match for each sub-task, sends structured ROAR messages, and streams results back to the user in real time. No custom integration code for each worker — they all speak ROAR.
🔬
Research pipelines
A research orchestrator queries a knowledge-retrieval agent, a summarizer, and a formatter in sequence. Each one receives a standard ROAR envelope with context from the previous step. Results chain together without custom parsing at each junction.
👁️
Monitoring agents
An agent with monitor intent watches another agent's task progress and sends status events upstream. ROAR's streaming endpoint lets it push incremental status without polling. The observer knows it's a monitoring relationship from the envelope's intent field.
⛓️
Delegation chains
Agent A receives a task it can't fully handle, checks its card, and delegates to Agent B using intent: "delegate". Agent B may further delegate to Agent C. The ROAR envelope carries context through the entire chain. Each hop is logged and auditable via the from / to fields.
🤖
Human-in-the-loop
A human-facing agent (like ProwlrBot) acts as a ROAR gateway: it receives tasks from users over chat, translates them into ROAR envelopes, sends them to specialist agents, streams the responses back to the user, and maintains conversation history — all without the specialist agents needing to know anything about the chat interface.
🔌
Third-party integrations
Wrap any third-party AI API (OpenAI, Anthropic, Gemini, local Ollama) in a thin ROAR adapter. It exposes a standard card and message endpoint. Now any orchestrator that speaks ROAR can call it without knowing which model is underneath.

Try it live

Five interactive panels — health check, agent card fetch, message send, WebSocket stream, and cURL export — all pointing at the real ProwlrBot endpoint. Change the base URL to point at any ROAR-compatible agent you're running locally or in production.

Base URL
What this does: calls GET /roar/health — the liveness probe. A healthy ROAR agent returns {"status":"ok"} immediately. Use this before sending a task to verify the endpoint is reachable and the agent process is running.
What this does: calls GET /roar/card — the agent discovery manifest. No auth required. Returns the agent's DID, display name, capabilities, intents, protocol version, and streaming endpoint. This is how agents discover each other without a central registry.
What this does: calls POST /roar/message with a full ROAR envelope. Requires a Bearer token. The from.did is auto-generated for this demo session. The agent executes the task and returns a structured ROAR response. Watch the envelope preview update as you type.
Bearer Token
Task
Intent — what kind of action are you requesting?
What this does: connects via WebSocket to wss://[host]/roar/ws and sends a ROAR message. The agent generates a response token by token, and you see each {"type":"delta","delta":"..."} event arrive in the log below as it happens. This is Layer 5 of the protocol — real-time streaming between agents.
Bearer Token
Task
What this does: generates copy-ready curl commands for all three ROAR endpoints. Commands update automatically as you change the base URL or token in the other tabs. Pipe to jq . for pretty-printed output.

Add ROAR to your agent

Three steps to make any Python agent speak ROAR. After step 3, your agent is discoverable by any other ROAR agent in the world — no registration, no API keys, no configuration files.

1
Install the package
The roar-protocol package contains everything you need: the FastAPI router, Pydantic message models (RoarMessage, RoarResponse), DID generation utilities, WebSocket streaming support, and a RoarClient for calling other ROAR agents from within your agent's handler.
Compatibility: Python 3.9+, FastAPI 0.95+, Pydantic v2. Works alongside any existing FastAPI routes — the router mounts at a path you choose.
# Install from PyPI (pick the command for your package manager)
pip install roar-protocol
poetry add roar-protocol
uv add roar-protocol
2
Mount the ROAR router
Create an AgentCard that describes your agent — this becomes the response to GET /roar/card. The capabilities list tells callers what your agent supports. Mounting the RoarRouter automatically wires /roar/health, /roar/card, /roar/message, and /roar/ws — all four endpoints in one call.
DID naming: use did:roar:agent:your-unique-name. Names should be lowercase, hyphen-separated, and globally unique within your deployment. The DID appears in every outgoing and incoming message envelope.
from fastapi import FastAPI
from roar_protocol import RoarRouter, AgentCard

app = FastAPI()

card = AgentCard(
    did="did:roar:agent:my-agent",
    display_name="My Agent",
    agent_type="agent",
    capabilities=["execute", "stream"],
)

router = RoarRouter(card=card)
app.include_router(router, prefix="/roar")

# That's it — /roar/health, /roar/card,
# /roar/message, and /roar/ws are now live.
import { RoarRouter } from 'roar-protocol';
import express from 'express';

const app = express();
const roar = new RoarRouter({
    did: 'did:roar:agent:my-agent',
    displayName: 'My Agent',
    capabilities: ['execute', 'stream'],
});
app.use('/roar', roar.router());
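The DID naming rule above (lowercase, hyphen-separated) can be enforced with a small check before you deploy. The regex is an illustration of the stated convention, not something the SDK requires:

```python
import re

# Matches did:roar:agent:<name> where <name> is lowercase, hyphen-separated
DID_PATTERN = re.compile(r"^did:roar:agent:[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_roar_did(did: str) -> bool:
    """Check a DID against the did:roar:agent:<name> naming convention."""
    return DID_PATTERN.fullmatch(did) is not None
```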
3
Handle messages and stream responses
Decorate a handler function with @router.on_message to handle POST /roar/message requests. For streaming responses, use @router.on_stream and yield RoarDelta objects — the WebSocket layer handles chunked delivery automatically. To call another ROAR agent from within your handler, use the built-in RoarClient.
Auth: the Bearer token is available as msg.auth_token inside the handler. Validate it against your user store before processing. Return a 401 response (or raise HTTPException(401)) for invalid tokens.
@router.on_message
async def handle(msg: RoarMessage) -> RoarResponse:
    task = msg.payload.get("task", "")
    result = await my_llm.run(task)
    return RoarResponse(
        status="ok",
        payload={"result": result},
    )
@router.on_stream
async def handle_stream(msg: RoarMessage):
    task = msg.payload.get("task", "")
    async for chunk in my_llm.stream(task):
        yield RoarDelta(delta=chunk)
    yield RoarDone()  # required — signals stream end
from roar_protocol import RoarClient

@router.on_message
async def delegate(msg: RoarMessage) -> RoarResponse:
    # Forward to a specialist agent
    async with RoarClient("https://specialist.example.com") as c:
        resp = await c.send(
            from_did="did:roar:agent:my-agent",
            task=msg.payload["task"],
            token="...",
        )
    return RoarResponse(status="ok", payload=resp.payload)

Common questions

Do I need ProwlrBot to use ROAR?

No. ROAR is an open protocol — ProwlrBot is one implementation of it. You can implement ROAR in any language on any infrastructure. The Python SDK (pip install roar-protocol) is the reference implementation, but the spec itself is transport and runtime neutral.

Does ROAR replace REST APIs or GraphQL?

No. ROAR uses HTTP and WebSocket as its transport — it runs on top of your existing API layer. It defines the shape and semantics of agent-to-agent messages, not how you build your own product API. Think of it as a protocol layer, like SMTP for email, rather than an API framework.

How does authentication work?

ROAR uses standard Bearer token auth: the Authorization: Bearer <token> header on HTTP requests, and a {"type":"auth","token":"..."} message sent first over WebSocket before the ROAR envelope. The spec doesn't prescribe what the token is — it can be a JWT, an API key, or an OAuth token. Your handler validates it.
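Over WebSocket, this means the client sends exactly two JSON text frames in order: the auth message first, then the envelope. A sketch of building those frames (framing only; the transport library and helper name are illustrative):

```python
import json

def ws_handshake_frames(token: str, envelope: dict) -> list:
    """Produce the two frames a ROAR client sends after connecting:
    the auth message first, then the message envelope."""
    return [
        json.dumps({"type": "auth", "token": token}),
        json.dumps(envelope),
    ]

# Illustrative values
frames = ws_handshake_frames(
    "my-token",
    {"roar": "1.0", "intent": "execute", "payload": {"task": "hi"}},
)
```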

Is v1.0 stable for production use?

Yes. The v1.0 spec is stable and in production use by ProwlrBot. Breaking changes follow semantic versioning — the roar field in every envelope carries the version string, so agents can negotiate compatibility.

Can I run ROAR locally for development?

Yes. Run prowlr app (if you have ProwlrBot installed) or mount the RoarRouter on any local FastAPI app and point the demo above at http://localhost:8088. For WebSocket streaming, your local endpoint needs to be accessible from the browser — standard localhost development setup works.


Available now

MIT-licensed, published on PyPI, source on GitHub. Works with any FastAPI or async Python 3.9+ project.

ROAR Protocol SDK

pip install roar-protocol

ProwlrBot (ROAR built-in)

pip install prowlrbot

Explore further