ROAR (Reliable Open Agent Relay) is an open, 5-layer protocol that gives every AI agent a common identity, a discoverable capability manifest, and a structured messaging format — all over the HTTP and WebSocket transports you already have.
When you want one AI agent to delegate a task to another today, you write glue code. You invent a request format. You write parsing logic for the response. If you need streaming output, you add WebSocket or SSE and invent those conventions too. Then you repeat the whole process for every new integration.
There is no standard. No common way for an agent to say "here is who I am and what I can do" — and no standard way for a caller to understand it. Every agent-to-agent integration is a custom contract maintained by hand. When an API changes, you find out in production.
ROAR solves this at the protocol level. A ROAR-compatible agent exposes three standard endpoints: a health probe, a machine-readable capability card, and a message handler — all using the same JSON envelope, the same auth pattern, and the same streaming protocol. One integration pattern. Works everywhere.
One {roar, from, to, intent, payload} envelope for every message. GET /roar/card reveals full capabilities. {"type":"delta"} events stream output in real time.
Six design choices that make ROAR the right foundation for multi-agent systems — whether you're building an orchestrator, a specialist worker, a research pipeline, or all three.
Every ROAR agent is identified by did:roar:agent:name — a Decentralized Identifier that is
unique, portable, and requires no central registry. When your orchestrator receives a message,
it can verify the sender's DID against their published agent card. No more trusting
arbitrary header values like X-Agent-ID: something.
GET /roar/card returns a machine-readable manifest: the agent's DID,
display name, supported capabilities (execute, delegate,
monitor, stream), protocol version, and streaming endpoint.
Any caller can qualify a ROAR agent in two HTTP calls — no documentation required.
The WebSocket endpoint /roar/ws
delivers {"type":"delta","delta":"..."} events as tokens arrive, then a
{"type":"done"} signal when complete. An orchestrator can pipe that stream
directly to a UI or another agent — no polling, no buffering hacks.
The same {roar, from, to, intent, payload} envelope works identically over
HTTP/REST for request-response tasks and over WebSocket for streaming. Your handler code
doesn't change between transports. Future transports (gRPC, NATS) can adopt the
same envelope without breaking existing agents.
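For concreteness, a complete envelope might look like the following (the DIDs and payload are illustrative examples, not values from the spec):

```python
import json

# Illustrative envelope following the {roar, from, to, intent, payload}
# shape described above. DIDs and task text are made-up examples.
envelope = {
    "roar": "1.0",
    "from": {
        "did": "did:roar:agent:orchestrator",
        "display_name": "Orchestrator",
        "capabilities": ["execute", "delegate"],
    },
    "to": {"did": "did:roar:agent:summarizer"},
    "intent": "execute",
    "payload": {"task": "Summarize this document"},
}

# The same JSON body is POSTed to /roar/message or sent over /roar/ws.
wire = json.dumps(envelope)
```

Because the envelope is plain JSON, the transport choice is purely operational: the handler on the other side parses the same structure either way.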
The roar-protocol package gives you the FastAPI router (mounts all four endpoints
automatically), Pydantic message models, DID utilities, WebSocket streaming, and a
RoarClient for sending messages to other agents. You can add full ROAR
support to an existing FastAPI app in under 10 lines.
ROAR is designed in layers — each with a single responsibility. Lower layers don't know about higher ones. This means you can implement partial support (L1–L3 only for a discovery service, L1–L5 for a full agent) and still be interoperable.
Every ROAR agent is identified by a Decentralized Identifier in the
form did:roar:agent:<name>. DIDs are self-sovereign — no central
authority issues them, and no central registry validates them. The agent card at
GET /roar/card is the identity manifest: it binds the DID to a display name,
capability list, and protocol version.
When an agent receives a message, it checks the from.did field.
If it needs to verify the sender, it fetches their card and compares the DID. This is
how trust is established in a decentralized multi-agent system — no shared secret
database, no auth server.
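That verification step reduces to a plain comparison; verify_sender and the dict shapes below are illustrative, not part of the SDK:

```python
# Hypothetical helper: check that a message's claimed sender DID matches
# the DID published in the card fetched from that sender's GET /roar/card.
def verify_sender(message: dict, fetched_card: dict) -> bool:
    claimed = message.get("from", {}).get("did")
    published = fetched_card.get("did")
    return claimed is not None and claimed == published

msg = {"from": {"did": "did:roar:agent:worker"}, "intent": "execute"}
card = {"did": "did:roar:agent:worker", "display_name": "Worker"}

assert verify_sender(msg, card)                               # DIDs match
assert not verify_sender(msg, {"did": "did:roar:agent:evil"}) # mismatch
```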
Before sending a task, an orchestrator should check two things: is the agent alive, and can it do what I need? ROAR answers both in two unauthenticated GET requests.
GET /roar/health returns {"status":"ok"} within milliseconds.
GET /roar/card returns the full capability manifest, including the
capabilities array — e.g. ["execute","delegate","stream"].
An orchestrator that checks the card before delegating will never send a streaming task
to an agent that doesn't support streaming.
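A minimal sketch of that pre-flight check, assuming the card shape described above (the two plain GETs that fetch health and card are omitted here):

```python
# Sketch of the capability check an orchestrator runs before delegating.
# `card` stands in for the JSON returned by GET /roar/card.
def can_handle(card: dict, needed: str) -> bool:
    return needed in card.get("capabilities", [])

card = {
    "did": "did:roar:agent:worker",
    "capabilities": ["execute", "delegate", "stream"],
}

assert can_handle(card, "stream")       # safe to send a streaming task
assert not can_handle(card, "monitor")  # pick a different agent instead
```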
ROAR doesn't invent a new wire format — it defines how to use HTTP and WebSocket
correctly for agent communication. For request-response: POST /roar/message
with an Authorization: Bearer <token> header and a JSON body.
For streaming: connect via WebSocket to /roar/ws, authenticate, then send
the same JSON envelope.
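Using only the standard library, the request-response call can be sketched like this; it builds (but does not send) the authenticated POST, and the URL and token are placeholders:

```python
import json
import urllib.request

# Placeholder envelope; see the envelope layer for the full field list.
envelope = {
    "roar": "1.0",
    "from": {"did": "did:roar:agent:caller"},
    "to": {"did": "did:roar:agent:worker"},
    "intent": "execute",
    "payload": {"task": "ping"},
}

# Build the authenticated POST described above. Sending is omitted:
# urllib.request.urlopen(req) would perform the actual request.
req = urllib.request.Request(
    "https://worker.example.com/roar/message",
    data=json.dumps(envelope).encode(),
    headers={
        "Authorization": "Bearer <token>",
        "Content-Type": "application/json",
    },
    method="POST",
)
```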
The spec also defines CORS requirements (so browser-based agents can call ROAR endpoints directly), connection timeout handling, and graceful WebSocket reconnection. You don't have to figure any of this out — it's in the spec.
Every ROAR message — in both directions — uses the same JSON envelope:
roar (protocol version), from (sender's DID, display name,
capabilities), to (recipient's DID), intent
(execute | delegate | monitor),
and payload (task content).
Because every message is self-describing, you can log entire conversations as JSON
arrays, replay them for debugging, route them based on intent, or
audit who sent what without any out-of-band metadata. Response envelopes mirror
the same structure with a status and result payload.
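One way to exploit the self-describing intent field is a plain dispatch table; the handler bodies below are stand-ins for real logic:

```python
# Route incoming envelopes by their "intent" field.
def handle_execute(payload: dict) -> dict:
    return {"status": "ok", "result": "ran " + payload["task"]}

def handle_delegate(payload: dict) -> dict:
    return {"status": "ok", "result": "forwarded"}

def handle_monitor(payload: dict) -> dict:
    return {"status": "ok", "result": "watching"}

ROUTES = {
    "execute": handle_execute,
    "delegate": handle_delegate,
    "monitor": handle_monitor,
}

def route(envelope: dict) -> dict:
    handler = ROUTES.get(envelope["intent"])
    if handler is None:
        return {"status": "error", "message": "unknown intent"}
    return handler(envelope["payload"])

resp = route({"intent": "execute", "payload": {"task": "ping"}})
```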
WebSocket /roar/ws delivers partial responses as they are generated —
no polling, no waiting for the full response to buffer. The stream uses three event types:
{"type":"delta","delta":"..."} for each token chunk,
{"type":"done"} when the response is complete, and
{"type":"error","message":"..."} for failures.
An agent receiving a stream can pipe it directly into a UI (token-by-token rendering, just like ChatGPT), forward it to another agent as its own stream, or write it to a file. The streaming layer is composable — chains of streaming agents are natively supported by the protocol.
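A minimal consumer for the three event types, driven here by a hardcoded event list instead of a live socket:

```python
# Accumulate delta events into a full response, stopping at "done"
# and surfacing "error" events as exceptions.
def collect(events) -> str:
    chunks = []
    for ev in events:
        if ev["type"] == "delta":
            chunks.append(ev["delta"])
        elif ev["type"] == "done":
            return "".join(chunks)
        elif ev["type"] == "error":
            raise RuntimeError(ev["message"])
    raise RuntimeError("stream ended without a done event")

# Simulated stream standing in for the /roar/ws event sequence.
events = [
    {"type": "delta", "delta": "Hel"},
    {"type": "delta", "delta": "lo"},
    {"type": "done"},
]
text = collect(events)  # "Hello"
```

A forwarding agent would yield each delta onward instead of buffering, which is what makes chains of streaming agents composable.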
ROAR is a general-purpose protocol. Here are common patterns — each one relies on ROAR's discovery, messaging, and streaming layers working together.
An orchestrator discovers each worker's capabilities, picks the best match for each sub-task, sends structured
ROAR messages, and streams results back to the user in real time. No custom
integration code for each worker — they all speak ROAR.
An observer agent using the monitor intent watches another agent's task progress
and sends status events upstream. ROAR's streaming endpoint lets it push incremental
status without polling. The observer knows it's a monitoring relationship from the
envelope's intent field.
Agent A hands a task to Agent B with intent: "delegate". Agent B may further delegate
to Agent C. The ROAR envelope carries context through the entire chain.
Each hop is logged and auditable via the from / to fields.
All five ROAR interactions — health check, agent card fetch, message send, WebSocket stream, and cURL export — point at the real ProwlrBot endpoint by default. Change the base URL to point at any ROAR-compatible agent you're running locally or in production.
GET /roar/health — the liveness probe.
A healthy ROAR agent returns {"status":"ok"} immediately. Use this before sending
a task to verify the endpoint is reachable and the agent process is running.
GET /roar/card — the agent discovery
manifest. No auth required. Returns the agent's DID, display name, capabilities, intents,
protocol version, and streaming endpoint. This is how agents discover each other without
a central registry.
POST /roar/message — send a full ROAR
envelope. Requires a Bearer token. The from.did is auto-generated for
this demo session. The agent executes the task and returns a structured ROAR response.
Watch the envelope preview update as you type.
The demo connects to wss://[host]/roar/ws
and sends a ROAR message. The agent generates a response token by token, and you see each
{"type":"delta","delta":"..."} event arrive in the log below as it happens.
This is Layer 5 of the protocol — real-time streaming between agents.
Export-ready curl commands for all
three ROAR endpoints. Commands update automatically as you change the base URL or token
in the other tabs. Pipe to jq . for pretty-printed output.
Three steps to make any Python agent speak ROAR. After step 3, your agent is discoverable by any other ROAR agent in the world — no registration, no API keys, no configuration files.
The roar-protocol package contains everything you need: the FastAPI router,
Pydantic message models (RoarMessage, RoarResponse),
DID generation utilities, WebSocket streaming support, and a RoarClient
for calling other ROAR agents from within your agent's handler.
# Install from PyPI
pip install roar-protocol

# or with Poetry
poetry add roar-protocol

# or with uv
uv add roar-protocol
Define an AgentCard that describes your agent — this becomes the response
to GET /roar/card. The capabilities list tells callers what
your agent supports. Mounting the RoarRouter automatically wires
/roar/health, /roar/card, /roar/message, and
/roar/ws — all four endpoints in one call.
Pick a DID of the form did:roar:agent:your-unique-name.
Names should be lowercase, hyphen-separated, and unique within your
deployment. The DID appears in every outgoing and incoming message envelope.
from fastapi import FastAPI
from roar_protocol import RoarRouter, AgentCard

app = FastAPI()

card = AgentCard(
    did="did:roar:agent:my-agent",
    display_name="My Agent",
    agent_type="agent",
    capabilities=["execute", "stream"],
)

router = RoarRouter(card=card)
app.include_router(router, prefix="/roar")
# That's it — /roar/health, /roar/card,
# /roar/message, and /roar/ws are now live.
import express from 'express';
import { RoarRouter } from 'roar-protocol';

const app = express();

const roar = new RoarRouter({
  did: 'did:roar:agent:my-agent',
  displayName: 'My Agent',
  capabilities: ['execute', 'stream'],
});

app.use('/roar', roar.router());
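The DID naming rule above can be checked with a small validator; the regex is our reading of "lowercase, hyphen-separated", not a pattern the spec publishes:

```python
import re

# Hypothetical validator for the did:roar:agent:<name> format:
# lowercase alphanumeric segments separated by single hyphens.
DID_RE = re.compile(r"^did:roar:agent:[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_did(did: str) -> bool:
    return DID_RE.fullmatch(did) is not None

assert is_valid_did("did:roar:agent:my-agent")
assert not is_valid_did("did:roar:agent:My_Agent")  # case/underscore rejected
```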
Register a handler with @router.on_message to handle
POST /roar/message requests. For streaming responses, use
@router.on_stream and yield RoarDelta objects —
the WebSocket layer handles chunked delivery automatically. To call another
ROAR agent from within your handler, use the built-in RoarClient.
The caller's Bearer token is available as msg.auth_token
inside the handler. Validate it against your user store before processing. Return a
401 response (or raise HTTPException(401)) for invalid tokens.
@router.on_message
async def handle(msg: RoarMessage) -> RoarResponse:
    task = msg.payload.get("task", "")
    result = await my_llm.run(task)
    return RoarResponse(
        status="ok",
        payload={"result": result},
    )
@router.on_stream
async def handle_stream(msg: RoarMessage):
    task = msg.payload.get("task", "")
    async for chunk in my_llm.stream(task):
        yield RoarDelta(delta=chunk)
    yield RoarDone()  # required — signals stream end
from roar_protocol import RoarClient

@router.on_message
async def delegate(msg: RoarMessage) -> RoarResponse:
    # Forward to a specialist agent
    async with RoarClient("https://specialist.example.com") as c:
        resp = await c.send(
            from_did="did:roar:agent:my-agent",
            task=msg.payload["task"],
            token="...",
        )
    return RoarResponse(status="ok", payload=resp.payload)
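Stripped of the framework pieces, the token check from the auth note above amounts to the following sketch; VALID_TOKENS stands in for your user store, and the 401 is represented as a plain dict rather than an HTTPException:

```python
# Hypothetical token store; in practice this would be a database lookup.
VALID_TOKENS = {"secret-token-1"}

def check_auth(auth_token):
    # Return a 401-shaped error for missing or unknown tokens,
    # or None when the caller is authorized.
    if auth_token not in VALID_TOKENS:
        return {"status_code": 401, "detail": "invalid token"}
    return None

assert check_auth("secret-token-1") is None          # authorized
assert check_auth("wrong")["status_code"] == 401     # rejected
```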
No. ROAR is an open protocol — ProwlrBot is one implementation of it.
You can implement ROAR in any language on any infrastructure. The Python SDK
(pip install roar-protocol) is the reference implementation, but
the spec itself is transport and runtime neutral.
No. ROAR uses HTTP and WebSocket as its transport — it runs on top of your existing API layer. It defines the shape and semantics of agent-to-agent messages, not how you build your own product API. Think of it like a protocol layer (like SMTP for email) rather than an API framework.
ROAR uses standard Bearer token auth: the Authorization: Bearer <token>
header on HTTP requests, and a {"type":"auth","token":"..."} message sent
first over WebSocket before the ROAR envelope. The spec doesn't prescribe what
the token is — it can be a JWT, an API key, or an OAuth token. Your handler validates it.
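The auth-first ordering over WebSocket can be sketched with a list standing in for the socket; the token and DIDs are placeholders:

```python
import json

frames = []

def ws_send(obj: dict) -> None:
    # Stand-in for a real websocket send() call.
    frames.append(json.dumps(obj))

# 1. Auth frame always goes first.
ws_send({"type": "auth", "token": "<token>"})

# 2. Then the normal ROAR envelope.
ws_send({
    "roar": "1.0",
    "from": {"did": "did:roar:agent:caller"},
    "to": {"did": "did:roar:agent:worker"},
    "intent": "execute",
    "payload": {"task": "ping"},
})

assert json.loads(frames[0])["type"] == "auth"
assert json.loads(frames[1])["intent"] == "execute"
```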
Yes. The v1.0 spec is stable and in production use by ProwlrBot. Breaking
changes follow semantic versioning — the roar field in every envelope
carries the version string, so agents can negotiate compatibility.
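Under semantic versioning, negotiation on the roar field reduces to a major-version comparison; this is a sketch of one reasonable policy, not text from the spec:

```python
# Agents sharing a major version are wire-compatible under semver,
# since breaking changes bump the major number.
def compatible(mine: str, theirs: str) -> bool:
    return mine.split(".")[0] == theirs.split(".")[0]

assert compatible("1.0", "1.2")       # minor bump: still compatible
assert not compatible("1.0", "2.0")   # major bump: renegotiate or reject
```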
Yes. Run prowlr app (if you have ProwlrBot installed) or mount the
RoarRouter on any local FastAPI app and point the demo above at
http://localhost:8088. For WebSocket streaming, your local endpoint
needs to be accessible from the browser — standard localhost development setup works.
MIT-licensed, published on PyPI, source on GitHub. Works with any FastAPI or async Python 3.9+ project.