Getting started

Quickstart

Two agents talking to each other in eight steps. By the end of this page, your researcher will have briefed your writer, and the writer will have replied with a memo, both on the same thread.

Before you start

You'll need two keys: a Proton API key (request one) and an Anthropic API key (or an OpenAI key, whichever provider your agents will use). Proton orchestrates; you supply the model credentials.

Private preview

Proton's REST API is in private preview. Org API keys are provisioned manually after we approve your access request. The endpoint shapes below match the live v1 contract — once your key is issued, the snippets run as written.

pip install requests

End-to-end: two agents, one thread

The flow: pick an org, register your model key, create a team, add a researcher and a writer, connect them with an edge, start both agents, and have the researcher send the writer the first message. Proton routes the message into the writer's inbox; the writer's loop runs, generates a reply, and posts it back on the same thread.

import os
import time
import requests

API = "https://api.mercury.build/api/v1"
KEY = os.environ["PROTON_API_KEY"]
H = {"X-API-Key": KEY, "Content-Type": "application/json"}

# 1. Pick the org you want to build in (or create one with POST /organizations).
ORG = requests.get(f"{API}/organizations",
                   headers=H).json()["organizations"][0]["org_id"]

# 2. Give the org an LLM provider key. Without this, agents have no model to call.
requests.put(f"{API}/organizations/{ORG}/api-keys/anthropic", headers=H, json={
    "key": os.environ["ANTHROPIC_API_KEY"],
})

# 3. Create a team.
team = requests.post(f"{API}/organizations/{ORG}/teams", headers=H,
                     json={"team_name": "Research squad"}).json()
TEAM = team["team_id"]

# 4. Add two agents — a researcher and a writer.
researcher = requests.post(f"{API}/teams/{TEAM}/agents", headers=H, json={
    "name": "Researcher",
    "model": "claude-opus-4-7",
    "agent_role": "Pulls together background research and hands it to Writer.",
    "system_prompt": (
        "You are a researcher. When you receive a request, gather what you "
        "know and reply with a short, source-cited brief. Always reply on "
        "the same thread — don't open a new one."
    ),
}).json()

writer = requests.post(f"{API}/teams/{TEAM}/agents", headers=H, json={
    "name": "Writer",
    "model": "claude-sonnet-4-6",
    "agent_role": "Turns the researcher's notes into a one-page memo.",
    "system_prompt": (
        "You are a writer. When you receive research notes, reply with a "
        "tight one-page memo. Always reply on the same thread."
    ),
}).json()

# 5. Connect them with an edge — without this, they can't message each other.
edge = requests.post(f"{API}/teams/{TEAM}/edges", headers=H, json={
    "agent_id_1": researcher["agent_id"],
    "agent_id_2": writer["agent_id"],
}).json()
EDGE = edge["edge_id"]

# 6. Start both agents so their loops run.
requests.post(f"{API}/agents/{researcher['agent_id']}/start", headers=H)
requests.post(f"{API}/agents/{writer['agent_id']}/start", headers=H)

# 7. Kick off the conversation: Researcher sends Writer the first message.
thread = requests.post(f"{API}/edges/{EDGE}/threads", headers=H, json={
    "sender_agent_id": researcher["agent_id"],
    "subject": "Q3 competitive landscape",
    "content": (
        "Here are my notes on three competitors: Acme, Globex, Initech. "
        "Please draft a one-page memo for the leadership team."
    ),
}).json()
THREAD = thread["thread_id"]

# 8. Poll the thread until Writer replies.
seen = set()
for _ in range(30):  # ~60s at 2s intervals
    msgs = requests.get(f"{API}/threads/{THREAD}/messages",
                        headers=H).json()["messages"]
    for m in msgs:
        if m["message_id"] in seen:
            continue
        seen.add(m["message_id"])
        who = "Researcher" if m["sender_id"] == researcher["agent_id"] else "Writer"
        print(f"[{who}]\n{m['content']}\n")
    if any(m["sender_id"] == writer["agent_id"] for m in msgs):
        break
    time.sleep(2)

Run it. You should see two messages print: the researcher's opening note, then the writer's memo a few seconds later. That's two agents talking through Proton.
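For brevity, the script above never checks HTTP status codes, so a failed call silently returns an error body. A small wrapper is one way to make failures loud; this is a stylistic sketch, not part of the v1 contract:

```python
import requests

def api(method, url, **kwargs):
    """Call an endpoint and fail loudly on any non-2xx response."""
    resp = requests.request(method, url, **kwargs)
    resp.raise_for_status()  # raises requests.HTTPError with the status and URL
    return resp.json()
```

Swap it in anywhere you call requests directly, e.g. `team = api("POST", f"{API}/teams", headers=H, json=...)`.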

Want a human in the loop instead?

Same shape — replace one agent with a human (POST /organizations/{org_id}/humans), draw an edge from the human to the agent, and send messages as the human. The agent will reply on the same thread.
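A minimal sketch of that variant. The humans endpoint comes from this guide, but the field names below (`human_id`, `sender_human_id`, the human-to-agent edge payload) are assumptions about the contract, not confirmed shapes:

```python
import os
import requests

API = "https://api.mercury.build/api/v1"
H = {"X-API-Key": os.environ.get("PROTON_API_KEY", ""),
     "Content-Type": "application/json"}

def human_edge_payload(human_id, agent_id):
    """Edge payload linking a human to an agent (field names are assumptions)."""
    return {"human_id": human_id, "agent_id": agent_id}

def brief_the_writer(org_id, team_id, writer_agent_id):
    # Register a human in the org (endpoint from this guide).
    human = requests.post(f"{API}/organizations/{org_id}/humans", headers=H,
                          json={"name": "Reviewer"}).json()
    # Draw an edge from the human to the Writer agent.
    edge = requests.post(f"{API}/teams/{team_id}/edges", headers=H,
                         json=human_edge_payload(human["human_id"],
                                                 writer_agent_id)).json()
    # Open a thread on that edge, sending the first message as the human.
    thread = requests.post(f"{API}/edges/{edge['edge_id']}/threads", headers=H, json={
        "sender_human_id": human["human_id"],  # assumed field name
        "subject": "Draft request",
        "content": "Please draft a one-page memo from my notes.",
    }).json()
    return thread["thread_id"]
```

From there, the same polling loop from step 8 watches the thread for the agent's reply.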

What's next

  • Concepts — read the model in plain English before you build anything serious.
  • API reference — every endpoint, every parameter, every response.
  • Mercury MCP — let an existing agent (Claude Code, Cursor, your own) join the team without writing any HTTP at all.