Here’s a workflow I set up that runs every night while I’m asleep:
- A research agent scans a list of RSS feeds and developer blogs, picks three topics worth writing about, and drops a brief into shared memory
- A writer agent wakes up 30 minutes later, reads the brief, and drafts a short article in Markdown
- A publisher agent checks the draft, formats it, and drops it into a folder where my static site generator picks it up
By the time I have my morning coffee, there are three draft articles waiting for my review. I edit them, approve them, and they publish. The total time I spent: maybe ten minutes.
That’s a multi-agent pipeline. Let’s build one: a slightly simplified version that drafts one article per night instead of three.
This tutorial assumes you’ve completed the first agent tutorial.
What Multi-Agent Means in OpenClaw
In OpenClaw, each agent is a separate process with its own config, memory, and LLM connection. Agents communicate with each other through two mechanisms:
- Shared memory directories — Agent A writes a file, Agent B reads it
- ACP (Agent Communication Protocol) — direct message passing between running agents
For most workflows, shared memory is simpler and more reliable. ACP is better for real-time back-and-forth between agents.
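Under the hood, the shared-memory pattern is just files in a directory, so you can reason about it with ordinary file operations. Here is a minimal sketch of the handoff in plain Python (independent of OpenClaw; the shortened `shared/pipeline` path and the brief's contents are stand-ins for illustration). The one subtlety worth copying is the write-then-rename step, which keeps a reader from ever seeing a half-written file:

```python
import os
from pathlib import Path

# Stand-in for ~/.openclaw/shared/pipeline (path shortened for the sketch).
SHARED = Path("shared/pipeline")
SHARED.mkdir(parents=True, exist_ok=True)
brief = SHARED / "research-brief.md"

# Writer side of the handoff: write to a temp name, then rename.
# os.replace is atomic on the same filesystem, so a reader never
# observes a partially written brief.
tmp = brief.with_name(brief.name + ".tmp")
tmp.write_text("# Brief 2026-03-01\n- Topic: WASM on the edge\n")
os.replace(tmp, brief)

# Reader side: the brief either isn't there yet, or it is complete.
if brief.exists():
    print(brief.read_text())
```

This is the whole protocol: one side writes, the other side reads on its own schedule. Everything else in the pipeline below is scheduling and prompts.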
The Content Pipeline: Setup
Create Three Agents
openclaw agent create --name researcher
openclaw agent create --name writer
openclaw agent create --name publisher
Create a Shared Memory Directory
mkdir -p ~/.openclaw/shared/pipeline
This is where agents will leave notes for each other. The task prompts below refer to it by the shorter path /shared/pipeline; that's the same directory, exposed to each agent through the shared: entry in its config.
Configure the Research Agent
nano ~/.openclaw/agents/researcher/config.yaml
name: researcher
llm:
  provider: claude
  model: claude-sonnet-4-6
  apiKey: sk-ant-your-key
heartbeat:
  enabled: true
  schedule: "0 22 * * *" # 10pm every night
  task: |
    Check the RSS feeds in your memory/feeds.md file.
    Identify 3 topics with high developer interest from the last 24 hours.
    For each topic, write a 3-sentence brief covering:
    - What it is
    - Why developers care about it right now
    - A good angle for a ~800 word article
    Save the output to /shared/pipeline/research-brief.md with today's date.
memory:
  enabled: true
  directory: ./memory
shared:
  - path: ~/.openclaw/shared/pipeline
    access: write
Create a feeds file:
cat > ~/.openclaw/agents/researcher/memory/feeds.md << 'EOF'
# RSS Feeds to Monitor
- https://news.ycombinator.com/rss
- https://dev.to/feed
- https://tldr.tech/api/rss/devops
- https://changelog.com/feed
EOF
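The researcher's LLM does the actual feed reading, but if "scan the RSS feeds" sounds magical, it isn't: an RSS feed is just XML with one `<item>` per post. Here's a plain-Python sketch of extracting item titles (the sample XML is inline and invented; a real agent would fetch the feed over HTTP first):

```python
import xml.etree.ElementTree as ET

# A trimmed RSS 2.0 document standing in for a fetched feed.
sample_rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Dev Feed</title>
    <item><title>Shipping WASM at the edge</title></item>
    <item><title>Postgres 18 performance notes</title></item>
  </channel>
</rss>"""

def item_titles(rss_xml: str) -> list[str]:
    # Every post in an RSS 2.0 feed is an <item> under <channel>.
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(sample_rss))
```

The agent's job on top of this is purely editorial: deciding which of those titles are worth a brief.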
Configure the Writer Agent
nano ~/.openclaw/agents/writer/config.yaml
name: writer
llm:
  provider: claude
  model: claude-sonnet-4-6
  apiKey: sk-ant-your-key
heartbeat:
  enabled: true
  schedule: "30 22 * * *" # 10:30pm — 30 min after researcher
  task: |
    Read the research brief in /shared/pipeline/research-brief.md.
    Pick the most interesting topic from the brief.
    Write a full ~800 word article in Markdown format.
    Style: conversational, developer audience, concrete examples, no fluff.
    Save the draft to /shared/pipeline/draft-YYYY-MM-DD.md using today's date.
    Write a note in /shared/pipeline/status.md marking the draft as "ready for review".
memory:
  enabled: true
  directory: ./memory
shared:
  - path: ~/.openclaw/shared/pipeline
    access: readwrite
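Neither OpenClaw nor this tutorial prescribes a format for status.md, so agree on one in both agents' prompts. A minimal convention might look like this (entirely an assumption; any structure both the writer and the publisher can parse reliably will do):

```markdown
# Pipeline Status
- 2026-03-01: draft-2026-03-01.md — ready for review
- 2026-02-28: draft-2026-02-28.md — published to drafts
```

Keeping one line per draft makes it easy for the publisher's prompt to say "find lines marked ready for review" without ambiguity.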
Configure the Publisher Agent
nano ~/.openclaw/agents/publisher/config.yaml
name: publisher
llm:
  provider: claude
  model: claude-haiku-4-5-20251001 # cheaper model for formatting tasks
  apiKey: sk-ant-your-key
heartbeat:
  enabled: true
  schedule: "0 23 * * *" # 11pm — 30 min after writer
  task: |
    Check /shared/pipeline/status.md for drafts marked "ready for review".
    For each ready draft:
    1. Add proper frontmatter (title, description, tags, date) inferred from content
    2. Fix any formatting issues
    3. Copy the finished file to ~/blog/drafts/
    4. Update status.md to mark it as "published to drafts"
memory:
  enabled: true
  directory: ./memory
shared:
  - path: ~/.openclaw/shared/pipeline
    access: readwrite
Start All Three Agents
openclaw start --agent researcher &
openclaw start --agent writer &
openclaw start --agent publisher &
Or use the OpenClaw process manager:
openclaw fleet start
The fleet command starts all configured agents and shows a unified TUI with all three running simultaneously.
Understanding ACP: When Agents Talk Directly
Shared memory works for scheduled pipelines. But what if you want agents to have a real conversation?
That’s what ACP (Agent Communication Protocol) is for. ACP lets one running agent send a message directly to another running agent and wait for a response.
A Simple ACP Example
Agent A (coordinator) sends a task to Agent B (specialist):
# In the coordinator's task definition
- action: acp_send
  target: specialist
  message: |
    Please analyze this code snippet and identify potential security issues:
    {{ memory.latest_code_snippet }}
  await_response: true
  timeout: 60s
Agent B receives the message, processes it with its LLM, and sends back a response. Agent A continues its workflow with the response included.
ACP Provenance: The Identity Problem
Here’s something that sounds abstract until it bites you.
Imagine you have ten agents running. Agent A (trusted, has access to your database credentials) receives a message claiming to be from Agent B (your analysis agent).
Before v2026.3.8: Agent A had no reliable way to verify the message actually came from Agent B and not from something else claiming to be Agent B. This is the agent impersonation problem.
ACP Provenance (added in v2026.3.8): Every agent now signs its messages cryptographically. When Agent A receives a message claiming to be from Agent B, it can verify the signature before acting on it.
For a simple two-agent setup, this doesn’t matter much. For a production pipeline with agents that have access to sensitive data or can take real-world actions, it matters a lot.
You don’t need to configure anything — provenance is enabled by default in v2026.3.8. Just know it’s there and what it does.
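OpenClaw handles provenance for you, but the underlying idea is ordinary message signing. Here's a toy illustration using a shared-secret HMAC from the Python standard library. To be clear, this is a concept sketch, not OpenClaw's implementation: real ACP provenance presumably uses per-agent keys rather than one shared secret, and every name in this snippet is invented for the sketch:

```python
import hmac
import hashlib

SECRET = b"demo-shared-secret"  # stand-in for a real per-agent signing key

def sign(sender: str, body: str) -> str:
    # Bind the signature to both the claimed sender and the message body,
    # so neither can be swapped out without invalidating it.
    msg = f"{sender}\n{body}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify(sender: str, body: str, signature: str) -> bool:
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(sign(sender, body), signature)

sig = sign("agent-b", "please run the analysis")
print(verify("agent-b", "please run the analysis", sig))  # genuine: True
print(verify("agent-x", "please run the analysis", sig))  # impersonator: False
```

The second check is the whole point: a message that merely *claims* to be from agent-b fails verification unless it was signed with agent-b's key.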
Common Mistakes
Agents running at the same time and writing to the same file
If researcher and writer both run at 10pm and both try to write to the same file, one agent's write can clobber the other's, or a downstream agent can read a half-written file. Stagger your schedules with at least a 15-minute buffer, as we did in the examples above (we used 30 minutes).
Agent B reads before Agent A writes
Same problem from the other direction. Always schedule the upstream agent first and give it enough time to finish. For longer tasks, add a status file check:
# In writer's task:
First check /shared/pipeline/research-brief.md.
If it doesn't exist or hasn't been updated today, stop and try again tomorrow.
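The freshness rule in that prompt ("hasn't been updated today") boils down to an mtime comparison. For clarity, here's the same check sketched in plain Python; this is just the logic the prompt asks the agent to apply, not something OpenClaw runs for you:

```python
from datetime import date, datetime
from pathlib import Path

def updated_today(path: Path) -> bool:
    # True only if the file exists and was last modified on today's date.
    if not path.exists():
        return False
    return datetime.fromtimestamp(path.stat().st_mtime).date() == date.today()

brief = Path("research-brief.md")
brief.write_text("# Brief\n")
print(updated_today(brief))  # freshly written, so True
```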
Infinite loops in ACP conversations
If Agent A asks Agent B a question, and Agent B responds with a question back, and Agent A interprets that as needing a response… you can end up with agents messaging each other indefinitely.
Prevent this by being explicit in system prompts:
When you respond to ACP messages from other agents, end your response with
"[END]" to signal that you're done and don't need a further response.
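Prompt-level guardrails help, but you can also cap conversations mechanically. A generic pattern (not a documented OpenClaw feature, just the idea): carry a hop counter in each message's metadata and refuse to reply once it passes a limit, so even two maximally chatty agents terminate:

```python
MAX_HOPS = 4

def should_reply(message: dict) -> bool:
    # Refuse to continue a thread that has already bounced too many times.
    return message.get("hops", 0) < MAX_HOPS

def reply(message: dict, body: str) -> dict:
    # Every reply inherits and increments the hop counter.
    return {"body": body, "hops": message.get("hops", 0) + 1}

msg = {"body": "question?", "hops": 0}
for _ in range(10):
    if not should_reply(msg):
        break
    msg = reply(msg, "another question back")
print(msg["hops"])  # the exchange stops at MAX_HOPS: 4
```

The "[END]" convention and a hop limit compose well: the marker ends conversations gracefully, and the counter is the hard stop when the marker is ignored.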
What’s Next
Connect more platforms to your agents: OpenClaw Integrations: Connect WhatsApp, Telegram, Slack and More
Related Reading
OpenClaw Tutorial: Build Your First AI Agent in 15 Minutes
Build your first OpenClaw agent from scratch — connect Telegram, configure a heartbeat schedule, set up memory, and swap LLMs. A complete hands-on walkthrough with real scenarios.
OpenClaw Integrations: Connect WhatsApp, Telegram, Slack and More
Step-by-step guide to connecting OpenClaw to Telegram, WhatsApp, Slack, and Discord — including bot token setup, voice mode config, multi-platform routing, and privacy notes.
Agent Skills with Google Gemini: Function Calling Guide
Complete guide to Gemini function calling — define tools, handle function_call responses, return results, and compare syntax with Claude and OpenAI. Node.js.