
TL;DR

OpenClaw is a self-hosted AI gateway that connects 20+ messaging platforms — WhatsApp, Slack, Telegram, Discord, iMessage, Signal — to any LLM provider through a single local daemon. Its real innovation is SOUL.md, a plain markdown file that defines agent personality without code or fine-tuning. With 350,000+ GitHub stars and 300+ contributors, it is one of the fastest-growing open-source AI projects of 2026. But growth is not maturity. The terminal agent architecture it provides is powerful and dangerous in equal measure.

A central network hub with glowing fiber optic cables radiating outward to multiple devices, representing OpenClaw as a gateway between messaging platforms and AI


Everyone calls OpenClaw a coding agent. They are wrong.

OpenClaw does not compete with Cursor, Aider, or Claude Code. Those are code-editing tools with IDE integration and diff-aware workflows. OpenClaw is something different: infrastructure. It is a gateway daemon that sits between the messaging apps you already use and the AI models you want to talk to. The distinction matters because it changes what you should expect from it, what you should worry about, and whether it belongs in your stack at all.

What problem does OpenClaw actually solve?

OpenClaw solves the AI access fragmentation problem. The average developer in 2026 uses three or more AI interfaces daily: ChatGPT for general questions, Claude for coding, Gemini for research, plus Slack bots and Discord integrations. Each has its own context, its own conversation history, its own interface. None of them talk to each other. According to the JetBrains 2025 Developer Ecosystem survey (n=24,534), 85% of developers use at least one AI tool, and 62% use an AI coding assistant, agent, or code editor. But each tool is its own silo, and switching between them burns time.

OpenClaw collapses these into one layer. A single daemon process on your machine connects to WhatsApp, Telegram, Slack, Discord, iMessage, Signal, Matrix, Google Chat, Microsoft Teams, and a dozen more. You message your AI agent through the apps you already have open. The agent routes to whichever LLM provider you configure — Anthropic, OpenAI, Google, or a local model through Ollama. One conversation thread in Telegram can use GPT-5. Another in Slack can use Gemini. The gateway handles the routing. (One caveat: as of April 2026, Anthropic has restricted Claude model access for OpenClaw, so Claude availability depends on how that situation resolves.)
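
The `provider/model` string convention suggests how routing could work. The sketch below is illustrative only — the parser and type names are hypothetical, not OpenClaw's actual code — but it shows the idea: the prefix selects the provider, the remainder names the model.

```typescript
// Hypothetical model-string parser: "provider/model" -> routing target.
// The prefix convention mirrors the "openai/gpt-5.3" config example;
// the parser itself is a sketch, not OpenClaw's real implementation.
interface ModelRef {
  provider: "anthropic" | "openai" | "google" | "ollama";
  model: string;
}

function parseModelRef(ref: string): ModelRef {
  const i = ref.indexOf("/");
  if (i < 0) throw new Error(`expected "provider/model", got "${ref}"`);
  const provider = ref.slice(0, i);
  if (!["anthropic", "openai", "google", "ollama"].includes(provider)) {
    throw new Error(`unknown provider: ${provider}`);
  }
  return { provider: provider as ModelRef["provider"], model: ref.slice(i + 1) };
}

console.log(parseModelRef("openai/gpt-5.3")); // { provider: "openai", model: "gpt-5.3" }
```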

This is not a theoretical benefit. Context-switching between AI tools is friction that compounds. OpenClaw eliminates it by making AI accessible where you already are.

How does the gateway architecture work?

OpenClaw runs as a single daemon process (via launchd on macOS, systemd on Linux) that manages all channels, sessions, agents, and tools from one process. The architecture has four layers.

graph TD
    subgraph "Messaging Channels"
        WA[WhatsApp]
        TG[Telegram]
        SL[Slack]
        DC[Discord]
        iM[iMessage]
        SG[Signal]
        MX[Matrix]
        MORE[12+ more]
    end

    subgraph "OpenClaw Gateway"
        ADAPTER[Channel Adapters]
        ROUTER[Session Router]
        AGENT[Agent System]
        TOOLS[Tool Runtime]
    end

    subgraph "Model Providers"
        AN[Anthropic]
        OA[OpenAI]
        GO[Google]
        OL[Ollama / Local]
    end

    WA --> ADAPTER
    TG --> ADAPTER
    SL --> ADAPTER
    DC --> ADAPTER
    iM --> ADAPTER
    SG --> ADAPTER
    MX --> ADAPTER
    MORE --> ADAPTER

    ADAPTER --> ROUTER
    ROUTER --> AGENT
    AGENT --> TOOLS
    AGENT --> AN
    AGENT --> OA
    AGENT --> GO
    AGENT --> OL

Channel adapters normalize incoming messages from each platform into a common format. A WhatsApp message and a Slack message arrive at the router as the same data structure. This is where the 20+ platform count comes from — each adapter is a plugin that translates one platform’s API into OpenClaw’s internal protocol.
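
The adapter contract can be pictured as a small TypeScript interface. The names below (`NormalizedMessage`, `ChannelAdapter`) are illustrative assumptions, not OpenClaw's internal API, but they capture the normalization step: every platform payload collapses into one shape.

```typescript
// Illustrative sketch of the adapter contract; type and field names
// are hypothetical, not OpenClaw's real internals.
interface NormalizedMessage {
  channel: string;   // source platform, e.g. "slack"
  threadId: string;  // conversation/thread identifier
  senderId: string;  // platform-specific sender id
  text: string;      // message body, stripped of platform markup
}

interface ChannelAdapter {
  // Translate one platform's payload into the common format.
  normalize(raw: unknown): NormalizedMessage;
}

// Toy Slack adapter: maps a Slack-style event to the common shape.
const slackAdapter: ChannelAdapter = {
  normalize(raw: unknown): NormalizedMessage {
    const e = raw as { thread_ts?: string; ts: string; user: string; text: string };
    return {
      channel: "slack",
      threadId: e.thread_ts ?? e.ts, // fall back to message ts if not in a thread
      senderId: e.user,
      text: e.text,
    };
  },
};

const msg = slackAdapter.normalize({ ts: "1712.001", user: "U42", text: "hello" });
console.log(msg.channel, msg.threadId, msg.text); // slack 1712.001 hello
```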

The session router maps incoming messages to agents. Each conversation thread gets its own session with isolated state. You can run multiple agents simultaneously — a coding assistant in one Slack channel, a research agent in Telegram, a personal scheduler in WhatsApp. All through the same gateway.
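
A minimal sketch of that routing logic, assuming sessions are keyed by channel plus thread (the key scheme and `Session` shape are assumptions for illustration):

```typescript
// Hypothetical session router: one isolated session per (channel, thread) pair.
interface Session {
  agent: string;     // which agent handles this thread
  history: string[]; // per-session state, isolated from other threads
}

const sessions = new Map<string, Session>();

function route(channel: string, threadId: string, defaultAgent: string): Session {
  const key = `${channel}:${threadId}`;
  let s = sessions.get(key);
  if (!s) {
    s = { agent: defaultAgent, history: [] };
    sessions.set(key, s); // new thread gets a fresh, isolated session
  }
  return s;
}

const slackThread = route("slack", "1712.001", "coder");
slackThread.history.push("first message");
const telegramThread = route("telegram", "9001", "researcher");

console.log(slackThread === route("slack", "1712.001", "coder")); // true: same thread, same session
console.log(slackThread === telegramThread);                      // false: threads are isolated
```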

The agent system loads agent configuration (including SOUL.md), manages context windows, and routes inference requests to the configured model provider. Agents have workspace directories on disk where they read and write files, execute tools, and maintain state between sessions.

The tool runtime gives agents access to the host machine. In the primary session, this means full filesystem and shell access. In secondary sessions, tools can be sandboxed inside Docker containers with restricted permissions.

Installation takes about five minutes:

npm install -g openclaw@latest
openclaw onboard --install-daemon
openclaw dashboard

The minimal configuration points to a model provider:

{
  "agent": {
    "model": "openai/gpt-5.3"
  }
}

Everything else — channel connections, agent personality, tool permissions — is configured through the web dashboard at http://127.0.0.1:18789/ or directly in ~/.openclaw/openclaw.json.

What is SOUL.md and why does it matter?

SOUL.md is a plain markdown file that defines an agent’s persistent identity: personality, values, communication style, and behavioral boundaries. It is injected into the system prompt at the start of every session. This is the concept that separates OpenClaw from every other AI gateway.

Most AI integrations define agent behavior in code. You write Python or TypeScript that constructs system prompts, manages tool permissions, and encodes personality through programmatic logic. SOUL.md replaces all of that with a single file you can edit in any text editor.
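
The injection step itself is simple to picture. This is a hypothetical sketch, not OpenClaw's actual loader: the file is read verbatim and prepended to the system prompt at session start.

```typescript
import { readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical sketch of SOUL.md injection: read the file verbatim and
// prepend it to the base system prompt when a session starts.
function buildSystemPrompt(soulPath: string, basePrompt: string): string {
  const soul = readFileSync(soulPath, "utf8");
  return `${soul}\n\n${basePrompt}`;
}

// Demo with a throwaway SOUL.md in the temp directory.
const soulPath = join(tmpdir(), "SOUL.md");
writeFileSync(soulPath, "# Vibe\n- Terse, technical, no emoji");

const prompt = buildSystemPrompt(soulPath, "You have access to the tools listed below.");
console.log(prompt.startsWith("# Vibe")); // true
```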

The file has four sections:

| Section | Purpose | Example |
| --- | --- | --- |
| Core Truths | Foundational principles, problem-solving philosophy | “Prefer direct answers over hedging” |
| Boundaries | Hard limits on privacy, consent, safety | “Never access files outside ~/workspace” |
| Vibe | Conversational tone, personality, style | “Terse, technical, no emoji” |
| Continuity | Session persistence, memory management | “Summarize key decisions at session end” |

The power of this approach is portability. Copy a SOUL.md file to another OpenClaw instance and you get an identical agent. Version-control it in Git and you have a changelog of personality evolution. Share it with a team and everyone gets the same agent behavior. This is agent identity as configuration, not code.

The SOUL.md framework extends beyond OpenClaw. The specification (originally created by aaronjmars on GitHub) includes companion files: STYLE.md for voice patterns, SKILL.md for operating modes, and MEMORY.md for session continuity. Together, they define a complete agent persona in portable, human-readable markdown. Claude Code’s CLAUDE.md and similar configuration files share DNA with this approach — the idea that agent behavior should be declarative and auditable.

How does the security model hold up?

This is where honest assessment matters more than marketing. OpenClaw’s primary session has full host access: filesystem read/write, shell execution, network access. This is the same access model as terminal agents, and it carries the same risks.

For secondary sessions (non-main agents), OpenClaw supports Docker sandboxing:

  • Namespace isolation: separate filesystem and process space per agent
  • Restricted filesystem: workspace-only read/write access
  • Network control: sandbox containers default to no network, overridable in config
  • Recommended hardening: --cap-drop=ALL, --read-only, --user nobody
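
Combined, the recommended hardening flags produce a `docker run` invocation along these lines. This is a sketch: the image name, container workspace path, and mount layout are placeholders, not OpenClaw's documented sandbox setup.

```shell
# Illustrative hardened invocation for a sandboxed secondary agent.
# Image name and mount paths are placeholders, not OpenClaw defaults.
SANDBOX_FLAGS="--cap-drop=ALL --read-only --user nobody --network none"
WORKSPACE="$HOME/agent-workspace"

# Compose the command as a string so the flags are easy to audit;
# --tmpfs /tmp gives the read-only container a writable scratch area.
CMD="docker run --rm $SANDBOX_FLAGS --mount type=bind,src=$WORKSPACE,dst=/workspace --tmpfs /tmp openclaw-sandbox:latest"

echo "$CMD"
```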

The pairing system adds a human-in-the-loop gate. Unknown senders on any channel receive a pairing code that requires explicit approval before they can interact with agents. Group chats require @mentions to trigger agent responses, preventing accidental activation.

But Docker is not a security boundary against sophisticated attacks. The OWASP Top 10 for Agentic Applications (December 2025) ranks prompt injection as the #1 risk for AI agent systems. When your agent has shell access and receives messages from 20+ channels, each channel is an attack surface. A crafted message in a Slack channel could, in theory, instruct the agent to execute commands on the host.

OpenClaw mitigates this through sandboxing and allowlisting, but the fundamental trade-off is real: autonomy and security pull in opposite directions. The more tools you give an agent, the more useful and more dangerous it becomes. For personal use on hardware you control, the risk profile is manageable. For shared or enterprise environments, the security model needs significant hardening beyond defaults.

| Deployment | Risk Level | Recommended Config |
| --- | --- | --- |
| Personal workstation | Moderate | Default + pairing enabled |
| Shared team server | High | Docker sandbox + allowlist + no-network |
| Enterprise / production | Very High | Not recommended without additional security layers |

For a deeper look at the attack surface of agents with tool access, see Prompt injection defense.

Where does OpenClaw fit against other AI agent tools?

OpenClaw occupies a category that did not exist a year ago: the personal AI gateway. It is not competing with the tools most people compare it to.

| Tool | Category | Primary Use | Self-Hosted |
| --- | --- | --- | --- |
| OpenClaw | AI gateway | Multi-channel agent access | Yes (required) |
| Claude Code | Coding agent | Terminal-based code editing | No (Anthropic-hosted) |
| Cursor | IDE agent | Code editing with diff UI | No (cloud-dependent) |
| Aider | Coding agent | Git-aware code editing | Partial (local + API) |
| OpenHands | Coding agent | Browser + terminal automation | Yes |
| CrewAI | Orchestration | Multi-agent task coordination | Yes |

The multi-agent SDK wars are about orchestration paradigms: how agents coordinate and hand off work. OpenClaw is upstream of all of that. It is the access layer. You could run a CrewAI workflow behind OpenClaw, triggered by a Telegram message, and route the results back through the same channel.

This architectural position is both OpenClaw’s strength and its challenge. As infrastructure, it is model-agnostic, channel-agnostic, and framework-agnostic. But infrastructure without killer apps is just plumbing. The ecosystem needs more purpose-built agents and workflows on top of the gateway to justify the setup overhead for non-technical users.

The GitHub numbers tell a growth story: 350,000+ stars, 300+ contributors, and 5,000+ open issues. Those open issues are worth watching. They signal a project growing faster than its maintainers can keep pace with, which is common in fast-scaling open source but affects reliability. The project transitioned to foundation governance after creator Peter Steinberger joined OpenAI in February 2026, adding a leadership question mark to the roadmap.

When should you actually deploy OpenClaw?

OpenClaw makes sense in three scenarios and fails in two.

Deploy when:

  1. You want unified AI access across channels. If you already use WhatsApp, Telegram, and Slack daily and want AI available in all of them without switching apps, OpenClaw is the only open-source tool that consolidates this.

  2. You want data sovereignty. Everything runs on your hardware. No conversation data leaves your machine unless you configure a cloud model provider. Pair OpenClaw with Ollama and local models, and you have a fully air-gapped AI assistant accessible through your phone.

  3. You want to experiment with agent personality. SOUL.md makes it trivially easy to iterate on agent behavior. Edit a markdown file, restart the daemon, and your agent’s personality updates. This is a research playground for anyone interested in agent identity and behavioral alignment.

Skip when:

  1. You need a coding agent. OpenClaw’s shell access can execute code, but it has no IDE integration, no diff awareness, no git-aware workflows. For code editing, Aider, Claude Code, or Cursor are purpose-built and more reliable.

  2. You need enterprise security guarantees. The current sandboxing model is a reasonable starting point for personal use, not a production security boundary. If you need audit trails, role-based access control, or compliance certifications, OpenClaw is not there yet.

Getting started with a practical example

Here is a concrete setup: a personal research agent accessible through Telegram and Slack, using GPT-5.3 for inference.

# Install
npm install -g openclaw@latest
openclaw onboard --install-daemon

# Configure model
cat > ~/.openclaw/openclaw.json << 'EOF'
{
  "agent": {
    "model": "openai/gpt-5.3"
  }
}
EOF

Create a SOUL.md for a research-focused agent:

# Core Truths
- You are a research assistant. Your job is finding primary sources, not opinions.
- Cite every claim. If you cannot find the source, say so explicitly.
- Prefer peer-reviewed papers and official documentation over blog posts.

# Boundaries
- Never fabricate citations or statistics.
- Never access files outside ~/research-workspace.
- Ask before executing any shell command that modifies files.

# Vibe
- Direct and concise. No preamble, no trailing summaries.
- Use bullet points for multi-part answers.
- When uncertain, quantify the uncertainty.

# Continuity
- At session end, write a one-paragraph summary to ~/research-workspace/session-log.md.
- Track open questions across sessions.

Connect channels through the dashboard at http://127.0.0.1:18789/, pair your Telegram and Slack accounts, and you have a research agent accessible from your phone and desktop through apps you already use.

FAQ

What is OpenClaw? OpenClaw is an open-source, self-hosted AI assistant gateway written in TypeScript and Node.js. It connects 20+ messaging platforms — WhatsApp, Slack, Telegram, Discord, iMessage, Signal, Matrix — to any LLM provider (Anthropic, OpenAI, Google, Ollama) through a single daemon process running on your hardware. It is MIT-licensed with 350,000+ GitHub stars.

How is OpenClaw different from ChatGPT or Claude? ChatGPT and Claude are AI models accessed through their own interfaces. OpenClaw is infrastructure that sits between your existing messaging apps and any AI model. You message your AI agent through WhatsApp, Slack, or Telegram — the same apps you already use — instead of switching to a separate AI interface. OpenClaw routes messages, manages sessions, and enforces agent personality through SOUL.md.

What is SOUL.md? SOUL.md is a plain markdown file that defines an AI agent’s persistent identity, personality, values, and behavioral boundaries. It is injected into the system prompt at the start of every session. Unlike fine-tuning or prompt engineering in code, SOUL.md is version-controllable, portable across servers, and editable with any text editor. Copy a SOUL.md file to another OpenClaw instance and you get an identical agent.

Is OpenClaw secure enough for production use? OpenClaw’s primary session has full host access (filesystem, shell, network). Secondary sessions can be sandboxed via Docker with namespace isolation, restricted filesystem access, and optional network blocking. For personal use on trusted hardware, this is reasonable. For shared or enterprise environments, the security model needs hardening: cap-drop ALL, read-only filesystems, and no-root execution at minimum. Docker is not a perfect security boundary against sophisticated prompt injection.

Can I use OpenClaw with local models instead of cloud APIs? Yes. OpenClaw supports Ollama and other local model providers alongside cloud APIs from OpenAI and Google. You configure the model provider in openclaw.json. Running local models means no data leaves your hardware, but inference speed depends on your GPU and RAM. Note that Anthropic restricted Claude access for OpenClaw in April 2026, so provider availability can shift.

Want to work together?

I take on projects, advisory roles, and fractional CTO engagements in AI/ML. I also help businesses go AI-native with agentic workflows and agent orchestration.

Get in touch