OpenClaw Configuration File: Every Setting Explained
Complete reference for OpenClaw's openclaw.json configuration file. Every section and setting explained with examples and recommended values.
The openclaw.json configuration file is the control center for your OpenClaw instance. Located at ~/.openclaw/openclaw.json, it controls everything from LLM provider settings to channel configuration, security policies, logging, and skill management. This reference explains every major section and setting.
Top-Level Structure
{
"llm": { ... },
"channels": { ... },
"skills": { ... },
"memory": { ... },
"security": { ... },
"web_interface": { ... },
"logging": { ... },
"scheduler": { ... }
}
LLM Section
{
"llm": {
"provider": "anthropic",
"api_key": "sk-ant-...",
"model": "claude-3-5-sonnet-20261022",
"max_tokens": 8192,
"temperature": 0.7,
"context_window": 200000,
"context_truncation": {
"enabled": true,
"max_tokens": 50000,
"strategy": "summarize_oldest"
},
"system_prompt": "You are a helpful personal assistant. Be concise and accurate."
}
}
temperature: Controls randomness (0 = deterministic, 1 = creative). Use 0.3–0.5 for factual tasks, 0.7–0.9 for creative work.
system_prompt: Your agent's core personality and instructions. Changes here affect all conversations.
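The summarize_oldest truncation strategy can be pictured with a short sketch. This is illustrative only, not OpenClaw's actual implementation: tokens are approximated as characters divided by four, and a real deployment would have the LLM write the summary rather than insert a placeholder.

```python
def truncate_context(messages: list[str], max_tokens: int) -> list[str]:
    """Sketch of a summarize_oldest strategy (not OpenClaw's real code)."""
    def est(text: str) -> int:
        return len(text) // 4  # crude chars-to-tokens heuristic

    dropped = []
    # Remove oldest messages until the estimated total fits the budget.
    while len(messages) > 1 and sum(est(m) for m in messages) > max_tokens:
        dropped.append(messages.pop(0))
    if dropped:
        # A real implementation would ask the LLM to summarize `dropped`.
        messages.insert(0, f"[summary of {len(dropped)} earlier messages]")
    return messages
```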
Channels Section
See individual channel setup guides for detailed configuration. The channels section supports: telegram, discord, slack, whatsapp, signal, and web (built-in web chat interface).
{
"channels": {
"telegram": {
"enabled": true,
"token": "...",
"allowed_users": [123456789],
"parse_mode": "MarkdownV2"
}
}
}
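Note that parse_mode: "MarkdownV2" is strict: Telegram rejects messages containing unescaped special characters. OpenClaw may handle this internally, but as an illustration, a helper that escapes the characters the Telegram Bot API documentation lists for MarkdownV2 looks like this:

```python
import re

# Characters that must be backslash-escaped in Telegram MarkdownV2 text.
_MDV2_SPECIAL = r"_*[]()~`>#+-=|{}.!"

def escape_markdown_v2(text: str) -> str:
    """Backslash-escape Telegram MarkdownV2 special characters in plain text."""
    return re.sub(f"([{re.escape(_MDV2_SPECIAL)}])", r"\\\1", text)
```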
Memory Section
{
"memory": {
"enabled": true,
"backend": "sqlite",
"path": "~/.openclaw/memory.db",
"vector_search": true,
"max_memories": 10000,
"auto_summarize": true,
"summarize_after_messages": 100
}
}
vector_search: Enables semantic memory search — finding relevant memories based on meaning, not just keywords. Requires the memory skill from ClawHub.
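Mechanically, vector search ranks stored memories by cosine similarity between vectors. The sketch below uses bag-of-words counts so it runs without an embedding model; a real semantic search replaces the word counts with embedding vectors from a model, but the ranking step is the same.

```python
import math
from collections import Counter

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse vectors (term -> weight)."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_memories(query: str, memories: list[str]) -> list[str]:
    """Return memories sorted by similarity to the query, best match first."""
    qv = Counter(query.lower().split())
    return sorted(memories,
                  key=lambda m: cosine_similarity(qv, Counter(m.lower().split())),
                  reverse=True)
```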
Security Section
{
"security": {
"allowed_origins": ["https://yourdomain.com"],
"canvas_host": {
"enabled": true,
"sandbox_level": "strict"
},
"tools": {
"allow": ["read_file", "write_file", "web_search"],
"deny": ["execute_command"]
}
}
}
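How the allow and deny lists combine is a policy question. The sketch below assumes deny entries take precedence and that a present allow list acts as a whitelist; check OpenClaw's config schema for the actual semantics.

```python
def tool_allowed(tool: str, policy: dict) -> bool:
    """Evaluate a tool name against allow/deny lists.

    Assumed semantics (not verified against OpenClaw's source):
    deny wins over allow, and a non-empty allow list is a whitelist.
    """
    if tool in policy.get("deny", []):
        return False
    allow = policy.get("allow")
    return allow is None or tool in allow

policy = {"allow": ["read_file", "write_file", "web_search"],
          "deny": ["execute_command"]}
```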
Scheduler Section
{
"scheduler": {
"enabled": true,
"timezone": "America/New_York",
"tasks": [
{
"name": "morning_briefing",
"cron": "0 8 * * *",
"action": "run_skill",
"skill": "daily-briefing",
"output_channel": "telegram"
}
]
}
}
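The cron expression "0 8 * * *" reads minute, hour, day-of-month, month, day-of-week, so this task fires at 08:00 every day in the configured timezone. A minimal matcher for plain numeric fields and wildcards (real cron syntax also supports ranges, lists, and steps) looks like this:

```python
from datetime import datetime

def cron_matches(expr: str, now: datetime) -> bool:
    """Match a 5-field cron expression supporting only numbers and '*'.

    Simplification: day-of-week uses Python's weekday() (0 = Monday),
    whereas standard cron uses 0 = Sunday.
    """
    fields = expr.split()
    values = [now.minute, now.hour, now.day, now.month, now.weekday()]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))
```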
Web Interface Section
{
"web_interface": {
"enabled": true,
"port": 8080,
"host": "127.0.0.1",
"auth": {
"enabled": true,
"type": "totp",
"secret": "YOUR_TOTP_SECRET"
}
}
}
Always bind to 127.0.0.1 (never 0.0.0.0) so the interface is not exposed directly to the network, and put nginx in front as a reverse proxy to terminate TLS.
Frequently Asked Questions
Where can I find a complete openclaw.json schema?
The full JSON Schema is at openclaw/schemas/config.schema.json in the OpenClaw repository. Use it with VS Code for auto-complete and validation when editing your config.
Does OpenClaw hot-reload the config file?
Most settings are loaded at startup. Use python -m openclaw reload to apply config changes without a full restart. Some settings (LLM provider, port) require a restart.
Can I split configuration across multiple files?
Yes. Use "include": "path/to/partial-config.json" to include additional config files. Useful for keeping sensitive credentials in a separate file with restricted permissions.