OpenClaw with OpenRouter: Use Any AI Model
Configure OpenClaw with OpenRouter to access 100+ LLM models through a single API key. Setup guide, model recommendations, and cost optimization tips.
Setting up OpenClaw with OpenRouter unlocks access to over 100 LLM models through a single API key and billing account. Instead of managing separate API keys for Anthropic, OpenAI, DeepSeek, Mistral, and others, OpenRouter acts as a unified gateway that handles authentication and billing. This is particularly useful if you want to experiment with different models or need a fallback when your primary provider has downtime.
Why Use OpenRouter with OpenClaw?
- Single API key for 100+ models
- Automatic fallback: Route to backup models if your primary is down
- Cost transparency: Per-request cost tracking across all providers
- Access to models not available directly: Some providers only distribute via OpenRouter
Getting an OpenRouter API Key
- Sign up at openrouter.ai
- Navigate to Keys → Create Key
- Set a credit limit (optional but recommended)
- Copy your key (it starts with `sk-or-`)
Configuring OpenClaw for OpenRouter
```json
{
  "llm": {
    "provider": "openrouter",
    "api_key": "sk-or-YOUR_KEY_HERE",
    "model": "anthropic/claude-3-5-sonnet",
    "fallback_models": [
      "openai/gpt-4o",
      "deepseek/deepseek-chat"
    ]
  }
}
```
The `fallback_models` array lists backup models to try, in order, if the primary model is unavailable.
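Conceptually, fallback is a try-in-order loop: attempt the primary model, and on failure move down the list. A minimal sketch in Python (the `call_model` callable and the error type are hypothetical stand-ins for illustration, not OpenClaw internals):

```python
class ModelUnavailableError(Exception):
    """Raised when a model's provider rejects or fails the request."""

def complete(prompt, primary, fallbacks, call_model):
    """Try the primary model, then each fallback in order, until one succeeds.

    `call_model` is a hypothetical stand-in for whatever function actually
    sends the request through OpenRouter.
    """
    last_error = None
    for model in [primary, *fallbacks]:
        try:
            return call_model(model, prompt)
        except ModelUnavailableError as exc:
            last_error = exc  # remember the failure and try the next model
    raise RuntimeError(f"all models failed: {last_error}")

# Simulated example: the primary "fails", so the first fallback answers.
def fake_call(model, prompt):
    if model == "anthropic/claude-3-5-sonnet":
        raise ModelUnavailableError("provider down")
    return f"{model}: ok"

print(complete("hi", "anthropic/claude-3-5-sonnet",
               ["openai/gpt-4o", "deepseek/deepseek-chat"], fake_call))
# → openai/gpt-4o: ok
```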
Recommended Models via OpenRouter
| Use case | Model | Approx. cost / 1M tokens |
|---|---|---|
| Best quality | anthropic/claude-3-5-sonnet | $3.00 |
| Fast + cheap | deepseek/deepseek-chat | $0.27 |
| Code tasks | anthropic/claude-3-5-haiku | $0.80 |
| Research | perplexity/sonar-large-online | $1.00 |
| Free tier | google/gemini-flash-1.5-8b | Free |
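The per-token prices above make rough budgeting straightforward. A back-of-the-envelope estimate in Python, using the table's approximate rates (a sketch only: real bills also depend on output-token pricing, which differs per model):

```python
# Approximate cost per 1M tokens, taken from the table above.
PRICE_PER_MILLION = {
    "anthropic/claude-3-5-sonnet": 3.00,
    "deepseek/deepseek-chat": 0.27,
    "anthropic/claude-3-5-haiku": 0.80,
}

def monthly_cost(model, tokens_per_day, days=30):
    """Estimate monthly spend in USD for a given daily token volume."""
    total_tokens = tokens_per_day * days
    return total_tokens / 1_000_000 * PRICE_PER_MILLION[model]

# An agent burning 2M tokens/day on claude-3-5-sonnet vs. deepseek-chat:
print(round(monthly_cost("anthropic/claude-3-5-sonnet", 2_000_000), 2))  # → 180.0
print(round(monthly_cost("deepseek/deepseek-chat", 2_000_000), 2))       # → 16.2
```

Even at identical token volumes, the model choice swings costs by an order of magnitude, which is why per-use-case routing matters.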
Model Routing Strategies
OpenRouter supports several routing strategies that OpenClaw can configure:
- Lowest latency: Route to the fastest available provider for a given model.
- Lowest price: Route to the cheapest available provider (useful for models hosted by multiple providers).
- Fallback: Try models in order until one succeeds.
Configure in OpenClaw:
```json
"openrouter_options": {
  "route": "fallback",
  "transforms": ["middle-out"]
}
```
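These options correspond to fields in the JSON body sent to OpenRouter's chat-completions endpoint: `models` lists candidates in priority order, `route: "fallback"` asks OpenRouter to try them in sequence, and the `"middle-out"` transform compresses over-long prompts. A sketch of how such a request body could be assembled (the helper function itself is illustrative, not OpenClaw code):

```python
import json

def build_request_body(prompt, primary, fallbacks):
    """Assemble an OpenRouter chat-completions payload with fallback routing.

    The field names below are OpenRouter request parameters; this helper
    is an illustration of the payload shape, not a real client.
    """
    return {
        "model": primary,
        "models": [primary, *fallbacks],     # candidates in priority order
        "route": "fallback",                 # try them in sequence
        "transforms": ["middle-out"],        # compress over-long prompts
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request_body("Summarize this repo.",
                          "anthropic/claude-3-5-sonnet",
                          ["openai/gpt-4o"])
print(json.dumps(body, indent=2))
```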
Cost Tracking
OpenRouter provides a usage dashboard at openrouter.ai/activity showing per-model costs. This is useful for understanding which OpenClaw workflows are most expensive.
nacre.sh and OpenRouter
nacre.sh supports OpenRouter as a BYOK provider. Select "OpenRouter" from the LLM provider dropdown and paste your key. You can switch between OpenRouter and direct provider keys at any time from the dashboard.
Frequently Asked Questions
Is OpenRouter slower than using providers directly?
OpenRouter adds approximately 20–50ms of latency per request for routing and billing. For most agent use cases, this is imperceptible.
Does OpenRouter have its own usage limits?
OpenRouter applies rate limits per account tier. The free tier has lower limits; paid accounts with credits loaded have higher limits. Contact OpenRouter for enterprise rate limits.
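When you do hit a rate limit, the standard remedy is to retry with exponential backoff rather than hammering the endpoint. A minimal, self-contained sketch (the `send` callable and the error class are stand-ins for a real HTTP client returning 429):

```python
import time

class RateLimitedError(Exception):
    """Stand-in for an HTTP 429 response from OpenRouter."""

def with_backoff(send, max_retries=4, base_delay=0.01):
    """Retry `send` with exponentially growing sleeps on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return send()
        except RateLimitedError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Simulated client that is rate-limited twice, then succeeds.
calls = {"n": 0}
def fake_send():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitedError()
    return "ok"

print(with_backoff(fake_send))  # → ok
```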
Can I use OpenRouter to access models in specific geographic regions?
Yes. Some OpenRouter providers allow specifying datacenter regions, which can be important for latency or data residency requirements.
nacre.sh
Run OpenClaw without the server headaches
Dedicated instance, automatic TLS, nightly backups, and 290+ LLM integrations. Live in under 90 seconds from $12/month.
Deploy your agent →