API Keys & Models
Three providers. One .env file. A model stack designed to keep costs low and quality high.
You need three keys: ANTHROPIC_API_KEY, OPENAI_API_KEY, and GOOGLE_API_KEY. Store them in ~/.openclaw/.env, and never paste them directly into JSON. The model stack uses Claude Sonnet as the primary brain, GPT-5.4 for deep reasoning, and Haiku/Gemini Flash for cheap background tasks.
The .env File
OpenClaw loads environment variables from ~/.openclaw/.env at startup. This is where your keys live. Never put raw key values in openclaw.json — always reference them as variables.
```
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=AIza...
```
In openclaw.json, reference them like this:
```json
"apiKey": "${ANTHROPIC_API_KEY}"
```
This keeps secrets out of version control and config files. If you ever share your openclaw.json, no keys leak.
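OpenClaw performs this expansion internally, but the idea is simple enough to sketch. The function names below are hypothetical illustrations, not OpenClaw's actual API: parse `KEY=value` lines from the .env file, then substitute `${VAR}` references in config values.

```python
import re

def load_env(path):
    """Parse simple KEY=value lines from a .env file into a dict."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments; keep only KEY=value pairs.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env

def expand_refs(config_value, env):
    """Replace ${VAR} references in a config string with loaded values."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), config_value)
```

Since the file holds secrets, restricting its permissions (`chmod 600 ~/.openclaw/.env`) is a sensible default.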
Anthropic (Claude) — Primary Brain
Get your key at console.anthropic.com. Go to API Keys, create a new key, copy it immediately — it won't show again.
The model ID to use:
```
anthropic/claude-sonnet-4-6
```
Claude Sonnet 4.6 is the default for everything: conversations, orchestration, coding decisions, writing, tool use. It's the best balance of speed, capability, and cost in the stack.
OpenAI (GPT-5.4) — Deep Reasoning
Get your key at platform.openai.com. Go to API Keys, create a project key with appropriate permissions.
The model ID:
```
openai/gpt-5.4
```
Best for: complex debugging, novel algorithm design, anything that requires sustained multi-step reasoning that Claude struggles with. GPT-5.4 is noticeably more expensive — treat it as a specialist you call in when needed, not the default.
Google — Background Tasks
Get your key at aistudio.google.com. Create an API key for Gemini access.
Use Gemini Flash for cheap background work — nightly extraction, triage, summaries. It's very inexpensive and fast for structured tasks.
One failure mode observed in practice: the model printing {"tool": "exec", "command": "..."} as text rather than running anything. This went undiagnosed for days; the fix was switching back to Claude. Rule: if your AI outputs raw JSON when you ask it to run a command, switch models immediately.
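That rule can be enforced mechanically rather than by eye. A minimal sketch of the heuristic (a hypothetical helper, not part of OpenClaw): if the model's entire output parses as a JSON object with a "tool" key, it almost certainly echoed a tool call instead of executing it.

```python
import json

def looks_like_unexecuted_tool_call(output: str) -> bool:
    """Heuristic: flag model output that is raw tool-call JSON, not a result."""
    text = output.strip()
    if not (text.startswith("{") and text.endswith("}")):
        return False
    try:
        payload = json.loads(text)
    except json.JSONDecodeError:
        return False
    # A bare dict with a "tool" key is a tool call echoed as text.
    return isinstance(payload, dict) and "tool" in payload
```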
Model Aliases in openclaw.json
Set your defaults in the agents.defaults.model section. This tells OpenClaw what to use for primary tasks and what to fall back to if a provider is unavailable:
```json
"agents": {
  "defaults": {
    "model": {
      "primary": "anthropic/claude-sonnet-4-6",
      "fallbacks": ["anthropic/claude-haiku-4-5-20251001", "openai/gpt-5.4"]
    }
  }
}
```
The fallback order matters. Haiku comes before GPT-5.4 because it's much cheaper — if Sonnet is unavailable, you want the cheap fallback first, not the expensive one.
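The fallback behavior amounts to trying each model in order until one responds. A minimal sketch of that loop, assuming a hypothetical `call_model` function (not OpenClaw's actual API):

```python
def complete_with_fallbacks(prompt, primary, fallbacks, call_model):
    """Try the primary model, then each fallback in order, until one succeeds."""
    last_error = None
    for model in [primary] + fallbacks:
        try:
            return call_model(model, prompt)
        except RuntimeError as err:  # e.g. provider outage or rate limit
            last_error = err
    raise RuntimeError(f"all models failed, last error: {last_error}")
```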
The Three-Tier Model Stack
| Tier | Model | Use Case | Cost |
|---|---|---|---|
| Main brain | Claude Sonnet 4.6 | Orchestration, coding, decisions | Medium |
| Deep reasoning | GPT-5.4 | Complex debugging, algorithm design | Medium-High |
| Background | Haiku / Gemini Flash | Cron jobs, email triage, extraction | Very low |
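The table above can be expressed as a simple routing map. A sketch with hypothetical task labels (the model IDs come from the table; the routing keys are illustrative, not an OpenClaw feature):

```python
# Map each tier to its model ID, per the three-tier table above.
MODEL_BY_TIER = {
    "main": "anthropic/claude-sonnet-4-6",
    "deep_reasoning": "openai/gpt-5.4",
    "background": "anthropic/claude-haiku-4-5-20251001",
}

def pick_model(task_kind: str) -> str:
    """Route a task to a tier; default to the main brain when unsure."""
    tier = {
        "orchestration": "main",
        "coding": "main",
        "complex_debugging": "deep_reasoning",
        "algorithm_design": "deep_reasoning",
        "cron": "background",
        "email_triage": "background",
        "extraction": "background",
    }.get(task_kind, "main")
    return MODEL_BY_TIER[tier]
```

Defaulting unknown tasks to the main brain mirrors the document's advice: treat GPT-5.4 as a specialist you call in deliberately, never by accident.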
Cost Reality
Real numbers from building this system: roughly $40 in Anthropic credits for a weekend of heavy testing and iteration — Claude covered everything. Day-to-day running costs are much lower once the system is stable.
The biggest cost risk is coding agents. A poorly scoped PRD (Product Requirements Document — a short written spec that tells the agent exactly what to build and what success looks like) fed to a coding agent will cause expensive loops — the agent iterates, hits ambiguity, asks for clarification, iterates again. Before you spin up an agent for a build task, write a clear spec. It saves both time and money.
To keep spend bounded:

- Set a monthly spending limit on each provider dashboard:
  - Anthropic Console: set a hard limit at a level you're comfortable with
  - OpenAI Platform: set billing alerts at a low threshold, since GPT-5.4 costs more
- Review usage weekly until you know your baseline