Step-by-step setup to run OpenClaw entirely on local models. No API costs. Full privacy. Nothing leaves your machine. Works on most modern hardware with 16GB+ RAM.
| Setup | RAM | GPU | Best Model | Speed |
|---|---|---|---|---|
| Minimum | 16GB | Not required | qwen2.5:7b | Slow but works |
| Recommended | 32GB | 8GB VRAM | qwen2.5:32b | Good |
| Optimal | 64GB+ | 24GB VRAM | qwen2.5:72b | Fast |
| VPS (cloud) | 32GB | N/A | qwen2.5:32b | Good |
Don't have the hardware? A 32GB RAM VPS from Hostinger runs ~$20/month — still cheaper than most API bills.
Not sure if your hardware can handle Ollama? Post your specs in the community and we'll tell you.
Get help from real practitioners doing this every day.
Ollama is a tool that makes running large language models on your own hardware as simple as running a command. It handles downloading models, managing memory, and providing an API that OpenClaw can talk to.
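In practice that means downloading and talking to a model is a couple of commands. A quick sketch, using the 7B model from the table above (swap in whichever tag fits your hardware):

```shell
# Download a model (pick the tag that fits your RAM/VRAM)
ollama pull qwen2.5:7b

# Chat with it interactively in the terminal
ollama run qwen2.5:7b

# See everything installed locally, with sizes
ollama list
```

`ollama pull` only downloads; the model isn't loaded into memory until you run it or hit the API, so pulling several models costs disk space, not RAM.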
Install on Linux (including your VPS):
```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

Verify the installation:

```bash
ollama --version
# Should show: ollama version 0.x.x
```

Start the Ollama service:
```bash
# Start as a background service
sudo systemctl start ollama
sudo systemctl enable ollama  # Start on boot

# Or run manually in a terminal
ollama serve
```
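Once the service is running, Ollama listens on `localhost:11434` by default. A quick sanity check against its HTTP API, which is the same endpoint OpenClaw will talk to (assumes you've already pulled `qwen2.5:7b`):

```shell
# List installed models as JSON
curl http://localhost:11434/api/tags

# One-shot generation request
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Say hello in five words.",
  "stream": false
}'
```

If both calls return JSON, Ollama is up and any OpenAI-style client pointed at that port will work. If the connection is refused, the service isn't running; check `systemctl status ollama`.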
Got Ollama running? Share your model and hardware setup in the community.
Up Next
Guide 04: SOUL.md, AGENTS.md & HEARTBEAT.md Explained
Master the three config files that make your agent 10x more capable.