Hermes Agent installs in one command. Seriously. You’ll spend more time opening your terminal than waiting for the installation to finish.
No Docker complexity. No multi-step wizards. No “run this, then this, then debug this” nightmare. One line, and it’s running.
Let’s do this.
Before You Start: Check Your Specs
You need:
- 4GB RAM minimum (8GB recommended, 16GB+ if you’re running local models)
- 2GB free disk space (more if using local LLMs)
- Linux, macOS, WSL2, or Android (Termux)
- Bash shell (standard on all of these)
Don’t meet these specs? You can still run Hermes on weaker hardware; it will just be slower. The 4GB figure is a practical minimum for local LLM inference. If you’re only using cloud APIs, 2GB of RAM is fine.
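Not sure what you have? On Linux (including WSL2), two quick commands answer both questions; macOS users can use `sysctl hw.memsize` plus the same `df`:

```shell
# Total RAM (Linux/WSL2; /proc/meminfo reports it in kB)
grep MemTotal /proc/meminfo

# Free disk space in your home directory
df -h "$HOME"
```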
Step 1: Run the Installation Script (One Command)
Open your terminal and run:
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
That’s it. The script will:
- Clone the Hermes repository
- Install dependencies (Python, required libraries)
- Create the `.hermes` directory in your home folder
- Set up your PATH so the `hermes` command works globally
On most machines, this takes 2-5 minutes depending on your internet speed.
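If piping a remote script straight into `bash` makes you uneasy (a healthy instinct), download it first, read it, then run it yourself:

```shell
# Same installer, with a review step in between
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh \
  -o install.sh || echo "download failed; check your connection"

# Read it with less, cat, or your editor, then run it:
# less install.sh
# bash install.sh
```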
What it installs:
- The Hermes Agent CLI tool
- Python runtime (if not already installed)
- Configuration directories
After it finishes, you’ll see:
✓ Hermes Agent installed successfully
✓ Run 'hermes --version' to verify
Step 2: Verify Installation Worked
hermes --version
You should see something like:
Hermes Agent v0.8.1
Built with ❤️ by Nous Research
If you get “command not found”, you might need to restart your terminal or add the installation path to your shell config. (This is rare, but we’ll cover it in the troubleshooting section.)
Step 3: Initialize Hermes
Now run the setup wizard:
hermes setup
This interactive script will ask you:
1. Choose your LLM provider
Options:
- `local` — Use Ollama (free, private, see Article 9)
- `openai` — Use OpenAI’s API ($)
- `anthropic` — Use Claude’s API ($)
- `openrouter` — Unified API provider ($, but supports many models)
Pick based on your preference:
- First time? Choose `local` (requires Ollama installed, but free and fast)
- Have an OpenAI key? Choose `openai`
- Want to test with local first? You can change this later
2. Configure your provider
Depending on what you chose:
If Local Ollama:
Provider: local
Ollama endpoint: http://localhost:11434
Model name: mistral
(We’ll install Ollama in Article 9. For now, skip if you don’t have it.)
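Before pointing Hermes at a local endpoint, it’s worth confirming Ollama is actually listening; its `/api/tags` endpoint lists your installed models:

```shell
# Returns JSON listing local models if Ollama is up;
# prints a hint instead if nothing is listening on port 11434
curl -s http://localhost:11434/api/tags || echo "Ollama is not running yet (see Article 9)"
```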
If OpenAI:
API Key: sk-xxxxxxxxxxxxxxx (paste your key)
Model: gpt-4
If Anthropic:
API Key: sk-ant-xxxxx
Model: claude-opus-4-7
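If you pasted an OpenAI key and want to verify it independently of Hermes, a bare request to OpenAI’s public models endpoint does the job (this is standard OpenAI API usage, nothing Hermes-specific; substitute your real key):

```shell
# 200 means the key works; 401 means it's wrong or revoked
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer sk-xxxxxxxxxxxxxxx" \
  https://api.openai.com/v1/models || echo "request failed; check your connection"
```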
3. Choose your first platform
Options:
- `cli` — Just use the terminal (easiest for testing)
- `discord` — Connect a Discord bot
- `slack` — Connect a Slack bot
- `telegram` — Connect a Telegram bot
- `skip` — Set up later

For your first time: Choose `cli`. It’s instant and lets you test without any bot tokens.
The setup wizard will create a config file at ~/.hermes/config.yml. You can edit this manually later, but the wizard handles 99% of what you need.
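The wizard writes something along these lines; the exact field names below are an assumption pieced together from the options above, so treat this as a sketch and trust the file the wizard actually generated:

```yaml
# ~/.hermes/config.yml (illustrative sketch, not the authoritative schema)
llm:
  provider: openai          # local | openai | anthropic | openrouter
  api_key: sk-xxxxxxxxxxxxxxx
  model: gpt-4
platforms: []               # empty until you connect Discord/Slack/Telegram
```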
Step 4: Start Using Hermes
After setup, just run:
hermes
You’ll enter interactive mode. You can type questions directly:
You: What is Hermes Agent?
Hermes: Hermes Agent is a self-improving autonomous AI assistant...
Type exit or quit to leave.
That’s it. You’re running Hermes.
What Just Happened
You now have:
- ✅ Hermes Agent CLI installed globally
- ✅ Configuration stored in `~/.hermes/`
- ✅ An LLM provider configured (local, OpenAI, or Anthropic)
- ✅ Ability to chat directly via CLI
The memory system is already working. Everything Hermes learns is stored in ~/.hermes/memory/.
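You can watch that memory grow with ordinary shell tools; the internal file layout is an implementation detail, but size and file count are still informative:

```shell
# How much Hermes has stored, and across how many files
du -sh ~/.hermes/memory/ 2>/dev/null || echo "no memory directory yet"
find ~/.hermes/memory/ -type f 2>/dev/null | wc -l
```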
Next Steps: Connect a Platform (Optional for Now)
You can chat in the terminal forever if you want. But Hermes gets more useful when it lives on Discord, Slack, or Telegram.
We’ll cover that in Article 4. For now, you have a working Hermes instance. Test it out.
Customizing Your Setup (Intermediate)
If you want to tweak settings without running the wizard again, edit your config:
nano ~/.hermes/config.yml
Common tweaks (these all live under the single `llm:` key in `config.yml`):
llm:
  context_window: 8192   # Default is 4096; raise it for longer conversations
  streaming: true        # Stream responses as they generate (feels faster)
  temperature: 0.7       # Creativity (0 = deterministic, 1 = creative)
  top_p: 0.9             # Diversity of responses
Save and restart Hermes for changes to take effect.
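One habit worth adopting: keep a backup before each hand edit, so a YAML typo is a one-command fix rather than a re-run of the wizard:

```shell
# Plain cp, nothing Hermes-specific
cp ~/.hermes/config.yml ~/.hermes/config.yml.bak 2>/dev/null || echo "no config yet; run hermes setup first"

# If Hermes misbehaves after an edit, roll back:
# cp ~/.hermes/config.yml.bak ~/.hermes/config.yml
```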
Hardware Considerations
Do I need a GPU? Not required. Hermes works fine on CPU (slower, though).
Can I run on a laptop? Yes. It’ll use some memory, but totally viable.
What about a Raspberry Pi? Technically possible, but very slow. Better to use a cloud LLM provider.
Should I use a VPS/cloud server? Yes, if you want it running 24/7 without your laptop. Install the same way, configure via SSH.
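On a VPS, the simplest way to keep Hermes alive after you close the SSH session is `nohup` (a proper systemd service is the more robust long-term answer, but this works immediately; it assumes you’ve connected a platform so Hermes runs as a bot rather than the interactive CLI):

```shell
# Run Hermes detached from the terminal, logging to a file
mkdir -p ~/.hermes
nohup hermes > ~/.hermes/hermes.log 2>&1 &
echo $! > ~/.hermes/hermes.pid   # save the PID so you can stop it later

# Stop it later:
# kill "$(cat ~/.hermes/hermes.pid)"
```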
Troubleshooting Installation
“curl: command not found” You’re on a system without curl. Install it:
- macOS: `brew install curl`
- Ubuntu: `sudo apt install curl`
- Fedora: `sudo dnf install curl`
“Permission denied” on install The script doesn’t need sudo. If you’re getting permission errors, you likely have a PATH problem instead. Put the install directory on your PATH:
export PATH="$HOME/.hermes/bin:$PATH"
To make that permanent, add the same line to your ~/.bashrc (or ~/.zshrc), then run source ~/.bashrc.
“hermes: command not found” after install Restart your terminal, or run:
source ~/.bashrc # Linux/WSL
source ~/.zshrc # macOS (if using zsh)
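If sourcing your shell config doesn’t help, check whether the binary exists and whether its directory is on your PATH (the `~/.hermes/bin` location is the one the PATH fix above assumes):

```shell
# Is the binary where the installer puts it?
ls -l "$HOME/.hermes/bin/hermes" 2>/dev/null || echo "binary not found; re-run the installer"

# Is that directory on your PATH?
echo "$PATH" | tr ':' '\n' | grep -x "$HOME/.hermes/bin" || echo "not on PATH"
```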
Installation hangs at “Installing dependencies” Your internet might be slow. Wait 5 minutes. If it still hangs, cancel (Ctrl+C) and try again.
“Python not found” error Hermes needs Python 3.8+. Install it:
- macOS: `brew install python3`
- Ubuntu: `sudo apt install python3`
- Windows: Download from python.org
Verify Everything Works
Run this command to check your installation:
hermes status
You should see something like:
Hermes Agent Status:
✓ Installation: OK
✓ Configuration: OK
✓ LLM Provider: Connected (openai)
✓ Memory: Ready
✓ Platforms: 0 connected
If everything says ✓, you’re good to go.
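Because `hermes status` prints one line per check, it’s also easy to script against, e.g. for a cron job that alerts you when something breaks (the grep pattern assumes the output format shown above):

```shell
# Prints a one-line verdict; wire it into cron/monitoring as you like
if hermes status 2>/dev/null | grep -q "Installation: OK"; then
  echo "hermes healthy"
else
  echo "hermes needs attention"
fi
```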
What to Read Next
- Test with Ollama — Set up local LLM inference (free, private)
- Connect to Discord — Make your bot live on Discord
- How Hermes Works — Understand the learning loop you just installed
That’s it. Hermes is running. The hard part? Over. Now you can play with it, connect it to platforms, and watch it learn over time.
Related Articles
Deepen your understanding with these curated continuations.
Hermes Agent Setup Checklists: Personal, Team & Production
Three copy-paste checklists for Hermes Agent. Personal setup (15 min), team deployment (1 hr), and production security (before go-live).
Hermes Agent Config Templates: 5 Copy-Paste Ready Setups
Ready-to-use Hermes Agent config files for personal, team, production, enterprise, and hybrid setups. Copy, paste, adjust one value, done.
Advanced Hermes Agent: Optimization, Scaling & Learning Loop Tuning
Make your Hermes Agent production-grade. Optimize the learning loop, scale to thousands of users, and tune every parameter.