Powerful terminal-based AI coding assistant with interactive and non-interactive modes, plugin system, and multi-language support.
Ollama Code CLI (@ollama-code/ollama-code) is a terminal-based AI coding assistant that provides intelligent code generation, file manipulation, and shell command execution capabilities through local LLM models via Ollama.
| Feature | Description |
|---|---|
| Interactive Chat | Real-time conversation with AI in terminal |
| Non-Interactive Mode | Scriptable, pipe-friendly execution |
| Tool System | 30+ built-in tools for file, code, and shell operations |
| Plugin System | Extensible via custom plugins and MCP servers |
| Subagents | Spawn specialized AI agents for complex tasks |
| Multi-Language | Supports English, Russian, Chinese UI |
| Themes | 15+ built-in color themes |
| Vim Mode | Vim-style navigation and editing |
| Session Management | Resume previous conversations |
| Sandbox Mode | Secure execution environment |
```shell
# Install globally
npm install -g @ollama-code/ollama-code

# Or run directly
npx @ollama-code/ollama-code
```
```shell
# Clone repository
git clone https://github.com/ollama-code/ollama-code.git
cd ollama-code

# Install dependencies
pnpm install

# Build CLI package
cd packages/cli
pnpm build

# Link globally
npm link
```
On first run, Ollama Code will guide you through setup:
```shell
ollama
```
The setup wizard saves its configuration to `~/.ollama-code/config.json`.

Start an interactive session:
```shell
# Start with default model
ollama

# Start with specific model
ollama --model llama3.2

# Start in specific directory
cd /path/to/project && ollama
```
Process a single prompt and exit:
```shell
# Single prompt
ollama --prompt "Explain this code" < myfile.ts

# Pipe input
echo "What is the capital of France?" | ollama

# With specific model
ollama --model deepseek-r1 --prompt "Solve this puzzle"
```
Continue a previous conversation:
```shell
# Show session picker
ollama --resume

# Resume specific session
ollama --resume session-abc123
```
Interactive mode provides a full-featured terminal UI for conversing with AI.
```
┌──────────────────────────────────────────────────────────────────┐
│ Ollama Code v0.11.0 │ model: llama3.2 │ /path/to/project         │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│ User: Write a function to sort an array                          │
│                                                                  │
│ Assistant: I'll help you write a function to sort an array...    │
│                                                                  │
│ ┌─────────────────────────────────────────────────────────────┐  │
│ │ function sortArray(arr: number[]): number[] {               │  │
│ │   return [...arr].sort((a, b) => a - b);                    │  │
│ │ }                                                           │  │
│ └─────────────────────────────────────────────────────────────┘  │
│                                                                  │
├──────────────────────────────────────────────────────────────────┤
│ > Type your message... (Enter to send)                           │
└──────────────────────────────────────────────────────────────────┘
```
| Feature | Description |
|---|---|
| Multi-line Input | Use Shift+Enter for new lines |
| History Navigation | Up/Down arrows for message history |
| Auto-completion | Tab for command completion |
| Reverse Search | Ctrl+R for history search |
Access commands by typing `/` in the input:

| Command | Description |
|---|---|
| `/help` | Show available commands |
| `/model` | Switch model |
| `/theme` | Change color theme |
| `/clear` | Clear conversation |
| `/compress` | Compress context |
| `/export` | Export conversation |
| `/memory` | Manage memory |
| `/mcp` | MCP server management |
| `/agents` | Manage subagents |
| `/settings` | Open settings |
| `/quit` | Exit application |
Use `@` for quick actions:

| Command | Description |
|---|---|
| `@file path/to/file` | Include file in context |
| `@folder path/to/dir` | Include directory contents |
| `@git` | Include git context |
| `@url https://...` | Include URL content |
Non-interactive mode is designed for scripting and automation.
```shell
# Command-line prompt
ollama --prompt "Your question here"

# Stdin pipe
cat code.ts | ollama --prompt "Review this code"

# File redirect
ollama --prompt "Analyze" < data.txt

# Here document
ollama --prompt "$(cat <<EOF
Analyze this code:
$(cat myfile.ts)
EOF
)"
```
```shell
# Text output (default)
ollama --prompt "Hello"

# JSON output
ollama --prompt "Hello" --output json

# JSONL streaming
ollama --prompt "Hello" --input-format stream-json
```
For advanced automation, use the `stream-json` format:

```shell
# Input via stdin
ollama --input-format stream-json

# Send control messages
echo '{"type":"initialize","config":{}}' | ollama --input-format stream-json
```

Streaming output arrives as one JSON event per line:

```json
{"type":"content","text":"Hello"}
{"type":"tool_call","name":"read_file","args":{}}
{"type":"tool_result","result":"file contents"}
{"type":"done"}
```
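Because the event stream is plain JSON Lines, even a shell script can pick out event types without a JSON library. A rough sketch (the sample events mirror the ones shown above; the `sed` extraction is a convenience for illustration, not part of the CLI):

```shell
# Sample events in the stream-json shape shown above.
events='{"type":"content","text":"Hello"}
{"type":"tool_call","name":"read_file","args":{}}
{"type":"done"}'

# Pull the "type" field out of each event line.
types=$(printf '%s\n' "$events" | sed -n 's/.*"type":"\([a-z_]*\)".*/\1/p')
echo "$types"
```

In a real pipeline you would read the stream from the CLI itself instead of a here-string, and `jq -r .type` is the more robust choice when `jq` is available.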
| Code | Description |
|---|---|
| 0 | Success |
| 1 | Error |
| 2 | User cancelled |
| 3 | Timeout |
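In scripts, these exit codes can drive retry or fallback logic. A minimal sketch against the table above (the `describe_exit` helper is hypothetical, written here for illustration):

```shell
# Map a documented exit code to a label a wrapper script could act on.
describe_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "error" ;;
    2) echo "cancelled" ;;
    3) echo "timeout" ;;
    *) echo "unknown" ;;
  esac
}

# A real wrapper would run the CLI and pass $? in, e.g.:
#   ollama --prompt "..." ; describe_exit "$?"
describe_exit 0
describe_exit 3
```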
```shell
# Switch model
/model llama3.2

# List available models
/model list

# Show current model info
/model info
```
```shell
# Read file
@file path/to/file.ts

# Write file (via prompt)
"Create a new file at src/utils.ts with helper functions"

# Edit file (via prompt)
"Add TypeScript types to src/api.ts"

# Search files (via prompt)
"Find all TypeScript files that import axios"
```
Tools can execute shell commands:
```shell
# Via prompt
"Run npm test and show results"
"Execute git status"
"Start the dev server"
```
```shell
# Via prompt
"Commit these changes with message 'Add feature'"
"Create a new branch feature/auth"
"Show git log for last 5 commits"
```
```shell
# Show memory
/memory show

# Refresh memory
/memory refresh

# Clear memory
/memory clear
```
```shell
# Export session
/export --format json

# Export to markdown
/export --format markdown

# Export to HTML
/export --format html
```
Configuration is stored in `~/.ollama-code/`:

```
~/.ollama-code/
├── config.json      # Main configuration
├── settings.json    # User settings
├── memory/          # Memory storage
├── sessions/        # Session history
└── plugins/         # User plugins
```
`config.json`:

```json
{
  "baseUrl": "http://localhost:11434",
  "model": "llama3.2",
  "embeddingModel": "nomic-embed-text",
  "sessionId": "session-abc123"
}
```
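Because the config is plain JSON, scripts can read values out of it directly. A sketch using a temporary stand-in for `~/.ollama-code/config.json` (the `sed` one-liner assumes the simple one-key-per-line layout shown above; use `jq` for anything more complex):

```shell
# Stand-in file matching the config.json layout above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "baseUrl": "http://localhost:11434",
  "model": "llama3.2"
}
EOF

# Extract the "model" value.
model=$(sed -n 's/.*"model": *"\([^"]*\)".*/\1/p' "$cfg")
echo "$model"
rm -f "$cfg"
```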
`settings.json`:

```json
{
  "general": {
    "outputLanguage": "en",
    "debugMode": false,
    "autoUpdate": true
  },
  "ui": {
    "theme": "ollama-dark",
    "hideWindowTitle": false
  },
  "tools": {
    "useRipgrep": true,
    "useBuiltinRipgrep": true
  },
  "security": {
    "auth": {
      "useExternal": false
    }
  }
}
```
| Variable | Description | Default |
|---|---|---|
| `OLLAMA_HOST` | Ollama server URL | `localhost:11434` |
| `OLLAMA_BASE_URL` | Full Ollama URL | `http://localhost:11434` |
| `OLLAMA_MODEL` | Default model | `llama3.2` |
| `OLLAMA_CODE_DEBUG` | Enable debug mode | `false` |
| `OLLAMA_CODE_NO_RELAUNCH` | Disable auto-relaunch | `false` |
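The two URL variables overlap, and the docs do not state which one wins. The sketch below assumes `OLLAMA_BASE_URL` takes precedence over `OLLAMA_HOST`, with the documented default as a fallback; treat that precedence order as an assumption, not confirmed behavior:

```shell
# Resolve the server URL from the environment (precedence is assumed).
resolve_base_url() {
  if [ -n "$OLLAMA_BASE_URL" ]; then
    echo "$OLLAMA_BASE_URL"
  elif [ -n "$OLLAMA_HOST" ]; then
    echo "http://$OLLAMA_HOST"
  else
    echo "http://localhost:11434"
  fi
}

unset OLLAMA_BASE_URL OLLAMA_HOST
resolve_base_url
```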
```
ollama [options]

Options:
  --model <name>             Use specific model
  --prompt <text>            Non-interactive prompt
  --resume [session]         Resume session
  --output <format>          Output format (text|json)
  --input-format <format>    Input format (text|stream-json)
  --debug                    Enable debug mode
  --no-sandbox               Disable sandbox
  --extensions <paths>       Load extensions
  --help                     Show help
  --version                  Show version
```
Ollama Code includes 15+ built-in themes for personalized appearance.
| Theme | Description |
|---|---|
| `default` | Default dark theme |
| `ollama-dark` | Ollama branded dark |
| `ollama-light` | Ollama branded light |
| `dracula` | Dracula color scheme |
| `nord` | Nord color scheme |
| `tokyo-night` | Tokyo Night theme |
| `catppuccin` | Catppuccin Mocha |
| `github-dark` | GitHub Dark |
| `github-light` | GitHub Light |
| `atom-one-dark` | Atom One Dark |
| `ayu` | Ayu Mirage |
| `ayu-light` | Ayu Light |
| `xcode` | Xcode theme |
| `googlecode` | Google Code theme |
| `shades-of-purple` | Purple-based theme |
| `no-color` | No colors (monochrome) |
```shell
# Interactive
/theme

# Direct
/theme dracula
```
Add custom themes in settings:
```json
{
  "ui": {
    "customThemes": {
      "my-theme": {
        "colors": {
          "primary": "#ff6b6b",
          "background": "#1a1a2e",
          "text": "#eaeaea"
        }
      }
    }
  }
}
```
Ollama Code supports a plugin system for extending functionality.
| Location | Description |
|---|---|
| `~/.ollama-code/plugins/` | User plugins |
| `.ollama-code/plugins/` | Project plugins |
| Built-in | Core, Dev, File, Search, Shell tools |
```shell
# Install from npm
/extensions install @ollama-code/plugin-example

# Install from local path
/extensions link /path/to/plugin

# Install from URL
/extensions install https://github.com/user/plugin

# List installed
/extensions list

# Enable extension
/extensions enable my-extension

# Disable extension
/extensions disable my-extension

# Update extension
/extensions update my-extension

# Uninstall extension
/extensions uninstall my-extension
```
Plugin structure:
```
my-plugin/
├── ollama-extension.json    # Plugin manifest
├── index.ts                 # Main entry
├── commands/                # Custom commands
│   └── myCommand.ts
├── tools/                   # Custom tools
│   └── myTool.ts
└── skills/                  # AI skills
    └── mySkill.md
```
Manifest (`ollama-extension.json`):

```json
{
  "name": "my-plugin",
  "version": "1.0.0",
  "description": "My custom plugin",
  "main": "dist/index.js",
  "commands": ["commands/*.js"],
  "tools": ["tools/*.js"],
  "skills": ["skills/*.md"]
}
```
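The layout and manifest above can be scaffolded in a few lines of shell. A sketch (only the names come from the docs; the manifest contents are the minimal placeholder fields):

```shell
# Create the plugin directory layout described above.
mkdir -p my-plugin/commands my-plugin/tools my-plugin/skills

# Minimal manifest with the documented fields.
cat > my-plugin/ollama-extension.json <<'EOF'
{
  "name": "my-plugin",
  "version": "1.0.0",
  "description": "My custom plugin",
  "main": "dist/index.js"
}
EOF

ls my-plugin
```

From here, `/extensions link ./my-plugin` (shown earlier) would load it for local testing.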
Subagents are specialized AI agents that can handle complex, multi-step tasks.
```shell
# Open subagent manager
/agents

# Create new subagent
/agents create

# View subagent details
/agents view <name>

# Delete subagent
/agents delete <name>
```
| Agent | Description |
|---|---|
| `architect` | System design and architecture |
| `debugger` | Debugging and error analysis |
| `tester` | Test generation and execution |
| `reviewer` | Code review and quality |
```shell
# Via prompt
"Use the architect agent to design an authentication system"
"Spawn a debugger agent to fix this error"
"Have the reviewer agent check this PR"
```
Create custom subagents:
```markdown
<!-- File: .ollama-code/agents/myAgent.md -->
# My Custom Agent

You are a specialized agent for [purpose].

## Capabilities
- Capability 1
- Capability 2

## Instructions
1. Step 1
2. Step 2
```
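Writing the agent file from the shell is a one-liner with a here-document. A sketch using the path shown above (the agent body is a placeholder):

```shell
# Create a project-local subagent definition.
mkdir -p .ollama-code/agents
cat > .ollama-code/agents/myAgent.md <<'EOF'
# My Custom Agent

You are a specialized agent for reviewing shell scripts.

## Instructions
1. Read the target files.
2. Report issues along with suggested fixes.
EOF

cat .ollama-code/agents/myAgent.md
```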
Model Context Protocol (MCP) allows integration with external tools and services.
```shell
# List MCP servers
/mcp list

# Add MCP server
/mcp add <name> <command>

# Remove MCP server
/mcp remove <name>

# Show MCP status
/mcp status
```
```shell
# Add filesystem MCP server
/mcp add filesystem npx -y @modelcontextprotocol/server-filesystem /path/to/dir

# Add GitHub MCP server
/mcp add github npx -y @modelcontextprotocol/server-github

# Add PostgreSQL MCP server
/mcp add postgres npx -y @modelcontextprotocol/server-postgres
```
MCP servers are configured in settings:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path"],
      "env": {}
    }
  }
}
```
For MCP servers requiring OAuth:
```shell
# Authenticate with MCP server
/mcp auth <server-name>

# Check auth status
/mcp auth-status <server-name>
```
| Shortcut | Action |
|---|---|
| `Ctrl+C` | Cancel current operation |
| `Ctrl+D` | Exit application |
| `Ctrl+L` | Clear screen |
| `Ctrl+R` | Reverse search history |
| `Escape` | Cancel/close dialog |
| Shortcut | Action |
|---|---|
| `Enter` | Send message |
| `Shift+Enter` | New line |
| `Up` | Previous history item |
| `Down` | Next history item |
| `Tab` | Auto-complete |
| `Ctrl+V` | Paste from clipboard |
When Vim mode is enabled:
| Shortcut | Action |
|---|---|
| `Esc` | Normal mode |
| `i` | Insert mode |
| `:w` | Save |
| `:q` | Quit |
| `j`/`k` | Navigate history |
| `/` | Search |
| Shortcut | Action |
|---|---|
| `Tab` | Next element |
| `Shift+Tab` | Previous element |
| `Enter` | Confirm |
| `Escape` | Cancel |
| Arrow keys | Navigate list |
Symptoms: “Cannot connect to Ollama server”
Solutions:

- Verify the server is reachable: `curl http://localhost:11434/api/tags`
- Check the `OLLAMA_HOST` environment variable

Symptoms: “Model not found: llama3.2”
Solutions:

- List installed models: `ollama list`
- Pull the missing model: `ollama pull llama3.2`
- Or switch to an available model: `/model <available-model>`
Symptoms: Slow performance, crashes
Solutions:

- Compress the context: `/compress`
- Clear the conversation: `/clear`
- Raise the Node.js heap limit: `NODE_OPTIONS="--max-old-space-size=4096" ollama`
Symptoms: Garbled output, missing characters
Solutions:

- Reset the terminal: `reset`
- Switch to a safe theme: `/theme default`
Enable debug logging:
```shell
# Via flag
ollama --debug

# Via environment
OLLAMA_CODE_DEBUG=1 ollama
```
Debug logs are saved to:

```
~/.ollama-code/logs/session-<id>.log
```
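To inspect the most recent session without knowing its id, sort the log directory by modification time. A sketch (a temp directory stands in for `~/.ollama-code/logs` so the snippet is self-contained):

```shell
# Stand-in for ~/.ollama-code/logs with two session logs.
logdir=$(mktemp -d)
touch "$logdir/session-old.log"
sleep 1
touch "$logdir/session-new.log"

# Newest log first; against the real directory this would be:
#   ls -t ~/.ollama-code/logs/session-*.log | head -n 1
newest=$(ls -t "$logdir"/session-*.log | head -n 1)
basename "$newest"
```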
Run `/help` inside the app for the full command reference.

Apache License 2.0