Getting Started
Quick Install
Download the latest release for your platform:
| Platform | Format | Download |
|---|---|---|
| Linux | AppImage | Termaid-1.3.6.AppImage |
| Linux | Debian/Ubuntu | termaid_1.3.6_amd64.deb |
| macOS | DMG (ARM) | Termaid-1.3.6-arm64.dmg |
| Windows | Installer | Termaid.Setup.1.3.6.exe |
See all versions on the Releases page.
Linux (AppImage)
chmod +x Termaid-1.3.6.AppImage
./Termaid-1.3.6.AppImage
Linux (Debian/Ubuntu)
sudo dpkg -i termaid_1.3.6_amd64.deb
macOS
Open the .dmg file and drag Termaid to your Applications folder.
Windows
Run the Termaid.Setup.1.3.6.exe installer and follow the steps.
Development Setup
If you want to run Termaid from source:
Prerequisites
- Node.js 18+ and npm
- Ollama installed and running (for local use)
- Python 3 and make (for node-pty compilation on Linux)
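A quick way to confirm these tools are on your PATH before building (a small sketch; the command names come from the list above):

```shell
# Report whether each build prerequisite is installed.
for cmd in node npm python3 make; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```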
Clone and install
git clone https://github.com/openhoat/termaid.git
cd termaid
npm install
Configure an LLM provider
Termaid supports multiple providers. Choose one (or more):
Option A — Ollama (local or remote)
Visit ollama.ai and follow the installation instructions for your operating system.
ollama serve
ollama pull llama3.2:3b # recommended default model
Recommended Ollama Models:
| Model | Size | Use Case |
|---|---|---|
| llama3.2:3b | 3B | Default - best balance of speed and quality |
| llama3.1:8b | 8B | Better quality, requires more RAM |
| mistral:7b | 7B | Good alternative, strong reasoning |
| qwen2.5:3b | 3B | Lightweight alternative, fast responses |
You can change the model in the configuration panel or via the TERMAID_OLLAMA_MODEL environment variable.
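For example, to try a different model for a single session (a sketch; the variable name comes from the note above, and the model from the table):

```shell
# Override the default model for this session only.
export TERMAID_OLLAMA_MODEL=llama3.1:8b
npm run dev
```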
If you're using Ollama on a remote machine, configure the URL in the Termaid configuration panel.
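If you also want the Ollama CLI on your workstation to talk to that remote instance, Ollama's own OLLAMA_HOST variable points it there (the address below is a placeholder; Termaid's URL is still set in its configuration panel):

```shell
# Placeholder address: replace with your remote Ollama server.
export OLLAMA_HOST=http://192.168.1.50:11434
# 'ollama list' should now show the remote server's models.
```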
Option B — Claude (Anthropic API)
Get an API key from console.anthropic.com and set it as an environment variable:
export ANTHROPIC_API_KEY=sk-ant-...
Then select Claude as the provider in the configuration panel.
Option C — OpenAI
Get an API key from platform.openai.com and set it as an environment variable:
export TERMAID_OPENAI_API_KEY=sk-...
Then select OpenAI as the provider in the configuration panel.
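Since a missing key usually only surfaces once the app calls the provider, a quick pre-launch check can help (a sketch; the variable names are the ones shown above, and Ollama needs no key):

```shell
# Warn when neither API key is exported (Ollama works without one).
if [ -z "${ANTHROPIC_API_KEY:-}" ] && [ -z "${TERMAID_OPENAI_API_KEY:-}" ]; then
  echo "No API key set: use Ollama, or export a key before launching"
fi
```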
Run the application
npm run dev
This will start:
- The Vite development server (port 5173)
- The Electron application
Linux with Wayland
On Linux with Wayland, you may encounter warnings related to Wayland/Vulkan compatibility. To force X11 usage:
ELECTRON_OZONE_PLATFORM_HINT=x11 npm run dev
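To avoid typing the hint on every launch, it can be exported from a shell startup file (a sketch; ~/.profile is one common choice, adjust for your shell):

```shell
# Persist the X11 hint for future sessions.
echo 'export ELECTRON_OZONE_PLATFORM_HINT=x11' >> ~/.profile
```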