
Architecture

Termaid is an Electron desktop application with a React frontend. This page describes the high-level architecture, the main modules, and how data flows through the application.

Overview

Termaid is split into three layers: a React renderer (the UI), an Electron main process (system access, LLM calls, persistence), and a shared/ module used by both processes. Renderer and main communicate exclusively over IPC through the preload bridge.

Project Structure

termaid/
├── src/                    # Renderer process (React)
│   ├── components/         # React components
│   ├── services/           # Frontend services (LLM, chat, terminal, config)
│   ├── store/              # Zustand state management
│   ├── hooks/              # Custom React hooks
│   ├── locales/            # i18n translations (en, fr)
│   ├── types/              # Frontend-specific types
│   └── utils/              # Utilities (logger)
├── electron/               # Main process (Electron)
│   ├── main.ts             # App entry point, window creation
│   ├── preload.ts          # Context bridge (IPC API)
│   ├── ipc-handlers/       # IPC handler modules
│   │   ├── providers/      # LLM provider implementations
│   │   ├── llm-service.ts  # LLM IPC routing
│   │   ├── terminal.ts     # PTY terminal management
│   │   ├── config.ts       # Configuration persistence
│   │   └── conversation.ts # Conversation storage
│   ├── services/           # Backend services
│   └── prompts/            # LLM prompt templates
├── shared/                 # Shared between renderer and main
│   ├── types.ts            # Shared TypeScript interfaces
│   ├── config.ts           # Default config, env var merging
│   ├── ansi.ts             # ANSI escape code utilities
│   └── promptDetection.ts  # Shell prompt detection
├── docs/                   # VitePress documentation
└── tests/                  # E2E tests (Playwright)

Renderer Process (Frontend)

The renderer runs in a sandboxed Chromium context with no direct Node.js access. It communicates with the main process exclusively through the preload bridge.

Components

| Component | File | Role |
| --- | --- | --- |
| App | src/App.tsx | Root layout: header, terminal, chat panel, config |
| Terminal | src/components/Terminal.tsx | xterm.js terminal emulator |
| ChatPanel | src/components/ChatPanel.tsx | AI chat interface with conversation history |
| Header | src/components/Header.tsx | App header with action buttons |
| ConfigPanel | src/components/ConfigPanel.tsx | Settings modal (provider, model, theme) |
| Resizer | src/components/Resizer.tsx | Draggable panel resizer |
| AICommand | src/components/chat/AICommand.tsx | Renders AI-generated shell commands |
| ChatMessage | src/components/chat/ChatMessage.tsx | Renders conversation messages |
| CommandInterpretation | src/components/chat/CommandInterpretation.tsx | Renders command output analysis |

Services

| Service | File | Role |
| --- | --- | --- |
| LLMService | src/services/llmService.ts | LLM API wrapper with response caching |
| chatService | src/services/chatService.ts | Chat flow orchestration |
| commandExecutionService | src/services/commandExecutionService.ts | Shell command execution and output capture |
| terminalService | src/services/terminalService.ts | Terminal lifecycle management |
| configService | src/services/configService.ts | Configuration read/write |
| ollamaService | src/services/ollamaService.ts | Ollama-specific helpers (model listing) |

State Management

The application uses a single Zustand store (src/store/useStore.ts) that manages:

  • Config — active LLM provider settings, theme, font size, shell
  • Terminal — PTY process ID, captured output buffer
  • AI — current AI command response, loading state, errors
  • Conversations — list, current conversation, CRUD operations
  • UI — config panel visibility, selected command
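The slices above can be sketched as a single state object with co-located actions. The following is a minimal vanilla version with a zustand-like getState/setState/subscribe surface; the field names are illustrative assumptions, not the actual useStore.ts API:

```typescript
// Illustrative shape of the single store; field names are assumptions.
interface AppState {
  provider: string;            // Config slice
  theme: "dark" | "light";
  ptyId: number | null;        // Terminal slice
  outputBuffer: string;
  aiLoading: boolean;          // AI slice
  aiError: string | null;
  configPanelOpen: boolean;    // UI slice
}

type Listener = (state: AppState) => void;

// Tiny zustand-like store: partial updates merge into one state object,
// and subscribers are notified on every change.
function createStore(initial: AppState) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    getState: () => state,
    setState: (partial: Partial<AppState>) => {
      state = { ...state, ...partial };
      listeners.forEach((l) => l(state));
    },
    subscribe: (l: Listener) => {
      listeners.add(l);
      return () => listeners.delete(l);
    },
  };
}

const store = createStore({
  provider: "ollama",
  theme: "dark",
  ptyId: null,
  outputBuffer: "",
  aiLoading: false,
  aiError: null,
  configPanelOpen: false,
});

store.setState({ aiLoading: true });
```

Keeping every slice in one store lets any component select exactly the fields it needs while updates stay a single merge.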

Main Process (Backend)

The Electron main process handles system-level operations through IPC handlers.

IPC Handlers

Each handler module registers ipcMain.handle() listeners:

| Handler | Channel prefix | Role |
| --- | --- | --- |
| terminal | terminal:* | Create/write/resize/destroy PTY processes |
| llm-service | llm:* | Route LLM requests to the active provider |
| config | config:* | Persist/retrieve config via electron-store |
| conversation | conversation:* | CRUD for conversation history (JSON files) |
| video | | Desktop capture for demo recording |
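The registration pattern can be sketched with a plain map standing in for ipcMain.handle(); channel names follow the table above, while the map and helper functions are hypothetical, not Electron's API:

```typescript
// Stand-in for ipcMain.handle(): maps "prefix:action" channels to handlers.
type IpcHandler = (...args: unknown[]) => unknown;

const handlers = new Map<string, IpcHandler>();

function handle(channel: string, fn: IpcHandler): void {
  handlers.set(channel, fn);
}

// Each handler module registers all of its channels under one prefix,
// mirroring the terminal:* row above.
function registerTerminalHandlers(): void {
  handle("terminal:create", (shell) => ({ ptyId: 1, shell }));
  handle("terminal:write", () => undefined);
  handle("terminal:resize", () => undefined);
  handle("terminal:destroy", () => undefined);
}

registerTerminalHandlers();

// The renderer's ipcRenderer.invoke() resolves through the same channel name.
async function invoke(channel: string, ...args: unknown[]): Promise<unknown> {
  const fn = handlers.get(channel);
  if (!fn) throw new Error(`No handler registered for ${channel}`);
  return fn(...args);
}
```

The prefix convention keeps each module's channels discoverable and avoids collisions between handler modules.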

LLM Provider System

LLM providers follow a strategy pattern with a shared base class:

BaseLLMProvider (electron/ipc-handlers/providers/base-provider.ts) implements:

  • generateCommand() — structured output with Zod schema validation and fallback parsing
  • explainCommand() — free-text explanation of a shell command
  • interpretOutput() — structured analysis of command output

Each provider only needs to initialize its LangChain chat model and implement testConnection() and listModels().
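The strategy pattern can be sketched as follows; method signatures are simplified assumptions, and the real base class builds LangChain chains with Zod-validated structured output:

```typescript
interface AICommand { command: string; explanation: string }

// Simplified stand-in for BaseLLMProvider: shared prompting logic lives in
// the base class; subclasses only supply the model call and discovery methods.
abstract class BaseLLMProvider {
  protected abstract chat(prompt: string): Promise<string>;
  abstract testConnection(): Promise<boolean>;
  abstract listModels(): Promise<string[]>;

  async generateCommand(request: string): Promise<AICommand> {
    const raw = await this.chat(`Generate a shell command for: ${request}`);
    // The real implementation validates structured output against a Zod
    // schema and falls back to text parsing; here we just wrap the reply.
    return { command: raw, explanation: `Generated for "${request}"` };
  }
}

// A concrete provider wires up its model and the two discovery methods.
class FakeOllamaProvider extends BaseLLMProvider {
  protected async chat(_prompt: string): Promise<string> { return "ls -la"; }
  async testConnection(): Promise<boolean> { return true; }
  async listModels(): Promise<string[]> { return ["llama3", "mistral"]; }
}
```

Because generateCommand(), explainCommand(), and interpretOutput() live in the base class, adding a provider is limited to model initialization plus testConnection() and listModels().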

Preload Bridge

The preload script (electron/preload.ts) uses contextBridge.exposeInMainWorld() to expose a typed window.electronAPI object. This is the only communication channel between renderer and main process, ensuring full context isolation and sandbox security.
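A sketch of the typed bridge surface; the ElectronAPI shape and method names here are assumptions, and injecting the invoke function keeps the sketch runnable outside Electron (in the real preload, each method wraps ipcRenderer.invoke before the object is passed to contextBridge.exposeInMainWorld):

```typescript
// Hypothetical slice of the window.electronAPI surface.
interface ElectronAPI {
  terminalCreate(shell: string): Promise<{ ptyId: number }>;
  llmGenerate(request: string): Promise<string>;
}

type Invoke = (channel: string, ...args: unknown[]) => Promise<unknown>;

// In preload.ts this would be called with ipcRenderer.invoke and the result
// exposed via contextBridge.exposeInMainWorld("electronAPI", api).
function createElectronAPI(invoke: Invoke): ElectronAPI {
  return {
    terminalCreate: (shell) =>
      invoke("terminal:create", shell) as Promise<{ ptyId: number }>,
    llmGenerate: (request) =>
      invoke("llm:generate", request) as Promise<string>,
  };
}
```

Typing the bridge once means both processes compile against the same contract, so a renamed channel or changed payload surfaces as a type error rather than a silent runtime failure.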

Shared Module

The shared/ directory contains code used by both processes:

| File | Purpose |
| --- | --- |
| types.ts | Core interfaces: AppConfig, AICommand, Conversation, ConversationMessage |
| config.ts | Default configuration, environment variable loading and merging |
| ansi.ts | ANSI/OSC escape code stripping for clean terminal output |
| promptDetection.ts | Shell prompt pattern detection (bash, zsh, fish, PowerShell) |
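The kind of stripping ansi.ts performs can be sketched with two regexes; these patterns are illustrative, not the actual implementation:

```typescript
// Strip CSI sequences (e.g. colors: ESC [ 31 m) and OSC sequences
// (e.g. window title: ESC ] 0 ; title BEL) from captured terminal output.
function stripAnsi(input: string): string {
  const csi = /\x1b\[[0-9;?]*[ -/]*[@-~]/g;          // CSI ... final byte
  const osc = /\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)/g;  // OSC ... BEL or ST
  return input.replace(csi, "").replace(osc, "");
}
```

Stripping these sequences matters because the raw PTY stream is what gets fed back to the LLM for output interpretation, and escape codes would otherwise pollute the prompt.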

Data Flow

Command Generation

The user's natural-language request flows from ChatPanel through chatService to the llm:* IPC handler, which routes it to the active provider. The provider's generateCommand() returns a structured AICommand, which is rendered in the chat panel for review.

Command Execution

Once the user explicitly validates a command, commandExecutionService executes it via the terminal:* handler (optionally inside a sandbox), captures the output, and passes it to interpretOutput() for structured analysis.

Technology Stack

| Layer | Technology |
| --- | --- |
| Desktop shell | Electron 40 |
| Frontend framework | React 19 |
| Language | TypeScript 5.9 |
| Terminal emulator | xterm.js |
| State management | Zustand |
| LLM integration | LangChain.js |
| LLM providers | Ollama, Anthropic Claude, OpenAI |
| Build tool | Vite |
| Packaging | electron-builder |
| Testing | Vitest + Testing Library (unit), Playwright (E2E) |
| Linting | Biome |
| Documentation | VitePress |
| i18n | react-i18next |
| Config storage | electron-store |

Security Services

Sandbox Service

The sandbox service (shared/sandbox.ts) provides multiple levels of isolation for command execution:

Sandbox Modes:

| Mode | Description | Use Case |
| --- | --- | --- |
| none | Direct execution, no isolation | Trusted commands only |
| restricted | Limited environment with blocked commands | Most commands, safe default |
| docker | Container isolation with Docker | Untrusted commands, testing |
| system | OS sandbox (firejail on Linux) | Maximum isolation |

Features:

  • Command whitelisting and blacklisting
  • Timeout enforcement
  • Environment variable restriction
  • Read-only mount options (Docker)
  • Automatic container cleanup (Docker --rm flag)
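For the Docker mode, the features above map onto docker run flags. A sketch of an argument builder (the docker run options are real; the builder function and its option names are hypothetical):

```typescript
interface DockerSandboxOptions {
  image: string;
  readOnly: boolean;       // mount the working directory read-only
  timeoutSeconds: number;  // enforced via `timeout` inside the container
  workdir: string;
}

// Build a `docker run` argv for an isolated, auto-cleaned execution.
function buildDockerArgs(cmd: string, opts: DockerSandboxOptions): string[] {
  const mount = `${opts.workdir}:/work${opts.readOnly ? ":ro" : ""}`;
  return [
    "run",
    "--rm",              // automatic container cleanup
    "-v", mount,         // read-only mount when requested
    "-w", "/work",
    opts.image,
    "timeout", String(opts.timeoutSeconds),
    "sh", "-c", cmd,
  ];
}
```

Running `timeout` inside the container (rather than only killing the docker client) ensures the contained process itself is terminated when the limit is hit.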

Audit Service

The audit service (electron/services/auditService.ts) logs all command executions for security and compliance:

Logged Data:

  • Command string and execution result (success/blocked/cancelled/error)
  • Risk level (safe/warning/dangerous)
  • User approval status and action
  • Execution time and output length
  • Sandbox mode used

Features:

  • Query logs by date, result, or risk level
  • Export to JSON or CSV
  • Statistics API (success rate, top commands, averages)
  • Automatic log rotation (configurable max entries)
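The rotation, query, and statistics behavior can be sketched over an in-memory log; the entry fields mirror the lists above, while the class itself is illustrative rather than the real auditService.ts API:

```typescript
type Result = "success" | "blocked" | "cancelled" | "error";
type Risk = "safe" | "warning" | "dangerous";

interface AuditEntry {
  command: string;
  result: Result;
  risk: Risk;
  executionMs: number;
  sandboxMode: string;
  timestamp: number;
}

class AuditLog {
  private entries: AuditEntry[] = [];
  constructor(private maxEntries: number) {}

  log(entry: AuditEntry): void {
    this.entries.push(entry);
    // Automatic rotation: drop the oldest entries past the configured cap.
    if (this.entries.length > this.maxEntries) {
      this.entries.splice(0, this.entries.length - this.maxEntries);
    }
  }

  query(filter: Partial<Pick<AuditEntry, "result" | "risk">>): AuditEntry[] {
    return this.entries.filter((e) =>
      (filter.result === undefined || e.result === filter.result) &&
      (filter.risk === undefined || e.risk === filter.risk),
    );
  }

  successRate(): number {
    if (this.entries.length === 0) return 0;
    const ok = this.entries.filter((e) => e.result === "success").length;
    return ok / this.entries.length;
  }
}
```

Rotation at write time keeps the log bounded without a separate cleanup job, at the cost of losing history beyond the cap.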

Provider Registry

The provider registry (electron/ipc-handlers/providers/registry.ts) manages LLM provider registration and discovery:

Responsibilities:

  • Register/unregister providers (Ollama, Claude, OpenAI)
  • Validate provider configurations
  • Create provider instances
  • Test connections and list available models
  • Expose provider metadata (name, icon, features)

Usage:

```typescript
// Register a provider
providerRegistry.register(ollamaFactory)

// Get provider info
const infos = await providerRegistry.getProviderInfos(configs)

// Create a provider instance
const provider = providerRegistry.createProvider('ollama', config)
```

Security Model

  • Context isolation: The renderer has no direct access to Node.js APIs
  • Sandbox enabled: The renderer runs in a sandboxed Chromium process
  • Command validation: All AI-generated commands are validated before execution
  • Sandbox execution: Commands can run in isolated environments (restricted, Docker, system)
  • Audit logging: All command executions are logged with detailed metadata
  • No automatic execution: AI-generated commands require explicit user validation before execution
  • Local-first: Ollama support allows fully offline, privacy-preserving usage
  • API keys stored locally: Configuration persisted via electron-store on the user's machine

Security Services Reference

Command Validation Service

File: shared/commandValidation.ts

Validates AI-generated commands with risk assessment:

| Risk Level | Description |
| --- | --- |
| safe | Read-only commands |
| ⚠️ warning | Requires attention |
| 🚫 dangerous | Blocked by default |

Risk Categories: File deletion, System modification, Network operations, Privilege escalation, Disk operations, Process control, Data destruction, Configuration changes.
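Pattern-based risk assessment along these levels can be sketched as follows; the patterns and function are illustrative, and the real commandValidation.ts covers many more categories:

```typescript
type RiskLevel = "safe" | "warning" | "dangerous";

// A few illustrative patterns per category.
const dangerousPatterns = [
  /\brm\s+-rf\s+\//,        // data destruction
  /\bmkfs\b/,               // disk operations
  /\bdd\b.*\bof=\/dev\//,   // disk operations
];
const warningPatterns = [
  /\bsudo\b/,               // privilege escalation
  /\bcurl\b|\bwget\b/,      // network operations
  /\bkill\b/,               // process control
];

// Most-severe match wins; anything unmatched is treated as safe.
function assessRisk(command: string): RiskLevel {
  if (dangerousPatterns.some((p) => p.test(command))) return "dangerous";
  if (warningPatterns.some((p) => p.test(command))) return "warning";
  return "safe";
}
```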

Sandbox Service

File: shared/sandbox.ts

Multiple isolation levels for command execution:

| Mode | Description | Timeout |
| --- | --- | --- |
| none | Direct execution | 60s |
| restricted | Limited environment | 30s |
| docker | Container isolation | 60s |
| system | Linux sandbox (firejail) | 60s |

Audit Service

File: electron/services/auditService.ts

Comprehensive audit logging: command, result, risk level, execution time, sandbox mode, export to JSON/CSV.

Provider Registry

File: electron/ipc-handlers/providers/registry.ts

Centralized LLM provider management using Factory pattern:

| Method | Purpose |
| --- | --- |
| register() | Register provider factory |
| list() | List all providers |
| createProvider() | Instantiate with config |
| testConnection() | Test provider connection |

Released under the MIT License.