Bridge the gap between "what I want to do" and "how do I write that command?" with an intelligent terminal assistant that translates natural language into executable bash commands.
Join the growing community of developers who are saying hi to their shell.
| Provider | Model | Commands | Avg Response |
|---|---|---|---|
| Cloud | — | 38 | 4.5s |
| Cloud | nemotron-3-nano-30b-a3b:free | 6 | 5.2s |
| Cloud | GLM-4.5-air | 4 | 7.5s |
| Cloud | GLM-4.7 | 1 | 23.2s |
Whether you're a terminal veteran or a newcomer, hi-shell provides a fast, AI-powered way to generate and execute commands safely.
Run models locally using candle with hardware acceleration (Metal/CUDA). Supports Llama, Phi-3, and Qwen2 architectures.
Connect to your own Ollama or LM Studio instance for complete privacy and control.
Seamless integration with OpenRouter, Gemini, and Anthropic for powerful cloud-based models.
A dedicated shell environment for continuous assistance and iterative command building.
Dangerous commands are flagged, and confirmation is required before execution. Your system stays safe.
Optimized for speed with hardware acceleration support. Get your commands back in seconds.
Choose your preferred installation method. We detect your operating system automatically.
```shell
# curl (macOS/Linux)
curl -sSL https://raw.githubusercontent.com/tufantunc/hi-shell/main/install.sh | bash

# Homebrew (macOS/Linux)
brew tap tufantunc/tap && brew install hi-shell

# Scoop (Windows)
scoop bucket add hi-shell https://github.com/tufantunc/scoop-bucket && scoop install hi-shell

# Cargo (any platform with a Rust toolchain)
cargo install hi-shell
```

Run the initialization command to set up your preferred LLM provider:

```shell
hi-shell --init
```

Just prefix your natural language request with `hi-shell` and let the magic happen.
Get quick answers directly from your command line. Just describe what you want to do in natural language and get the exact command you need.
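For instance, a one-shot request could look like the following (hypothetical transcript: the prompt wording, the suggested command, and the confirmation prompt are illustrative, not verbatim tool output):

```console
$ hi-shell "find all files larger than 100MB in my home directory"
Suggested command:
  find ~ -type f -size +100M
Run this command? [y/N]
```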
Start a dedicated shell environment for continuous assistance. The context is preserved between commands, so you can refine your requests naturally like a conversation.
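A session might look something like this (hypothetical transcript: how the interactive shell is launched and how its prompt is rendered are assumptions, not documented behavior). Note how the second request refines the first instead of starting over:

```console
$ hi-shell            # assumption: running with no arguments opens the interactive shell
hi> list all docker containers
  docker ps -a
hi> only the running ones
  docker ps           # context from the previous request is reused
```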
Dangerous commands are automatically detected and flagged with a warning. Confirmation is always required before execution, keeping your system safe.
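In practice, that flow could look like this (hypothetical transcript: the exact warning text and prompt format are illustrative):

```console
$ hi-shell "delete everything in /tmp"
Suggested command:
  rm -rf /tmp/*
WARNING: this command is destructive and cannot be undone.
Run this command? [y/N] n
Aborted.
```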
Your data and privacy are our top priority. Everything is open source, transparent, and fully under your control.
All source code is fully available under MIT license. Inspect, modify, distribute, and improve it freely.
Connect to your own Ollama or LM Studio instance. Run completely offline without any internet connection.
Run Llama, Phi-3, and Qwen2 models locally on your own hardware with Metal/CUDA acceleration.
We never track your commands, prompts, or outputs. Only anonymous system information is collected (opt-in).
Completely opt-in anonymous usage statistics. No personal data is ever collected. Disable anytime.
Fully inspectable code. Every commit, change, and discussion is visible on GitHub.
Active community with issues and PRs ensures continuous oversight and improvement. Security vulnerabilities are quickly identified and fixed.
Dangerous commands are automatically detected and flagged. Confirmation is always required before execution. Your system stays protected by default.
Inspect the code, run it locally, contribute improvements. Everything is under your control.
Stop searching for commands. Start describing what you want to do.