A new open-source command-line tool called pls translates natural language descriptions into shell commands using locally running AI models through Ollama. Users describe what they want in plain English — “find all Python files modified in the last week” — and pls generates the corresponding shell command, displays it for confirmation, and optionally executes it.
The distinguishing feature is fully offline operation. Unlike GitHub Copilot CLI, Warp AI, or other cloud-dependent alternatives, pls runs entirely on the user’s machine with no data sent to external servers. This makes it suitable for environments with strict data policies, air-gapped networks, or users who prefer not to send their command history to third-party services.
pls uses Ollama as its inference backend, supporting any model available in the Ollama library. The default configuration uses smaller models optimized for instruction-following, but users can specify larger models for more complex command generation. The tool handles the prompt engineering internally, translating user intent into system-appropriate commands based on the detected shell and operating system.
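pls's internal prompt is not published, but the shape of the approach — detect the environment, fold it into an instruction, and send it to Ollama's standard `/api/generate` endpoint — might look like the following sketch. The prompt wording and the `llama3.2` default model are assumptions for illustration:

```python
import os
import platform

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(intent: str, model: str = "llama3.2") -> dict:
    """Build an Ollama /api/generate payload for a shell-command request."""
    shell = os.environ.get("SHELL", "/bin/sh")
    system = platform.system()  # e.g. "Linux" or "Darwin"
    prompt = (
        f"You are a command-line assistant. The user is on {system} "
        f"using {shell}.\n"
        "Translate this request into a single shell command, "
        "with no explanation:\n"
        f"{intent}"
    )
    # stream=False asks Ollama to return one complete response object.
    return {"model": model, "prompt": prompt, "stream": False}
```

POSTing this payload to `OLLAMA_URL` returns a JSON object whose `response` field holds the generated command; embedding the detected shell and OS is what lets the same intent yield, say, BSD `find` syntax on macOS and GNU syntax on Linux.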
The tool includes safety features: generated commands are displayed before execution, destructive operations (rm, format, dd) require explicit confirmation, and a dry-run mode shows what would execute without running it. Command history is stored locally for reference and can be searched to reuse previous generations.
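A guard for destructive operations like those the article lists can be as simple as checking the leading word of each pipeline segment. This matching logic is a hypothetical sketch, not pls's actual implementation, and deliberately fails safe on input it cannot parse:

```python
import shlex

# Commands treated as destructive, per the article's examples.
DESTRUCTIVE = {"rm", "dd", "format", "mkfs"}
SEPARATORS = {"|", ";", "&&", "||"}

def is_destructive(command: str) -> bool:
    """Return True if any segment of the command starts with a destructive tool."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return True  # unparseable quoting: fail safe and require confirmation
    check_next = True  # the first token is always a command name
    for tok in tokens:
        if tok in SEPARATORS:
            check_next = True  # the next token starts a new segment
        elif check_next:
            name = tok.rsplit("/", 1)[-1]  # strip any path prefix
            if name in DESTRUCTIVE or name.startswith("mkfs."):
                return True
            check_next = False
    return False
```

A real checker would also need to handle indirection such as `sudo`, `xargs`, or `sh -c`, which this sketch does not; the point is only that the confirmation gate can key off the generated text before anything runs.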
pls joins a growing category of AI-powered terminal tools that lower the barrier to effective command-line use. For developers who know what they want to accomplish but cannot recall the exact syntax for complex commands involving find, awk, sed, or ffmpeg, the tool eliminates the trip to Stack Overflow or man pages. The offline requirement limits model capability compared to cloud alternatives, but for the most common use case — translating a clear intent into correct syntax — local models are sufficient.
