An open-source command-line tool called pls converts plain-English descriptions into executable shell commands using locally-running AI models through Ollama. The project is publicly available on GitHub under the pls-ai organization. Author details were not available at time of publication.
- pls runs entirely on the user’s machine via Ollama — no data is transmitted to external servers at any point during operation
- The tool auto-detects the active shell and operating system, generating syntactically correct commands tailored to the user’s specific environment
- Destructive operations (specifically rm, format, and dd) require an explicit additional confirmation step before execution
- Command history is stored and searchable locally, enabling reuse of previous generations without re-querying the model
What Happened
pls accepts a natural-language request — for example, “find all Python files modified in the last week” — generates the matching shell command, displays it in full for the user to review, and executes it upon confirmation. The project is hosted at github.com/pls-ai/pls and operates without any internet connection after the initial model is downloaded through Ollama. Unlike GitHub Copilot CLI and Warp AI, which route every query through external APIs, pls performs all inference on the user’s local hardware.
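For the example request above, the correct result on a GNU/Linux system would be the command below; the exact output pls produces will vary with the model in use.

```shell
# List Python files under the current directory that were
# modified less than 7 days ago (GNU find syntax).
find . -name '*.py' -mtime -7
```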
Why It Matters
Cloud-based AI terminal assistants expose operational context — directory structures, filenames, command patterns, and task intent — to third-party infrastructure on every query. For organizations subject to data handling regulations, or for users on networks without external connectivity, this has made adoption of cloud-dependent tools impractical regardless of their capability.
The past two years have produced a wave of AI-powered CLI assistants, most of which depend on API calls to hosted services. Ollama, which packages model inference into a locally-running server process, has become the standard local inference layer for developer tools seeking to replicate this functionality without cloud dependencies. pls applies that pattern specifically to shell command generation — a use case where query volume is high and the sensitivity of the queries (exposing what a user is doing on their machine) is a meaningful concern.
Technical Details
pls uses Ollama as its sole inference backend and is compatible with any model available in the Ollama library. The default configuration targets smaller models optimized for instruction-following. For the most common use case — translating a clear, single-intent description into a shell command — smaller local models are sufficient; users can override the default and specify a larger model when handling more complex or multi-step queries.
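In practice, any model pulled through Ollama's standard workflow can serve as the backend. A minimal setup might look like the following; the model name is illustrative, and pls's own configuration syntax is not documented here.

```shell
# Pull a small instruction-tuned model into the local Ollama store
ollama pull llama3.2:3b

# Ollama serves inference over HTTP on localhost:11434; a tool like
# pls would send its constructed prompt to the /api/generate endpoint:
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.2:3b", "prompt": "Translate to a shell command: list open ports", "stream": false}'
```

Because the HTTP interface is local, swapping in a larger model for complex queries is a configuration change, not an architectural one.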
Prompt engineering is handled internally. Rather than passing raw user input directly to the model, pls first detects the active shell and operating system, then constructs a context-aware prompt. This distinction matters in practice: commands generated for Linux using GNU coreutils may not function correctly on macOS, which ships BSD variants of utilities like find and date with different flag syntax. By embedding environment context at prompt construction time, the tool produces output matched to the user’s actual runtime rather than a generic platform assumption.
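The date utility illustrates the divergence: computing "one week ago" takes different flags on each platform, so a prompt that names the wrong platform yields a command that fails at runtime.

```shell
# GNU coreutils (Linux): relative dates via -d
date -d '1 week ago' +%Y-%m-%d

# BSD date (macOS): the equivalent uses -v instead
# date -v-1w +%Y-%m-%d
```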
A dry-run mode allows full preview of generated commands without triggering execution. The safety layer explicitly identifies and flags destructive operations — rm, format, and dd — requiring a separate confirmation before any of these proceed. All generated commands are logged locally in a searchable history file.
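One way such a check could work, sketched in shell; the function name and its pattern list are illustrative, not pls's actual implementation.

```shell
# Return success (0) when the command's first word is a known
# destructive utility, so the caller can demand extra confirmation.
is_destructive() {
  case "${1%% *}" in
    rm|dd|format) return 0 ;;
    *)            return 1 ;;
  esac
}
```

Under this sketch, is_destructive "rm -rf build/" succeeds and triggers the extra confirmation, while is_destructive "ls -la" does not.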
Who’s Affected
Developers who work regularly with utilities that have dense or non-intuitive option sets are the primary target group. The project specifically identifies find, awk, sed, and ffmpeg as representative pain points — tools where recalling exact flag combinations for a given task typically requires consulting man pages or running an external search.
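A representative example of that burden: combining find and sed for a bulk in-place substitution, where every flag has to be recalled exactly (the filenames and pattern here are illustrative).

```shell
# Replace every occurrence of 'foo' with 'bar', in place,
# across all .txt files below the current directory (GNU sed).
find . -name '*.txt' -exec sed -i 's/foo/bar/g' {} +
```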
System administrators in government, healthcare, financial services, and other environments where policy restricts cloud tool usage represent a distinct use case. Teams managing air-gapped development environments, where internet access is structurally unavailable, can use pls without any architectural exceptions. Individual developers who prefer not to route terminal activity and command history through third-party infrastructure form a third group.
What’s Next
As an open-source project, pls can be extended by contributors to add shell support, modify default model configurations, or introduce output validation to reduce malformed command generation on edge-case queries. The project’s public roadmap was not documented at time of publication.
The primary limitation is model capability: locally-run models are generally less capable than large-scale hosted alternatives, and performance on ambiguous, multi-step, or highly context-dependent queries may be inconsistent compared to cloud tools. The GitHub repository is the current venue for tracking development and submitting contributions.