TOOL UPDATES

Anthropic Adds Auto Mode to Claude Code, Letting AI Choose Its Own Permissions

megaone_admin · Mar 26, 2026 · 2 min read
Engine Score 7/10 — Important

Anthropic's new auto mode gives Claude Code a meaningful boost in agentic capability and is immediately actionable for developers. It marks a notable step toward AI agent autonomy, affects a large user base, and may set the pattern other coding tools follow.


Anthropic has introduced an auto mode for Claude Code that allows the AI coding agent to autonomously decide which actions require user permission and which can proceed without approval. The feature, announced on March 24, 2026, uses an AI-powered classifier to evaluate each action before execution, automatically allowing safe operations while blocking potentially dangerous ones like mass file deletion, sensitive data access, or malicious code execution.

Auto mode addresses what developers call the “permission tax” — the friction of repeatedly approving routine operations during long coding sessions. In Claude Code’s default mode, the agent asks for permission before each file write, terminal command, or system interaction. For complex tasks that involve hundreds of such operations, the constant approval flow can turn a productive coding session into a click-through exercise.

The previous alternative was the `--dangerously-skip-permissions` flag, which bypassed all safety checks entirely. Auto mode provides a middle path: the classifier evaluates each action against a safety model, approving routine operations like writing test files or running build commands while flagging actions that could cause irreversible damage. If Claude repeatedly attempts a blocked action, it eventually prompts the user for explicit permission rather than silently failing.
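The approve/block/escalate flow described above can be sketched in a few lines of Python. This is an illustrative toy, not Anthropic's implementation: the real system uses an AI-powered classifier rather than pattern matching, and the names (`classify_action`, `run_action`, the escalation threshold) are assumptions made for the example.

```python
# Minimal sketch of an auto-mode-style permission gate, assuming a
# classifier that returns allow/block and an escalation rule for
# repeatedly blocked actions. All names here are hypothetical.
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"


# Stand-in heuristics; the real classifier is a learned safety model.
DANGEROUS_PATTERNS = ("rm -rf", "curl | sh", ".env")


def classify_action(command: str) -> Verdict:
    """Flag obviously risky commands; allow everything else."""
    if any(pattern in command for pattern in DANGEROUS_PATTERNS):
        return Verdict.BLOCK
    return Verdict.ALLOW


def run_action(command: str, blocked_counts: dict, escalate_after: int = 3) -> str:
    """Execute allowed actions; after repeated blocks, ask the user."""
    if classify_action(command) is Verdict.ALLOW:
        return "executed"
    blocked_counts[command] = blocked_counts.get(command, 0) + 1
    if blocked_counts[command] >= escalate_after:
        # Rather than silently failing forever, surface an explicit prompt.
        return "ask_user"
    return "blocked"
```

Under this sketch, a routine build command runs immediately, while a destructive command is blocked twice and then escalated to the user for an explicit decision, mirroring the behavior the announcement describes.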

The feature is available as a research preview for Claude Team users, with Enterprise and API rollout planned for the coming days. It works with both Claude Sonnet 4.6 and Opus 4.6 models. Anthropic acknowledges that the classifier is not perfect — it may occasionally allow risky actions when user intent is ambiguous or block benign operations that appear suspicious out of context. The company recommends using auto mode in isolated or sandboxed environments.

The launch reflects a broader industry pattern. As AI coding agents become more capable of sustained, autonomous work — running for hours, modifying dozens of files, executing complex build and test cycles — the permission model needs to evolve beyond binary approve-everything or approve-nothing options. Microsoft’s Copilot, Cursor’s Composer, and OpenAI’s Codex are all navigating similar tradeoffs between developer productivity and code safety.

Auto mode ships alongside other recent Claude Code expansions, including Claude Code Review for automated PR reviews, a one-million-token context window, and Dispatch for Cowork, which allows mobile task assignment. Claude Code's annualized revenue reached $2.5 billion by February 2026, making the development tool Anthropic's fastest-growing revenue line and the product where permission UX has the most direct impact on customer retention.


MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked by our editorial team, linked to primary sources, and rated using our six-factor Engine Score methodology.
