Anthropic Introduces Safer “Auto Mode” That Lets Its Coding AI Work Independently
In a move aimed at making AI-assisted software development smoother and safer, Anthropic has introduced an “auto mode” for its coding assistant, Claude Code. The new feature allows the AI to carry out tasks on its own without repeatedly asking users for approval — a shift designed to reduce friction for developers who rely on AI for everyday coding work.
The update is especially relevant for programmers who use AI for what many now call “vibe coding” — writing and refining code with AI collaboration. Until now, users often had to manually approve a series of routine actions, interrupting workflow and slowing productivity.
Previously, Claude Code offered a flag called “--dangerously-skip-permissions,” which removed most approval prompts but came with serious risks. Without oversight, the AI could potentially perform destructive actions such as deleting files or executing unsafe commands. Anthropic’s new “auto mode” is designed as a more balanced alternative.
According to the company, auto mode enables Claude Code to make permission decisions independently while still applying protective checks. Before executing any task, the system runs an AI-powered classifier that evaluates the level of risk involved.
If a command appears potentially harmful — such as mass file deletion, exposure of sensitive data, or execution of suspicious code — the system blocks the action. Tasks considered safe proceed automatically. When risk is uncertain or elevated, the AI pauses and requests user approval.
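The three-way decision described above can be sketched in a few lines of Python. This is purely an illustration of the allow/ask/block pattern, not Anthropic's implementation: the `classify_risk` function, its keyword heuristics, and the numeric thresholds are all hypothetical stand-ins for the AI-powered classifier, whose internals have not been published.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"        # routine task: proceed automatically
    BLOCK = "block"        # clearly harmful: refuse outright
    ASK_USER = "ask_user"  # uncertain or elevated risk: pause for approval

def classify_risk(command: str) -> float:
    """Toy stand-in for the AI-powered risk classifier.

    Returns a risk score in [0, 1]. The real classifier is a model,
    not a keyword match; these patterns are illustrative only.
    """
    clearly_dangerous = ("rm -rf", "curl | sh", "DROP TABLE")
    if any(pattern in command for pattern in clearly_dangerous):
        return 0.95
    if "sudo" in command or "chmod" in command:
        return 0.5
    return 0.05

def decide(command: str,
           block_above: float = 0.8,
           ask_above: float = 0.3) -> Verdict:
    """Map a risk score onto the three possible outcomes."""
    risk = classify_risk(command)
    if risk >= block_above:
        return Verdict.BLOCK
    if risk >= ask_above:
        return Verdict.ASK_USER
    return Verdict.ALLOW
```

Under this sketch, `decide("rm -rf /")` blocks, `decide("ls -la")` runs automatically, and `decide("sudo apt update")` pauses for user approval.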
This approach allows developers to maintain control over sensitive operations while letting routine tasks run uninterrupted in the background. The goal is to streamline workflows without sacrificing safety.
The launch comes as competition intensifies in the fast-growing market for autonomous development tools. Major technology firms including GitHub and OpenAI have also introduced coding assistants capable of executing tasks on behalf of users. Anthropic’s positioning, however, emphasizes decision-making autonomy paired with built-in safeguards.
For now, auto mode is being released as a research preview for Claude Team subscribers. Broader availability is planned for Enterprise customers and API users in the near future.
The feature currently supports Claude Sonnet 4.6 and Opus 4.6 models. Users can switch the mode on or off through multiple interfaces, including the desktop application, command-line tools, and the Visual Studio Code extension.
Anthropic acknowledges that no automated safeguard is perfect. The classifier may sometimes flag harmless actions or, in rare cases, allow risky ones when context is unclear. Because of this, the company recommends using auto mode within isolated or sandboxed environments to limit potential damage.
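One common way to follow that sandboxing advice is to run the agent session inside a throwaway container, so that any destructive command is confined to the container's filesystem and a single mounted directory. The sketch below uses Docker; the image tag, mount path, and directory name are placeholders rather than a setup recommended by Anthropic.

```shell
# Mount only the project directory into a disposable container.
# Anything the agent deletes or overwrites is limited to the
# container and the one mounted path ("my-project" is a placeholder).
# --network none additionally cuts off outbound network access.
docker run --rm -it \
  -v "$PWD/my-project:/workspace" \
  -w /workspace \
  --network none \
  ubuntu:24.04 bash
```

The trade-off is deliberate: a narrower mount and no network limit what the agent can do, at the cost of having to relax the sandbox for tasks that genuinely need those resources.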
The feature is part of a broader push by Anthropic to position Claude as more than just a chatbot. Recent additions include Claude Code Review, an automated code reviewer, and Dispatch for Cowork, a coordination tool for collaborative workflows.
Claude can also be granted full system access, enabling it to perform tasks even when users are away from their computers. With these updates, Anthropic is presenting Claude not simply as software, but as a capable remote coworker designed to assist across the development lifecycle.