- Anthropic announced “dreaming” for Claude Managed Agents on May 6, 2026 at its Code with Claude event.
- The feature schedules time for agents to reflect on past interactions and updates memory to shape future behavior — surfacing recurring mistakes, converging workflows, and shared team preferences.
- Dreaming can update agent memories automatically, or developers can manually review and approve incoming changes.
- Available in research preview; developers must request access through the Claude Platform.
What Happened
Anthropic announced “dreaming” for Claude Managed Agents on May 6, 2026 at its Code with Claude event. The feature, which builds on the existing Managed Agents memory capability launched April 8, 2026, lets agents “self-improve” by reviewing past sessions for patterns. Anthropic product team members demonstrated the feature during the keynote, referring to completed memory-update runs as finished “dreams.”
Why It Matters
Self-improving agents have been a long-running aspiration in the agentic-AI category, with most implementations to date relying on either fine-tuning loops (slow, expensive) or prompt-engineering accumulation (brittle). Anthropic’s framing of “dreaming” as a scheduled memory-restructuring process attempts a middle path: lightweight enough to run automatically, structured enough to surface patterns a single agent run cannot see. The approach is particularly relevant for long-running work and multi-agent orchestration where pattern detection across sessions is the binding constraint.
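To make the contrast concrete, here is a toy sketch of why raw prompt accumulation degrades while a scheduled restructuring pass stays high-signal. It is illustrative only: the note format, the recurrence threshold, and the dedup-by-count heuristic are assumptions made for clarity, not anything Anthropic has described.

```python
# Toy contrast: naive accumulation vs. a scheduled restructuring pass.
# The heuristic here (keep notes that recur across sessions) is an
# assumption for illustration, not Anthropic's implementation.
from collections import Counter

def accumulate(memory: list[str], session_notes: list[str]) -> list[str]:
    # Naive accumulation: memory grows without bound and duplicates pile up.
    return memory + session_notes

def restructure(memory: list[str], min_count: int = 3) -> list[str]:
    # A scheduled pass: collapse notes that recur across sessions into
    # single high-signal rules and drop one-off noise.
    counts = Counter(memory)
    return [f"RULE: {note}" for note, n in counts.items() if n >= min_count]

memory: list[str] = []
for session in range(5):
    memory = accumulate(memory, ["prefer small PRs", f"one-off detail {session}"])

print(len(memory))          # 10 entries, half of them one-off noise
print(restructure(memory))  # ['RULE: prefer small PRs']
```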
Technical Details
Anthropic’s stated framing: “Dreaming surfaces patterns that a single agent can’t see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team. It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration.”
The feature operates as scheduled reflection rather than as an always-on process. Once dreaming is enabled, the system can either automatically update agent memories to shape future behavior, or developers can select which incoming changes to approve. The latter is the safer mode for production deployments, where automatic memory drift could cause regressions.
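A minimal sketch of those two modes, under stated assumptions: every name below (MemoryEdit, MemoryStore, apply_dream) is invented for illustration, since Anthropic has not published the dreaming API surface. This models the developer-side choice, not the platform itself.

```python
# Hypothetical sketch of auto-apply vs. approval-gated memory updates.
# All names (MemoryEdit, MemoryStore, apply_dream) are invented for
# illustration; this is not the Claude Platform API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MemoryEdit:
    key: str        # which memory entry the finished dream proposes to change
    new_value: str  # the restructured content
    rationale: str  # the pattern surfaced (e.g., a recurring mistake)

@dataclass
class MemoryStore:
    entries: dict[str, str] = field(default_factory=dict)

def apply_dream(
    store: MemoryStore,
    proposed: list[MemoryEdit],
    auto_apply: bool,
    approve: Callable[[MemoryEdit], bool] = lambda edit: False,
) -> list[MemoryEdit]:
    """Apply a finished dream's proposed edits and return what landed.

    auto_apply=True  -> every edit is written to memory immediately.
    auto_apply=False -> each edit is gated on a developer approval callback,
                        the safer mode for production deployments.
    """
    applied = []
    for edit in proposed:
        if auto_apply or approve(edit):
            store.entries[edit.key] = edit.new_value
            applied.append(edit)
    return applied
```

In gated mode, the approval callback is the natural hook for a review queue: surface each edit's rationale to a human, log the decision, and keep a rollback path in case an approved change causes the kind of regression noted above.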
Dreaming sits alongside two other expanded Managed Agents features announced at Code with Claude: outcomes (which keep agents on-task) and multi-agent orchestration (which handles delegating to other agents). The combined update is positioned as ensuring “agents stay accurate and are constantly learning.”
The naming choice, “dreaming,” continues Anthropic’s pattern of anthropomorphizing its products. ZDNet’s coverage notes that Anthropic published a Constitution for Claude in January 2026, with language suggesting preparation for Claude developing consciousness. In April 2026 the company mapped Claude’s morality across 300,000+ anonymized conversations. In August 2025 Anthropic launched a feature letting Claude end toxic conversations for “its own well-being,” not as a user-safety intervention. When Anthropic retired Opus 3 in January, the company set it up with a Substack to blog post-retirement. The “dreaming” naming sits inside that documented anthropomorphization arc.
Who’s Affected
Developers building production agents on the Claude Platform gain a structured path to long-term memory refinement without managing fine-tuning loops manually. Teams running multi-agent systems gain pattern detection across agent populations. Competing agentic-AI platforms, including OpenAI’s Codex with Symphony orchestration (covered earlier this week), Google’s Gemini Agent (announced the same week), and Meta’s Autodata, face a Claude-side pattern they will likely adapt under their own naming. AI-safety researchers gain a new artifact to study: how agent memory restructuring under “dreaming” affects long-horizon behavior, particularly whether dreaming amplifies or corrects subtle alignment drift.
What’s Next
Dreaming is in research preview; developers must request access. Anthropic typically promotes features from research preview to general availability within two to four months. Watch for case studies from Anthropic’s Code with Claude attendees showing concrete productivity outcomes from dreaming-enabled agent populations. The deeper open question is whether “dreaming” becomes industry shorthand for scheduled memory-refinement features, or whether other labs deliberately choose less anthropomorphic naming to differentiate.