LAUNCHES

Developer Builds Plain-Text Cognitive Architecture That Gives Claude Code Persistent Memory

megaone_admin · Mar 26, 2026 · 2 min read
Engine Score 7/10 (Important)

This story introduces a novel plain-text cognitive architecture built specifically for Claude Code, offering actionable insights for developers. While its industry impact is focused on the Claude ecosystem, its freshness and direct source make it a valuable update for that community.


On March 24, 2026, developer Marcio Puga released Cog, an open-source system that gives Claude Code persistent memory across sessions using nothing but plain text files and standard Unix tools. The project, which accumulated 126 points and 42 comments on Hacker News, addresses one of the most common frustrations with AI coding agents: they forget everything between sessions, repeating file reads, rediscovering architecture decisions, and losing context about ongoing work.

Cog works by defining a set of conventions, written as rules in Markdown files, that tell Claude Code how to store and retrieve information using grep, find, and git diff. There is no database, no external service, and no custom code. The architecture specifies how memory is structured in the file system, how queries route to the right context, and how automated workflows like reflection and housekeeping maintain the memory over time. A reflection pass runs twice daily to consolidate and prune stored knowledge.
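The article does not reproduce Cog's actual conventions, but the store-and-retrieve pattern it describes can be sketched with the same Unix tools. Every directory and file name below is an illustrative assumption, not Cog's real layout:

```shell
# Sketch of the plain-text memory pattern: Markdown notes on disk,
# retrieved with standard tools. The memory/ layout is hypothetical.
mkdir -p memory/decisions

# Store: the agent records a piece of knowledge as a dated Markdown note.
cat > memory/decisions/2026-03-24-auth.md <<'EOF'
# Decision: stateless JWT session auth
tags: auth, architecture
Chosen so parallel agent instances need no shared session store.
EOF

# Retrieve: route a query to the right context with grep,
# instead of re-reading the whole codebase each session.
grep -rl 'auth' memory/decisions/

# Recency: what changed since the last session comes from git itself,
# e.g. `git diff --stat HEAD~1 -- memory/` inside a repository.
```

Because the store is just files, the same grep that serves the agent also lets a human audit exactly what has been remembered.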

The practical impact is substantial. Users of similar cognitive architecture approaches report 64 to 95 percent token reduction depending on codebase size, because the agent spends fewer tokens re-reading files it has already analyzed. Some developers reported three to five times productivity gains on complex projects by running parallel Claude Code instances that share state through the file system. The system supports up to eight concurrent Claude Code instances sharing the same memory.
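The article does not say how concurrent instances avoid clobbering each other's writes. One plausible lock-free pattern, offered here purely as an assumption rather than Cog's documented design, gives each instance its own append-only note file:

```shell
# Hypothetical sketch of parallel instances sharing state through the
# file system. The per-instance-file layout is an assumption, not Cog's
# documented design; it sidesteps write contention because each
# instance appends only to the file it owns.
mkdir -p shared

# Instance A and instance B each record progress independently.
echo "- instance A: refactored the auth module" >> shared/instance-a.md
echo "- instance B: extended the test suite"    >> shared/instance-b.md

# Any instance (or a human) reads everyone's combined state at once.
cat shared/*.md
```

Partitioning writers this way means no locks or coordination service are needed, which fits the project's zero-infrastructure premise.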

The project reflects a broader shift in how developers use AI coding agents. By early 2026, developers were reportedly running Claude Code continuously, leading Anthropic to introduce weekly usage limits. The shift from interactive coding assistant to autonomous development partner requires the agent to maintain context over hours or days — something the base Claude Code architecture does not support. Cog fills that gap with what amounts to a working memory layer built entirely from text files.

Similar concepts have emerged across the AI coding ecosystem. GitLab is developing an agent orchestration platform, and the term "cognitive architect," coined by GitLab's VP of Strategy Emilio Salvador, describes the evolving role of developers who design the rules and structures that guide AI agents rather than writing code directly. The pattern is consistent: as AI agents become more capable, the human role shifts from implementation to architecture.

Cog’s minimalism is its strongest argument. Claude Code’s prompt architecture already consists of six layers: system prompt, tool definitions, runtime instructions, project context, conversation history, and user input. Adding a memory layer through plain text files rather than a custom framework means zero additional dependencies, zero infrastructure costs, and full transparency into what the agent remembers and why.


Enjoyed this story?

Get articles like this delivered daily. The Engine Room — free AI intelligence newsletter.

Join 500+ AI professionals · No spam · Unsubscribe anytime

MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Our editorial team reviews each story with rigorous oversight to deliver accurate, scored coverage of the AI industry. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
