A developer publishing under the handle “theswerd” — whose domain, swerdlow.dev, suggests the surname Swerdlow — released a public guide and manifesto in 2026 titled AI Code, establishing a structural framework for maintaining code quality as AI agents generate increasing proportions of production software. The guide is available at aicode.swerdlow.dev and is packaged as an installable agent skill with native support for the Cursor code editor. A confirmed full author name was not available at time of publication.
- The guide introduces a two-tier function architecture — “semantic” and “pragmatic” — as a structural discipline for code written by AI agents.
- Semantic functions must be minimal, input-output explicit, free of unintended side effects, and self-documenting without inline comments.
- Data models should structurally prevent invalid states, eliminating the per-call overhead of checking optional or loosely typed fields across a codebase.
- The guide is deployable as an agent skill via npx skills add theswerd/aicode and integrates directly with Cursor.
What Happened
In 2026, a developer known online as “theswerd” published AI Code, a public guide at aicode.swerdlow.dev that lays out concrete conventions for how AI coding agents should write and structure code. The document frames itself as both a manifesto and a practical reference for human developers working alongside AI agents. Its opening argument: “The only thing that sloppifies a codebase faster than 1 coding agent is a swarm of them.”
The guide distinguishes itself from general-purpose style guides by being deployable as an agent skill — meaning the conventions it describes can be fed directly to AI agents as behavioral instructions, not just communicated to human developers who then supervise those agents.
Why It Matters
AI coding tools that generate and modify code autonomously are now common in professional development environments. When multiple agents operate concurrently in the same codebase without shared structural conventions, inconsistencies in function design, naming, and data handling accumulate at a rate that outpaces typical code review capacity.
This guide addresses that architectural problem directly, treating codebase structure — rather than model behavior or prompt design — as the primary variable developers can control. The framing shifts responsibility from what AI agents are capable of producing to what constraints humans impose on them before work begins.
Technical Details
The guide defines two function categories with distinct roles. Semantic functions are described as “the building blocks of any codebase”: each should be as minimal as possible, take in all required inputs, and return all necessary outputs directly. Side effects are explicitly undesirable unless they are the function’s stated purpose, so that semantic functions remain safe to reuse without callers needing to understand their internals. Examples span a deliberately wide range — from quadratic_formula() to retry_with_exponential_backoff_and_run_y_in_between<Y: func, X: Func>(x: X, y: Y) — to show that the principle applies regardless of complexity. The guide states semantic functions should be “extremely unit testable” and require no surrounding comments; the function signature and name are expected to serve as the complete definition.
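The guide names quadratic_formula() as one of its semantic-function examples but does not publish an implementation. As an illustration only, a function in that spirit might look like the following Python sketch: all inputs arrive as parameters, all outputs are returned directly, there are no side effects, and the signature is meant to carry the full contract.

```python
import math

def quadratic_formula(a: float, b: float, c: float) -> tuple[float, float]:
    # Returns the two real roots of ax^2 + bx + c = 0.
    # Assumes a != 0 and a non-negative discriminant; no hidden state,
    # no I/O, so the function is trivially unit testable in isolation.
    discriminant = math.sqrt(b * b - 4 * a * c)
    return ((-b + discriminant) / (2 * a), (-b - discriminant) / (2 * a))
```

Because the function depends only on its arguments, any caller can reuse it without reading its body, which is the property the guide emphasizes for semantic functions.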
Pragmatic functions are higher-level wrappers that contain the messy, process-specific logic of production systems. The guide gives examples including provision_new_workspace_for_github_repo(repo, user) and handle_user_signup_webhook(). These are described as functions that “are expected to change completely over time,” and the guide recommends doc comments that surface non-obvious behaviors — such as “fails early on balance less than 10” — rather than restating what the function name already implies. Readers are explicitly advised to treat those comments skeptically, since developers modifying the function may not update them.
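The guide lists handle_user_signup_webhook() as a pragmatic-function example without showing its body. The Python sketch below is our own illustration of the pattern, with a hypothetical semantic helper (create_user) and an injected send_email callable: the doc comment surfaces a non-obvious behavior rather than restating the name, as the guide recommends.

```python
from typing import Callable, Optional

def create_user(email: str) -> dict:
    # Hypothetical semantic helper: builds a user record purely from its input.
    return {"email": email, "active": True}

def handle_user_signup_webhook(payload: dict,
                               send_email: Callable[[str], None]) -> Optional[dict]:
    """Process a user-signup webhook payload.

    Non-obvious behavior: returns None and sends nothing when the
    payload carries no "email" field, rather than raising.
    """
    email = payload.get("email")
    if email is None:
        return None
    user = create_user(email)
    send_email(user["email"])
    return user
```

Per the guide's caveat, a reader should verify such a doc comment against the body before relying on it, since later edits to the function may leave the comment stale.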
On data modeling, the guide states: “The shape of your data should make wrong states impossible.” It argues that models permitting invalid field combinations create cascading maintenance costs: “every optional field is a question the rest of the codebase has to answer every time it touches that data, and every loosely typed field is an invitation for callers to pass something that looks right but isn’t.”
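The guide's data-modeling excerpt is truncated, so the following Python sketch is our illustration of the stated principle, not an example from the guide. Instead of a single record with optional fields (where "shipped but no tracking number" is representable), each state is its own type carrying exactly the data valid for it:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Pending:
    pass  # a pending order has no tracking data by construction

@dataclass(frozen=True)
class Shipped:
    tracking_number: str  # required: a shipped order always has one

# The union replaces a loose record like {status: str, tracking: Optional[str]},
# so invalid field combinations cannot be constructed at all.
OrderStatus = Union[Pending, Shipped]

def describe(status: OrderStatus) -> str:
    if isinstance(status, Shipped):
        return f"shipped ({status.tracking_number})"
    return "pending"
```

Callers no longer answer the "is the optional field set?" question on every access; the type itself rules the invalid combinations out.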
Who’s Affected
The guide is directly relevant to engineering teams using agentic AI coding tools, particularly those on Cursor, which is explicitly supported via the skill integration. Teams running multiple AI agents concurrently — or building automated development pipelines — are the primary audience, since the structural drift the guide addresses scales with the number of agents involved.
The installable skill format means architectural standards can be enforced at the agent level rather than relying solely on human code review. Individual developers using AI pair-programming tools can also apply the framework by following its conventions manually when reviewing or directing AI output.
What’s Next
The guide can be integrated into AI agents using npx skills add theswerd/aicode, with Cursor support available out of the box. The source material was partially truncated at time of fetching — the data modeling section ends mid-sentence in the available text — so the full scope of the guide’s recommendations on type enforcement and model design may extend beyond what is covered here.
No empirical benchmarks or case studies accompany the guide; its recommendations rest on stated principles and illustrative code examples. Whether adoption produces measurable improvements in codebase maintainability under AI-agent workloads has not been assessed as of publication.