- OpenAI released Symphony on May 4, 2026 — an open-source specification that turns task trackers like Linear into a command center for Codex agents.
- Internal OpenAI teams reportedly saw merged pull requests jump 6x in the first three weeks; Linear founder Karri Saarinen also reported a spike in new workspaces in the project-planning tool after the release.
- Each open ticket gets a dedicated Codex agent and workspace; agents can also create new tickets autonomously when they spot issues outside the current scope.
- The community has already shipped forks, including one pairing Anthropic’s Claude Code with GitHub Issues; the reference implementation is in Elixir, and Codex also implemented the spec in TypeScript, Go, Rust, Java, and Python as stress tests.
What Happened
OpenAI released Symphony, an open-source specification with reference implementation, on May 4, 2026. Symphony turns task trackers like Linear into a command center for OpenAI’s Codex agents. Instead of developers juggling multiple Codex sessions and assigning tasks manually, agents pull open tickets themselves; humans review the results.
Why It Matters
OpenAI’s announcement names the bottleneck plainly: “human attention.” That framing captures a structural shift in agentic-AI deployment. As agents become capable enough to handle real work in parallel, the human supervisory layer, not the agents, becomes the constraint. OpenAI’s internal numbers (6x more merged pull requests in the first three weeks, with the gains attributed to letting agents pull work rather than receive assignments) quantify this dynamic directly. Because Symphony is open source, the same architecture is available to any team running Codex agents, and the community has already adapted it for Anthropic’s Claude Code with GitHub Issues, an early sign of cross-vendor adoption.
Technical Details
Symphony uses Linear as a state machine. Tickets move through statuses like “Todo,” “In Progress,” “Review,” and “Merging.” Symphony watches the board and ensures every active ticket has an agent assigned. If an agent crashes or stalls, Symphony spins it back up. Only unblocked tickets are picked up, allowing a task tree to run in parallel — for example, a React upgrade ticket only kicks off after an upstream Vite migration completes. Tickets can scope larger than a single code change: some spawn multiple pull requests across repos, and others are pure research or analysis tasks with no code at all.
Agents can create new tickets on their own when they spot performance problems, refactoring opportunities, or other issues outside the current ticket. Product managers and designers can submit feature requests directly and receive a review package with a video walkthrough — all without checking out the repo.
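A sketch of the autonomous-filing behavior, under stated assumptions: the `file_followup` helper, ticket-ID scheme, and field names are all hypothetical, standing in for whatever call Symphony makes against Linear’s API.

```python
import uuid

def file_followup(board: dict, title: str, found_during: str) -> str:
    """Hypothetical helper: an agent files a new ticket for an issue it
    spotted outside its current scope (e.g. a performance problem),
    rather than expanding the ticket it is working on."""
    ticket_id = f"auto-{uuid.uuid4().hex[:8]}"
    board[ticket_id] = {
        "title": title,
        "status": "Todo",          # enters the normal pickup queue
        "origin": found_during,    # ticket the agent was working on
    }
    return ticket_id

board = {}
new_id = file_followup(board, "N+1 query in /projects endpoint",
                       found_during="react-upgrade")
print(board[new_id]["status"])  # -> Todo
```

The key design point is that the follow-up lands in “Todo” like any human-filed ticket, so it flows through the same gating and review process instead of being silently folded into the agent’s current change.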
The reference implementation centers on two files: SPEC.md, which defines the desired behavior, and WORKFLOW.md, which describes the development workflow handed to agents (accept ticket, check out repo, set status, attach PR, attach video). Editing the workflow file changes what agents do without any code change. The reference implementation is written in Elixir for its concurrency and process-supervision tooling. Codex generated the implementation in one shot, and the OpenAI team also had it implemented in TypeScript, Go, Rust, Java, and Python as stress tests for the spec.
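The file-driven workflow can be illustrated with a minimal sketch. The contents of `WORKFLOW_MD` and the numbered-list parsing here are assumptions; the real WORKFLOW.md is prose handed to the agent, not a format Symphony is documented to parse.

```python
# Hypothetical stand-in for WORKFLOW.md: the steps Symphony hands each agent.
WORKFLOW_MD = """\
1. Accept the ticket and set status to "In Progress".
2. Check out the repository.
3. Implement the change and open a pull request.
4. Attach the PR and a video walkthrough to the ticket.
5. Set status to "Review".
"""

def parse_steps(markdown: str) -> list[str]:
    """Extract the numbered steps. Editing the file changes agent behavior
    with no code change, which is the property the announcement highlights."""
    steps = []
    for line in markdown.splitlines():
        line = line.strip()
        if line[:1].isdigit() and "." in line:
            steps.append(line.split(".", 1)[1].strip())
    return steps

for step in parse_steps(WORKFLOW_MD):
    print("-", step)
```

Because the workflow lives in a plain file rather than in code, a team can insert a step (say, “run the linter before opening a PR”) by editing one line of Markdown.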
Before Symphony, OpenAI developers ran several Codex sessions in parallel and chased progress on each; running more than three to five sessions simultaneously was nearly impossible without context switching tanking productivity. The lesson OpenAI shares: agents are hard to treat as fixed nodes in a state machine, because the models keep getting better and can tackle bigger problems than the template plans for. The team now prefers to hand agents goals rather than strict processes, the way a manager gives an employee a result to deliver, not a step-by-step playbook.
What Symphony does not cover: ambiguous problems and work calling for judgment are still handled directly by developers in interactive Codex sessions. OpenAI does not plan to maintain Symphony as a standalone product; the company sees it as a reference. Code and specs are on GitHub.
Who’s Affected
Engineering teams running Codex agents at scale gain a deployment pattern that directly addresses the bottleneck of supervising many parallel sessions. Linear founder Karri Saarinen reports a spike in new workspaces after Symphony’s release, suggesting Linear’s growth is partly driven by Symphony adoption. Anthropic benefits indirectly: the community-built Claude Code + GitHub Issues fork shows the pattern transfers across vendors. The forward-deployed-engineer model that Anthropic’s new Blackstone JV emphasizes overlaps with Symphony’s pattern of embedding agents in customer organizations that pull from a structured backlog. Atlassian’s Jira and ClickUp face implicit pressure: as Linear becomes the canonical agent-orchestration surface for AI-native teams, alternative project trackers may need to publish similarly agent-friendly state-machine specifications.
What’s Next
Watch for community forks supporting Jira, ClickUp, GitHub Projects, and Notion. Symphony’s spec-driven approach — release a spec, let agents implement it across multiple languages — is itself a reference for how AI labs may distribute future tools. OpenAI has flagged ChatGPT workspace agents (rolled out in mid-April) as a related project; the convergence between workspace agents and Symphony-managed Codex agents will likely produce a more integrated agent-orchestration product surface from OpenAI in the coming months.