Anthropic’s Claude Code and OpenAI’s Codex were supposed to make software development easier. Instead, according to a Bloomberg investigation published February 26, the tools have triggered what engineers are calling a “productivity panic” — a high-pressure environment where managers equate more AI interactions with better output, and developers are burning out trying to keep up.
The concept of “vibe coding,” coined by former OpenAI researcher Andrej Karpathy in February 2025, promised a future where engineers could build software simply by chatting with AI models. A year later, the reality looks different. Companies have begun tracking metrics like “interactions per day” with coding agents, using them as proxies for productivity. Intuit’s CTO reported that engineers are 30 percent more productive as measured by code velocity — but engineers themselves say the measurement misses what matters.
The core problem is a familiar one: management practices that prioritize visible inputs over meaningful outcomes. AI coding agents compress development time and expand the scope of what a single engineer can attempt, but that expansion comes with cognitive overhead. Engineers report spending significant time reviewing, debugging, and correcting AI-generated code — work that doesn’t show up in interaction counts. When every engineer on a team has access to the same tools, the competitive advantage they provide erodes quickly, leaving only the intensified pace.
Claude Code became the focal point of industry discussion in January 2026, following a significant update in late 2025 that expanded its context window and coding capabilities. OpenAI’s Codex received a similar update around the same time. Other tools in the space — including Google’s Gemini and DeepSeek — have added to the acceleration. The result is a market where AI-assisted development is no longer optional but expected, and the bar for individual output keeps rising.
The situation exposes a tension that predates AI tools entirely. Organizations with strong engineering cultures report that Claude Code and similar agents genuinely help — researchers and applied social scientists have adopted them for tasks outside traditional software development. But in organizations where management already struggled to evaluate engineering work, AI tools have become surveillance mechanisms with a productivity veneer.
Several engineering leaders have pushed back publicly, arguing that the “panic” is manufactured by executives who don’t understand the tools they’re mandating. The real question isn’t whether AI coding agents boost productivity — by most accounts, they do — but whether organizations will use that boost to improve working conditions or simply demand more output from fewer people. Early indicators suggest the latter is winning.
