Claude Code didn't get worse. The harness did. That settles one of the most common AI complaints of 2026.
The complaint that Claude Code "feels worse" can now be checked against evidence: Anthropic's postmortem shows that an LLM product's user experience can degrade because of changes in the harness, not the model weights. Three regressions were identified, all in the layer usually treated as boring infrastructure: a caching bug, a changed default for reasoning effort, and a brevity instruction added to the prompt. All three were fixed by April 20, and the incident sets a transparency precedent for other labs. It also shifts what engineers should do when users complain about AI product performance: show the harness diff, not the model card.
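To see why the harness matters, consider a minimal sketch of the layer that sits between the user and a fixed model. Everything here is illustrative: the function name, the `reasoning_effort` field, and the brevity line are assumptions for the example, not Anthropic's actual configuration. The point is that two of the three regression types above, a changed reasoning-effort default and an added brevity instruction, live entirely in code like this, while the model named in the request never changes.

```python
def build_request(prompt: str, harness_config: dict) -> dict:
    """Assemble the API request a coding harness sends around a fixed model.

    Hypothetical harness logic: the keys and defaults below are invented
    for illustration, not taken from any real product's config.
    """
    system = "You are a coding assistant."
    if harness_config.get("brevity"):
        # One added instruction like this can make answers feel "worse"
        # even though the underlying model is untouched.
        system += " Be extremely concise."
    return {
        "model": harness_config["model"],  # identical weights either way
        "system": system,
        # A silently changed default here is the second regression type.
        "reasoning_effort": harness_config.get("reasoning_effort", "high"),
        "messages": [{"role": "user", "content": prompt}],
    }

before = build_request("fix this bug", {"model": "claude-x"})
after = build_request("fix this bug", {"model": "claude-x",
                                       "reasoning_effort": "low",
                                       "brevity": True})

assert before["model"] == after["model"]  # same model weights...
assert before != after                    # ...different product behavior
```

Diffing the two requests, rather than the model card, is exactly the debugging move the postmortem points toward.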