Comprehension Debt Measurement Tool for AI-Assisted Codebases
Five independent research groups identified the same crisis in early 2026: AI agents generate code 5-7x faster than humans can understand it. An Anthropic study found AI-assisted developers scored 17% lower on comprehension quizzes. No existing dev tool measures whether teams actually understand their own codebase. The concept went viral on HN with 500+ upvotes.
Don't build another code complexity scanner. The insight is that comprehension is a TEAM property, not a code property. Integrate with incident response data (did the on-call engineer need AI help to debug?), PR review patterns (are reviewers rubber-stamping?), and onboarding metrics (can new hires explain system behavior?). The data sources already exist in most orgs.
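One of those signals, rubber-stamp reviews, can be derived from PR data most orgs already have. A minimal sketch, assuming a hypothetical `ReviewEvent` record (the schema and thresholds are illustrative, not from any real review API):

```python
from dataclasses import dataclass

# Hypothetical PR review record; field names are illustrative assumptions.
@dataclass
class ReviewEvent:
    pr_id: str
    lines_changed: int
    seconds_to_approval: float
    comment_count: int

def rubber_stamp_rate(reviews, min_read_rate=2.0):
    """Fraction of approvals too fast to have been read.

    A review is flagged when the implied reading speed exceeds
    min_read_rate lines per second and no comments were left.
    Both thresholds are guesses, not validated constants.
    """
    if not reviews:
        return 0.0
    flagged = [
        r for r in reviews
        if r.comment_count == 0
        and r.seconds_to_approval > 0
        and r.lines_changed / r.seconds_to_approval > min_read_rate
    ]
    return len(flagged) / len(reviews)
```

Feeding a quarter's worth of reviews through this yields a trend line per team, which matters more than any single absolute number.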
Landscape (3 existing solutions)
Every existing code quality tool measures properties of the code itself. Zero tools measure whether the humans responsible for the code actually understand it. The proposed metrics (time-to-root-cause, unassisted debugging rate, onboarding depth) exist as concepts but no product implements them.
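Two of those proposed metrics can be sketched from incident data alone. A minimal sketch, assuming a hypothetical `Incident` record with a root-cause timestamp delta and an AI-assist flag (both fields are assumptions for illustration):

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical incident record; the schema is an assumption, not a real tracker's API.
@dataclass
class Incident:
    minutes_to_root_cause: float
    used_ai_assist: bool  # did the on-call engineer need AI help to debug?

def time_to_root_cause(incidents):
    """Median minutes from page to identified root cause (median resists outlier incidents)."""
    if not incidents:
        return 0.0
    return median(i.minutes_to_root_cause for i in incidents)

def unassisted_debugging_rate(incidents):
    """Share of incidents the on-call resolved without AI assistance."""
    if not incidents:
        return 0.0
    return sum(not i.used_ai_assist for i in incidents) / len(incidents)
```

Onboarding depth would come from a different source (e.g. scored explain-the-system interviews), but these two fall out of data most incident trackers already record.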