LLM Prompt Regression Testing Tool for CI/CD Pipelines
Teams shipping LLM features test them less rigorously than login forms. A prompt tweak that fixes one issue silently breaks another, and a broken prompt still returns HTTP 200 while its content goes subtly wrong. Promptfoo leads the category but was just acquired by OpenAI (March 2026), creating uncertainty. DeepEval and LangWatch exist, but their CI/CD integration is still awkward. Developers need prompt testing that feels like unit testing.
Promptfoo's acquisition by OpenAI is your opening. Build the vendor-neutral, MIT-licensed alternative. The key insight: most teams don't need 50 evaluation metrics. They need three checks: does the output match the expected format, does it contain the right entities, and did quality regress from the last version. Ship a YAML config, a CLI command, and a GitHub Action. Nothing else.
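A config for such a tool could be as small as this sketch. Everything here is illustrative, not an existing schema: the file name, key names (`prompts`, `tests`, `assert`), and assertion types are assumptions about what the three checks might look like in YAML.

```yaml
# prompttest.yaml — hypothetical config; all keys are illustrative
prompts:
  - file: prompts/summarize.txt

tests:
  - name: invoice-summary
    vars:
      document: fixtures/invoice_001.txt
    assert:
      - type: format        # check 1: output must parse as JSON
        value: json
      - type: contains      # check 2: required entities must appear
        values: ["total", "due_date"]
      - type: regression    # check 3: score may not drop vs. last green run
        max_drop: 0.05
```

The deliberate constraint is that each test maps one-to-one onto the three checks in the pitch, so a failing PR annotation can say exactly which of the three regressed.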
Landscape (4 existing solutions)
LLM evaluation tools are maturing fast, but they're designed for ML teams running dedicated eval suites, not for product engineers who have added one LLM feature to an otherwise traditional app. Promptfoo's OpenAI acquisition creates a vacuum for an independent, lightweight prompt regression tool. The gap is "pytest for prompts": define expected behaviors, run them against prompt changes, fail the PR if quality drops.
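The three checks themselves are simple enough to sketch in a few lines of Python. This is a minimal illustration of the core logic, not the tool's implementation; the function names, the regex-based format check, and the 0.05 tolerance are all assumptions.

```python
import re

def check_format(output: str, pattern: str) -> bool:
    """Check 1: does the output match an expected shape (here, a regex)?"""
    return re.fullmatch(pattern, output.strip(), re.DOTALL) is not None

def check_entities(output: str, entities: list[str]) -> bool:
    """Check 2: does the output mention every required entity?"""
    return all(e.lower() in output.lower() for e in entities)

def check_regression(score: float, baseline: float, tolerance: float = 0.05) -> bool:
    """Check 3: has quality dropped more than `tolerance` below the last green run?"""
    return score >= baseline - tolerance

# Example run against a stubbed LLM output.
output = '{"city": "Paris", "country": "France"}'
assert check_format(output, r"\{.*\}")            # looks like a JSON object
assert check_entities(output, ["Paris", "France"])  # required entities present
assert check_regression(score=0.91, baseline=0.93)  # within tolerance of baseline
```

In CI, the CLI would run these assertions for every test case and exit non-zero on the first failure, which is all a GitHub Action needs to block the PR.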