## What is Merit Analyzer?
Merit Analyzer is a tool that helps you understand test failures at scale. It:

- **Clusters similar errors** - Groups related failures together
- **Identifies patterns** - Finds common error patterns
- **Locates problematic code** - Points to likely root causes
- **Generates insights** - Provides actionable recommendations
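Merit Analyzer uses an LLM for clustering, but the core idea can be sketched with a toy normalizer that groups error messages sharing a pattern. This is an illustration only, not the tool's actual algorithm:

```python
import re
from collections import defaultdict

def normalize(error_message):
    """Collapse quoted values and numbers so similar errors share a key."""
    msg = re.sub(r"'[^']*'|\"[^\"]*\"", "<value>", error_message)
    return re.sub(r"\d+", "<n>", msg)

def cluster_errors(messages):
    """Group raw error messages by their normalized pattern."""
    clusters = defaultdict(list)
    for msg in messages:
        clusters[normalize(msg)].append(msg)
    return dict(clusters)

errors = [
    "TimeoutError: request took 31s",
    "TimeoutError: request took 45s",
    "KeyError: 'user_id'",
]
for pattern, members in cluster_errors(errors).items():
    print(f"{pattern}: {len(members)} test(s)")
# → TimeoutError: request took <n>s: 2 test(s)
# → KeyError: <value>: 1 test(s)
```

An LLM-based clusterer goes further than this regex sketch by grouping semantically similar messages that share no literal pattern.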
## When to Use It
Use Merit Analyzer when you:

- ✅ Have many test failures to understand
- ✅ See recurring error patterns
- ✅ Need to prioritize fixes
- ✅ Want to identify systemic issues
## How It Works
1. **Export test results** - Run tests and export to CSV
2. **Run analyzer** - Process failures with an LLM
3. **Review report** - Interactive HTML with insights
## Quick Example
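The original example is not shown here; as a sketch of step 1, the snippet below writes a minimal results CSV with the columns described under "Supported Data Format". The file name `results.csv` is hypothetical, and the command for running the analyzer on it will depend on your installation:

```python
import csv

# One failing test case in the column layout Merit Analyzer expects.
rows = [
    {
        "case_input": "What is 2 + 2?",
        "reference_value": "4",
        "output_for_assertions": "5",
        "passed": "false",
        "error_message": "AssertionError: expected 4, got 5",
    },
]

# Write the CSV export ("results.csv" is an example name).
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```

With the CSV in place, point the analyzer at it and open the generated HTML report.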
## What You Get
The analyzer generates an interactive HTML report with:

### Error Clusters

Similar errors are grouped together:

- Cluster name and pattern
- Number of affected tests
- Common characteristics
### Code Analysis

For each cluster:

- Likely problematic code locations
- Suggested fixes
- Related test cases
### Interactive Exploration

- Clickable file:// URLs to code
- Filterable test results
- Detailed error messages
- Pattern visualizations
## Example Report Structure
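The original structure listing is missing here; the outline below is a hypothetical sketch assembled only from the report contents described above:

```
Report
├── Summary (total tests, failures, clusters)
├── Error Clusters
│   ├── Cluster: <name / pattern>
│   │   ├── Affected tests (count)
│   │   ├── Common characteristics
│   │   ├── Likely code locations (file:// links)
│   │   └── Suggested fixes
│   └── ...
└── Detailed test results (filterable)
```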
## Supported Data Format
Merit Analyzer requires a CSV file with these columns:

| Column | Type | Description |
|---|---|---|
| `case_input` | any | Test input/prompt |
| `reference_value` | any | Expected output |
| `output_for_assertions` | any | Actual output |
| `passed` | bool | Test passed (true/false) |
| `error_message` | str | Error message if failed |
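A quick way to sanity-check an export against this schema before running the analyzer (a sketch; the column names come from the table above, the helper itself is not part of Merit Analyzer):

```python
import csv

# Required columns, per the table above.
REQUIRED = {"case_input", "reference_value", "output_for_assertions",
            "passed", "error_message"}

def validate_results_csv(path):
    """Return the failed rows, raising if required columns are missing."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED - set(reader.fieldnames or [])
        if missing:
            raise ValueError(f"missing columns: {sorted(missing)}")
        # "passed" is serialized as the strings "true"/"false" in the CSV.
        return [row for row in reader if row["passed"].lower() != "true"]
```

Running this before the analyzer catches malformed exports early, with a clearer error than a failed LLM run.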
## Requirements
Merit Analyzer uses LLMs for clustering and analysis. You need:

- An API key from OpenAI or Anthropic
- Environment variables configured
- A CSV file with test results
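A minimal check that a provider key is available before running the analyzer. `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are those providers' standard environment variable names; confirm the exact names Merit Analyzer reads against its own configuration docs:

```python
import os

# Standard env var names for the two supported providers (assumed here).
configured = [name for name in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY")
              if os.environ.get(name)]

if not configured:
    print("No LLM API key found - set OPENAI_API_KEY or ANTHROPIC_API_KEY.")
else:
    print("Configured providers:", ", ".join(configured))
```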