Report Structure
Merit Analyzer generates an interactive HTML report with three main sections (a generation sketch follows this list):

- Summary - High-level overview
- Error Clusters - Grouped failures
- Test Details - Individual test results
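To produce the report, run the analyzer on an exported results file and open the HTML it writes; a minimal sketch using the command and filenames from the example workflow at the end of this page:

```bash
# Analyze exported test results; per the example workflow below,
# this produces merit_report.html.
merit-analyzer analyze results.csv

# Open the report in the default browser (use `open` on macOS).
xdg-open merit_report.html
```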
Summary Section
The top of the report shows a high-level overview of the run, including failure counts and the error clusters detected.

Error Clusters
Each cluster represents a group of similar failures.

Cluster Header
- Name: Descriptive cluster name
- Count: Number of tests in this cluster
- Pattern: Common error pattern (regex)
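Because the pattern is a plain regex, it can be reused outside the report, for example to count matching rows in the exported results. A sketch; the pattern itself is illustrative, not real analyzer output:

```bash
# Count exported result rows matching a cluster's error pattern.
# The pattern is illustrative; copy the real one from the cluster header.
grep -cE 'JSONDecodeError' results.csv
```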
Problematic Code
For each cluster, the analyzer identifies likely root causes (see the sketch after this list):

- File and line numbers - Where the issue likely originates
- Issue description - What’s probably wrong
- Recommendations - How to fix it
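Since each finding includes a file and line number, you can inspect the flagged region straight from the shell; a sketch with an illustrative path and line range:

```bash
# Print the lines around the location flagged by the analyzer.
# File path and line range are illustrative, not analyzer output.
sed -n '35,50p' src/summarizer.py
```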
Affected Tests
Each cluster also lists its affected tests, so you can see exactly which tests a fix should resolve.

Example Clusters
Cluster: “Hallucination in Summaries”
Cluster: “JSON Parsing Failures”
Interactive Features
Clickable Links
File paths in the report are clickable file:// URLs.
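For example, a flagged file might be linked roughly like this (the path is illustrative); the same URL also opens from the shell:

```bash
# Open a report link target outside the browser (illustrative path).
xdg-open "file:///home/user/project/src/summarizer.py"
```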
Filtering (TODO)

TODO: Document filtering controls once implemented. Expected features:

- Filter by cluster
- Filter by test file
- Search test names
- Show/hide passed tests
Sorting (TODO)
TODO: Document sorting options. Expected options:

- Sort by failure count
- Sort by severity
- Sort by file location
- Sort by test name
Using the Report
1. Start with High-Impact Clusters
Focus first on the clusters with the most failures: fixing one shared root cause there resolves the largest number of tests.

2. Review Recommendations
Each cluster has specific, actionable recommendations.

3. Click Through to Code
Use the file:// links to examine the problematic code.

4. Verify Fixes
After fixing (a command sketch follows this list):

- Re-run failed tests
- Generate new report
- Verify cluster is resolved
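A minimal sketch of that loop, assuming a pytest-based suite that exports results with the pytest-csv plugin (the test selection is illustrative):

```bash
# Re-run the previously failing tests and export fresh results.
# pytest + pytest-csv are assumptions; use your suite's own runner/export.
pytest tests/test_summaries.py --csv results.csv

# Regenerate the report and confirm the cluster has shrunk or disappeared.
merit-analyzer analyze results.csv
```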
Report Best Practices
Share with Team
The HTML report is self-contained, so it can be shared as a single file: attach it to an issue, email it, or drop it in a shared folder.

Track Over Time
Generate reports regularly and archive them so failure trends can be compared across runs.
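One simple way to do this is to file each report away under a dated name; a sketch (the reports/ directory is illustrative):

```bash
# Regenerate the report and archive it under today's date.
merit-analyzer analyze results.csv
mkdir -p reports
mv merit_report.html "reports/merit_report_$(date +%Y-%m-%d).html"
```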
Include in CI/CD

Upload the report as a build artifact so each run's analysis is preserved.
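The exact upload step depends on your CI system; as a shell sketch, stage the report wherever your CI job collects artifacts (the ci-artifacts/ directory is illustrative):

```bash
# In a CI job: run the analyzer, then stage the report in the
# directory the CI system is configured to collect as artifacts.
merit-analyzer analyze results.csv
mkdir -p ci-artifacts
cp merit_report.html ci-artifacts/
```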
Example Workflow

1. Tests fail - CI run fails with 25 errors
2. Export results - Save results.csv
3. Run analyzer - merit-analyzer analyze results.csv
4. Review report - Open merit_report.html
5. Identify patterns - See 5 error clusters
6. Fix root cause - Address top cluster (12 failures)
7. Re-run tests - Verify fixes
8. Repeat - Handle remaining clusters
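The workflow above, pulled together as a single shell sketch (the test runner and CSV export are assumptions; substitute your suite's own commands):

```bash
# Steps 1-2: run the suite and export results; `|| true` lets the script
# continue past failures so the analyzer still runs (pytest-csv assumed).
pytest --csv results.csv || true

# Step 3: cluster the failures and generate merit_report.html.
merit-analyzer analyze results.csv

# Step 4: review the report (use `open` on macOS).
xdg-open merit_report.html
```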