What is Tracing?
Merit includes OpenTelemetry tracing to help you:

- Debug complex test flows
- Track LLM calls and costs
- Identify performance bottlenecks
- Understand test execution paths
Enable Tracing
Run tests with tracing enabled, and Merit writes the collected spans to traces.json at the end of the run.
You can also initialize tracing programmatically:
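A minimal sketch of what programmatic initialization might look like; the `merit.tracing` module, the `init_tracing` function, and its parameter are assumptions, not a confirmed Merit API:

```python
# Hypothetical API: the module path, function name, and parameter
# are assumptions, not a confirmed Merit interface.
from merit.tracing import init_tracing

init_tracing(output_path="traces.json")
```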
Custom Service Name
Identify your tests with a custom service name:
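One plausible shape, assuming the hypothetical `init_tracing` sketched above accepts a `service_name` parameter. Because Merit's tracing is built on OpenTelemetry, the standard `OTEL_SERVICE_NAME` environment variable may also apply:

```python
# Hypothetical API: the service_name parameter is an assumption.
from merit.tracing import init_tracing

init_tracing(service_name="checkout-agent-tests")
```

@sut Decorator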
Mark your system under test for automatic tracing:

- Functions (sync and async)
- Classes (traces the `__call__` method)
- Resources
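A hypothetical function-based SUT; the import path and decorator usage are assumptions inferred from the `@sut` name used on this page:

```python
from merit import sut  # assumed import path

@sut
def answer(question: str) -> str:
    # Any OpenAI or Anthropic calls made in here are traced automatically.
    return "stubbed response"
```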
Class-Based SUT
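A sketch of a class-based SUT; per the note above, Merit traces calls through `__call__`, though the import path is an assumption:

```python
from merit import sut  # assumed import path

@sut
class Chatbot:
    # Merit traces invocations through __call__, per the note above.
    def __call__(self, prompt: str) -> str:
        return "stubbed response"
```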
Async SUT
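The same decorator on an async function, per the "sync and async" note above; again a sketch, not a confirmed API:

```python
from merit import sut  # assumed import path

@sut
async def answer(question: str) -> str:
    # Awaited LLM calls inside are traced like their sync counterparts.
    return "stubbed response"
```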
Trace Steps
Add custom trace spans to track specific operations:
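Because Merit's tracing is built on OpenTelemetry, the standard tracer API is one way to add custom spans; whether Merit also ships its own step helper is not documented here:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Work inside the block is recorded as a child span of the current trace.
with tracer.start_as_current_span("prepare-dataset"):
    dataset = [f"case-{i}" for i in range(10)]
```

Automatic Instrumentation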
Merit automatically traces:

- LLM calls - OpenAI and Anthropic API calls
- Test execution - Test start, duration, results
- Predicate evaluations - AI assertion calls
What Gets Tracked
LLM Calls
For each LLM API call, Merit records:

- Model name
- Token counts (input/output)
- Cost estimates
- Latency
- Success/failure status
Test Execution
For each test, Merit records:

- Test name and location
- Start time and duration
- Pass/fail status
- Error messages (if failed)
AI Predicates
For each predicate call, Merit records:

- Predicate type (`has_facts`, `has_topics`, etc.)
- Input/reference text lengths
- API latency
- Result (passed/failed)
- Confidence score
Example with Tracing
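A hedged end-to-end sketch combining the pieces above; the import paths and the `has_facts` signature are assumptions inferred from names used on this page:

```python
# Hypothetical APIs throughout: imports and signatures are assumptions.
from merit import sut, has_facts

@sut
def answer(question: str) -> str:
    # The OpenAI/Anthropic call that would live here is traced automatically:
    # model, token counts, cost estimate, latency, and success/failure status.
    return "Paris is the capital of France."

def test_capital_question():
    response = answer("What is the capital of France?")
    # The predicate evaluation is traced too: type, latency, result, confidence.
    assert has_facts(response, facts=["Paris is the capital of France"])
```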
Viewing Traces (TODO)
TODO: Document trace viewing and export options. Expected integrations:

- Export to Jaeger
- Export to Zipkin
- Console output
- JSON export for analysis
Cost Tracking
Use tracing to track LLM costs:

- Total tokens used
- Estimated cost per test
- Most expensive operations
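One way to tally costs after a run is to post-process the exported traces.json. This sketch assumes a JSON array of spans carrying OpenTelemetry GenAI-style attribute keys; Merit's actual export schema may differ:

```python
import json

# Assumed layout: a JSON array of spans, each with an "attributes" mapping.
with open("traces.json") as f:
    spans = json.load(f)

# "gen_ai.usage.*" follows OpenTelemetry's GenAI semantic conventions;
# whether Merit uses these exact keys is an assumption.
tokens_in = sum(s["attributes"].get("gen_ai.usage.input_tokens", 0) for s in spans)
tokens_out = sum(s["attributes"].get("gen_ai.usage.output_tokens", 0) for s in spans)
print(f"input tokens: {tokens_in}, output tokens: {tokens_out}")
```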
Performance Analysis
Identify slow operations:
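A sketch of finding the slowest spans in the exported file, under the same assumed traces.json layout, with nanosecond start/end timestamps:

```python
import json

with open("traces.json") as f:
    spans = json.load(f)  # assumed: a JSON array of span objects

# Assumed keys: "name", "start_time", "end_time" (nanoseconds).
slowest = sorted(spans, key=lambda s: s["end_time"] - s["start_time"], reverse=True)
for span in slowest[:5]:
    duration_ms = (span["end_time"] - span["start_time"]) / 1e6
    print(f'{span["name"]}: {duration_ms:.1f} ms')
```

Disabling Tracing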
Tracing has minimal overhead, but you can disable it:
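One plausible switch, assuming the hypothetical `init_tracing` sketched earlier accepts an `enabled` parameter; since the tracing is OpenTelemetry-based, the standard `OTEL_SDK_DISABLED=true` environment variable may also apply, though Merit honoring it is unverified:

```python
# Hypothetical: assumes the init_tracing sketched earlier accepts
# enabled=False; not a confirmed Merit API.
from merit.tracing import init_tracing

init_tracing(enabled=False)
```

Environment Configuration (TODO)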
TODO: Document environment variables for tracing and the expected configuration.

Integration with Observability Platforms (TODO)
TODO: Document integration with observability tools. Expected integrations:

- Jaeger
- Zipkin
- Honeycomb
- DataDog
- New Relic