## What are AI Predicates?
AI Predicates are special async assertions that use LLMs to evaluate complex criteria that traditional assertions can’t handle:

- ✅ Semantic fact checking - Not just string matching
- ✅ Hallucination detection - Find unsupported claims
- ✅ Style and structure matching - Beyond regex
- ✅ Policy compliance - Natural language rules
## Quick Example
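A minimal sketch of the calling pattern. To keep it runnable offline, `has_facts` and `PredicateResult` are local stand-ins that mimic the behavior described on this page - the real predicate is imported from the Merit client library and calls the Merit API:

```python
import asyncio
from dataclasses import dataclass


# Stand-in for Merit's PredicateResult: truthy when the check passes,
# with a confidence score attached.
@dataclass
class PredicateResult:
    passed: bool
    confidence: float

    def __bool__(self) -> bool:
        return self.passed


async def has_facts(actual: str, facts: list[str]) -> PredicateResult:
    """Stub: the real predicate asks an LLM whether `actual` states each fact."""
    hits = sum(1 for fact in facts if fact.lower() in actual.lower())
    return PredicateResult(passed=hits == len(facts),
                           confidence=hits / max(len(facts), 1))


async def main() -> None:
    answer = "Paris is the capital of France."
    result = await has_facts(answer, ["Paris is the capital of France"])
    assert result  # truthy when all facts are supported
    print(result.confidence)


asyncio.run(main())
```

In a real test the pattern is the same: `await` the predicate, then `assert` on the result.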
## Available Predicates

### Fact Checking

- `has_facts` - Check if text contains specific facts
- `has_unsupported_facts` - Detect hallucinations
- `has_conflicting_facts` - Find contradictions
- `matches_facts` - Bidirectional fact matching

### Topics & Policy

- `has_topics` - Check topical coverage
- `follows_policy` - Verify policy compliance

### Style & Structure

- `matches_writing_style` - Check writing style consistency
- `matches_writing_layout` - Verify document structure
## Setup
AI Predicates call the Merit cloud service for evaluation. You need to configure:

- Merit API credentials (for predicate evaluation)
- LLM provider credentials (used by the Merit service)
### Merit API Configuration
Set these environment variables in your `.env` file:
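A minimal `.env` sketch - the variable name `MERIT_API_KEY` is an assumption for illustration, so confirm the exact name against your Merit onboarding instructions:

```
# Merit API credentials -- variable name is illustrative, not confirmed
MERIT_API_KEY=your-merit-api-key
```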
### Get API Access
Contact us to get your Merit API key. Merit automatically loads `.env` files from your project directory.
How it works: You call Merit’s API with your API key. The Merit service handles all LLM provider interactions internally - you don’t need your own OpenAI/Anthropic keys for predicates.
### For Error Analyzer (Different Setup)
If you’re using Merit Analyzer (not predicates), see the Analyzer documentation - it uses your own LLM credentials (bring your own key).

## How It Works
When you call an AI predicate:

1. Request - The actual and reference text are sent to the Merit API
2. Evaluation - An LLM evaluates the text based on the predicate type
3. Response - A boolean result with a confidence score is returned
4. Assertion - Use the result in standard Python `assert` statements
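The four steps above might exchange payloads shaped like the following sketch - every field name here is hypothetical and only illustrates the flow, not the actual Merit API schema:

```python
# Hypothetical payloads for a single has_facts call; the real Merit API
# may use entirely different field names.
request = {
    "predicate": "has_facts",
    "actual": "Paris is the capital of France.",
    "reference": ["Paris is the capital of France"],
}
response = {
    "passed": True,       # boolean consumed by the assert statement
    "confidence": 0.97,   # how confident the evaluating LLM is
}

assert response["passed"]  # step 4: a plain Python assertion
```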
## Predicate Results
When used in assertions, predicates return simple boolean values. For detailed information such as the confidence score, access the returned `PredicateResult` objects directly.

## Async Context
All AI predicates are async and must be called with `await`:
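A sketch of the awaiting pattern, using local stand-ins for two of the predicates listed above (the real versions call the Merit API). `asyncio.gather` is optional, but since each predicate is an awaitable, independent checks can run concurrently:

```python
import asyncio
from dataclasses import dataclass


# Local stand-ins mirroring the PredicateResult described above.
@dataclass
class PredicateResult:
    passed: bool
    confidence: float

    def __bool__(self) -> bool:
        return self.passed


async def has_facts(actual: str, facts: list[str]) -> PredicateResult:
    # Stub: the real predicate asks an LLM whether each fact is stated.
    ok = all(f.lower() in actual.lower() for f in facts)
    return PredicateResult(ok, 1.0 if ok else 0.0)


async def has_topics(actual: str, topics: list[str]) -> PredicateResult:
    # Stub: the real predicate asks an LLM about topical coverage.
    ok = any(t.lower() in actual.lower() for t in topics)
    return PredicateResult(ok, 0.9)


async def main() -> None:
    answer = "Paris, the capital of France, is known for the Louvre."
    # Each predicate must be awaited; gather awaits both concurrently.
    facts_ok, topics_ok = await asyncio.gather(
        has_facts(answer, ["capital of France"]),
        has_topics(answer, ["museums", "Louvre"]),
    )
    assert facts_ok and topics_ok
    # Detailed results remain available on each PredicateResult.
    print(facts_ok.confidence, topics_ok.confidence)


asyncio.run(main())
```

Calling a predicate without `await` returns a coroutine object, not a result - the assertion would then always pass, so forgetting `await` is a silent bug.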
## Cost Considerations
AI predicates make LLM API calls:

- ~1-2 cents per assertion (depending on provider and model)
- Use strategically for complex checks
- Regular assertions are free and instant
## Combining Predicates
Use multiple predicates together when a single check is not enough - for example, verifying both factual accuracy and policy compliance in the same test.

## Error Handling
TODO: Document error handling for API failures and the expected behavior.

## Performance Tips
- Cache results - Don’t re-evaluate the same text
- Batch tests - Run multiple tests together
- Use strict mode wisely - Lenient mode is faster
- Regular assertions first - Filter obvious failures cheaply
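The first tip can be as simple as memoizing verdicts by input, sketched here with a stand-in predicate (in the real setup, the predicate call is the billable one):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class PredicateResult:
    passed: bool
    confidence: float

    def __bool__(self) -> bool:
        return self.passed


calls = 0  # counts how many (billable) evaluations actually run


async def has_facts(actual: str, facts: tuple[str, ...]) -> PredicateResult:
    # Stub standing in for the real Merit API call.
    global calls
    calls += 1
    ok = all(f.lower() in actual.lower() for f in facts)
    return PredicateResult(ok, 1.0 if ok else 0.0)


_cache: dict[tuple[str, tuple[str, ...]], PredicateResult] = {}


async def cached_has_facts(actual: str, facts: tuple[str, ...]) -> PredicateResult:
    # Re-use an earlier verdict for identical inputs instead of paying twice.
    key = (actual, facts)
    if key not in _cache:
        _cache[key] = await has_facts(actual, facts)
    return _cache[key]


async def main() -> None:
    text = "Paris is the capital of France."
    facts = ("capital of France",)
    assert await cached_has_facts(text, facts)
    assert await cached_has_facts(text, facts)  # served from cache


asyncio.run(main())
print(calls)  # 1
```

The cache key must include everything that affects the verdict (text, facts, and any mode flags), and caching only makes sense within a test session - the underlying text may be re-evaluated differently by the LLM across runs.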