
Overview

Merit uses environment variables to configure its features. Here's what each feature requires:
| Feature        | Required Variables | Get Access           |
| -------------- | ------------------ | -------------------- |
| Basic Testing  | None               | Built-in             |
| AI Predicates  | MERIT_API_KEY      | Contact us           |
| Error Analyzer | Your own LLM keys  | Direct from provider |

AI Predicates (LLM-as-a-Judge)

AI predicates (has_facts, has_topics, etc.) call the Merit cloud service.

Required

.env
MERIT_API_BASE_URL=https://api.appmerit.com
MERIT_API_KEY=your_merit_api_key_here

Optional

.env
# Enable debugging mode (includes reasoning in responses)
MERIT_DEBUGGING_MODE=true

# Advanced: Customize connection pooling
MERIT_API_KEEPALIVE_EXPIRY=30
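
Boolean flags like MERIT_DEBUGGING_MODE arrive as strings, so tooling that reads them has to normalize the value. A minimal sketch of the common convention (the exact set of values Merit itself accepts is an assumption here):

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Interpret common truthy strings ('1', 'true', 'yes', 'on')."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in {"1", "true", "yes", "on"}

# env_flag("MERIT_DEBUGGING_MODE") -> True when the variable is "true"
```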

Get API Access

Contact us to get your Merit API key.

Error Analyzer (Bring Your Own Key)

The Merit Analyzer uses your own LLM provider credentials. It currently supports Anthropic models, either via the direct API or via AWS Bedrock.

Anthropic (Direct)

.env
ANTHROPIC_API_KEY=sk-ant-...
MODEL_VENDOR=anthropic
INFERENCE_VENDOR=anthropic
Get your key: console.anthropic.com

AWS Bedrock (Anthropic)

.env
MODEL_VENDOR=anthropic
INFERENCE_VENDOR=aws
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1
Configure AWS credentials: AWS Documentation

Loading Environment Variables

Merit automatically loads .env files from:
  1. Current working directory
  2. Project root (if different)
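
If you want to load a .env file by hand in a script, or inspect what Merit would pick up, here is a stdlib-only sketch of a simple KEY=VALUE parser (Merit's own loader may differ, e.g. in quoting or override rules):

```python
import os

def load_env_file(path: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and # comment lines."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")  # split on the FIRST '='
            values[key.strip()] = value.strip()
    os.environ.update(values)  # note: this overwrites existing process vars
    return values
```

Full-featured loaders such as python-dotenv also handle quoting, `export` prefixes, and inline comments; prefer one of those in production code.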

.env File

Create a .env file in your project root:
.env
# AI Predicates
MERIT_API_BASE_URL=https://api.appmerit.com
MERIT_API_KEY=your_merit_api_key_here

# Error Analyzer (choose one provider)
ANTHROPIC_API_KEY=sk-ant-...
MODEL_VENDOR=anthropic
INFERENCE_VENDOR=anthropic

Shell Export

Or export directly in your shell:
export MERIT_API_KEY=your_merit_api_key_here
export MERIT_API_BASE_URL=https://api.appmerit.com

CI/CD

Store keys as secrets in your CI/CD platform.
GitHub Actions:
env:
  MERIT_API_KEY: ${{ secrets.MERIT_API_KEY }}
  MERIT_API_BASE_URL: https://api.appmerit.com
GitLab CI:
variables:
  MERIT_API_BASE_URL: "https://api.appmerit.com"
  MERIT_API_KEY: $MERIT_API_KEY  # From CI/CD variables

Quick Reference

I want to use AI predicates in my tests

.env
MERIT_API_BASE_URL=https://api.appmerit.com
MERIT_API_KEY=your_merit_api_key_here
Get API key →

I want to run the error analyzer

ANTHROPIC_API_KEY=sk-ant-...
MODEL_VENDOR=anthropic
INFERENCE_VENDOR=anthropic

I want both features

.env
# For AI predicates
MERIT_API_BASE_URL=https://api.appmerit.com
MERIT_API_KEY=your_merit_api_key_here

# For error analyzer
ANTHROPIC_API_KEY=sk-ant-...
MODEL_VENDOR=anthropic
INFERENCE_VENDOR=anthropic

Troubleshooting

“MERIT_API_KEY not configured”

You need Merit API credentials to use AI predicates:
  1. Contact us to get an API key
  2. Add to .env file:
    MERIT_API_KEY=your_key_here
    MERIT_API_BASE_URL=https://api.appmerit.com
    

“API key not configured” (Analyzer)

The analyzer needs LLM provider credentials:
  1. Get an API key from a supported provider (currently Anthropic, direct or via AWS Bedrock)
  2. Add to .env file:
    ANTHROPIC_API_KEY=sk-ant-...
    MODEL_VENDOR=anthropic
    INFERENCE_VENDOR=anthropic
    

Variables not loading

  • ✅ Check .env file is in project root
  • ✅ Check file is named exactly .env (not .env.txt)
  • ✅ Restart terminal/IDE after creating .env
  • ✅ Check for typos in variable names
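
To rule out a loading problem quickly, check what your process actually sees. This small helper (not part of Merit; run it from the same shell or IDE as your tests) reports any required variable that is unset or empty:

```python
import os

def missing_env_vars(required=("MERIT_API_KEY", "MERIT_API_BASE_URL")):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    print("All set!" if not missing else "Missing: " + ", ".join(missing))
```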

Security Best Practices

Never commit API keys to version control!

Add to .gitignore

.gitignore
.env
.env.local
.env.*.local
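
You can confirm the rule works before committing anything: `git check-ignore` exits with status 0 (and, with -v, prints the matching rule) when a path is ignored:

```shell
# Exit status 0 and a printed rule mean .env is ignored.
git check-ignore -v .env || echo ".env is NOT ignored -- add it to .gitignore"

# If .env was committed before being ignored, untrack it (keeps the local file):
# git rm --cached .env
```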

Use Different Keys Per Environment

# Development
.env

# Production
.env.production

# CI/CD
# Store in CI/CD secrets, not in files

Rotate Keys Regularly

  • Change keys periodically
  • Revoke compromised keys immediately
  • Use separate keys for dev/staging/production

Next Steps