## Decorator

### @metric
Register a metric resource that yields a `Metric` instance and optionally a final value.

Signature:

| Name | Type | Default | Description |
|---|---|---|---|
| fn | Callable \| None | None | Generator or async generator function |
| scope | Scope \| str | Scope.SESSION | Lifecycle scope: `"case"`, `"suite"`, or `"session"` |
- Must yield a `Metric` instance first (this gets injected)
- Optionally yield a final calculated value (becomes `MetricResult.value`)
- Can assert on metric properties after all data is collected
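The yield protocol above can be sketched with a minimal stand-in. Everything here is an assumption for illustration: the `Metric` class, the `latency_metric` name, and the driver code at the bottom stand in for machinery the real decorator and framework supply.

```python
import statistics

class Metric:
    """Minimal stand-in for the framework's Metric class (assumption)."""
    def __init__(self):
        self.raw_values = []

    def add_record(self, value):
        self.raw_values.append(value)

    @property
    def mean(self):
        return statistics.mean(self.raw_values)

def latency_metric():
    m = Metric()
    yield m                 # 1) yield the Metric first -- this gets injected
    # Runs after all data is collected:
    assert m.mean < 200     # 2) assert on metric properties in teardown
    yield m.mean            # 3) optionally yield a final calculated value

# Roughly what the framework does over the metric's lifecycle:
gen = latency_metric()
metric = next(gen)          # obtain the injected Metric
metric.add_record(120)      # data recorded during the run
metric.add_record(150)
final = next(gen)           # resume: teardown assertions + final value
print(final)                # 135
```

If the generator never yields a second value, the framework simply sees the generator finish after teardown.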
## Classes

### Metric

Thread-safe class for recording data and computing statistical properties.

Attributes:

| Name | Type | Description |
|---|---|---|
| name | str \| None | Metric name (auto-set by decorator) |
| metadata | MetricMetadata | Collection metadata (scope, contributors, timestamps) |
Methods:

| Method | Parameters | Description |
|---|---|---|
| add_record | value: int \| float \| bool \| list[int \| float \| bool] \| tuple[int \| float \| bool, ...] | Record one or more data points (numeric/bool only) |
Values are recorded either when you call `add_record(...)` manually or when you use `with metrics(...)` to record assertion pass/fail outcomes.
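A sketch of the `add_record` semantics described above, accepting a scalar or a list/tuple of numeric/bool values. The class body is a stand-in, not the real thread-safe implementation:

```python
class Metric:
    """Stand-in illustrating add_record's accepted input shapes."""
    def __init__(self):
        self.raw_values = []

    def add_record(self, value):
        # Lists and tuples contribute one data point per element;
        # scalars (int/float/bool) contribute a single data point.
        if isinstance(value, (list, tuple)):
            self.raw_values.extend(value)
        else:
            self.raw_values.append(value)

m = Metric()
m.add_record(3)               # single int
m.add_record([1.5, 2.5])      # list of floats
m.add_record((True, False))   # tuple of bools
print(m.raw_values)           # [3, 1.5, 2.5, True, False]
```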
Properties:

| Property | Type | Description |
|---|---|---|
| raw_values | list[int \| float \| bool] | All recorded values |
| len | int | Number of values |
| sum | float | Sum of all values |
| min | float | Minimum value |
| max | float | Maximum value |
| mean | float | Arithmetic mean |
| median | float | Median (50th percentile) |
| std | float | Sample standard deviation |
| variance | float | Sample variance |
| pstd | float | Population standard deviation |
| pvariance | float | Population variance |
| p25 | float | 25th percentile |
| p50 | float | 50th percentile (median) |
| p75 | float | 75th percentile |
| p90 | float | 90th percentile |
| p95 | float | 95th percentile |
| p99 | float | 99th percentile |
| percentiles | list[float] | All 99 percentiles (p1 to p99) |
| ci_90 | tuple[float, float] | 90% confidence interval (lower, upper) |
| ci_95 | tuple[float, float] | 95% confidence interval (lower, upper) |
| ci_99 | tuple[float, float] | 99% confidence interval (lower, upper) |
| counter | Counter[int \| float \| bool] | Frequency count of each unique value |
| distribution | dict[int \| float \| bool, float] | Share of each unique value (0–1) |
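These statistical properties can all be reproduced with the standard library, which is useful for sanity-checking reported numbers. This is illustrative only; it does not claim to match the framework's internal implementation:

```python
import statistics
from collections import Counter

values = [1, 2, 2, 3, 4]

mean = statistics.mean(values)        # arithmetic mean -> 2.4
median = statistics.median(values)    # 50th percentile
std = statistics.stdev(values)        # sample standard deviation
pstd = statistics.pstdev(values)      # population standard deviation

# 99 cut points dividing the data into 100 groups (p1..p99):
percentiles = statistics.quantiles(values, n=100)
p50 = percentiles[49]                 # 50th percentile

counter = Counter(values)             # frequency of each unique value
distribution = {v: c / len(values) for v, c in counter.items()}
print(distribution)                   # {1: 0.2, 2: 0.4, 3: 0.2, 4: 0.2}
```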
### MetricMetadata

Metadata tracking metric lifecycle and contributors.

Attributes:

| Name | Type | Description |
|---|---|---|
| last_item_recorded_at | datetime \| None | Timestamp of the most recent value |
| first_item_recorded_at | datetime \| None | Timestamp of the first value |
| scope | Scope | Lifecycle scope (SESSION, SUITE, CASE) |
| collected_from_merits | set[str] | Names of merit functions that contributed |
| collected_from_resources | set[str] | Names of resources that contributed |
| collected_from_cases | set[str] | Case IDs that contributed |
### MetricResult

Result captured when a metric resource completes.

Attributes:

| Name | Type | Description |
|---|---|---|
| name | str | Metric name |
| metadata | MetricMetadata | Snapshot of metadata at completion |
| assertion_results | list[AssertionResult] | Assertions evaluated in metric teardown |
| value | CalculatedValue | Final yielded value (or NaN if none) |
MetricResult instances are automatically collected and included in merit run reports.
Example:
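A hedged sketch of the completion flow. The `MetricResult` dataclass, the `finish` driver, and the `ratio_metric` generator are stand-ins invented for illustration; only the NaN-when-no-final-value behavior mirrors the table above.

```python
import math
from dataclasses import dataclass, field

@dataclass
class MetricResult:
    """Stand-in mirroring two of the documented fields (assumption)."""
    name: str
    value: float = math.nan   # NaN when no final value is yielded

def finish(name, gen):
    """Drive a metric generator to completion, as a framework might."""
    next(gen)                 # first yield: the injected Metric
    try:
        final = next(gen)     # optional second yield: final value
    except StopIteration:
        final = math.nan
    return MetricResult(name=name, value=final)

def ratio_metric():
    records = [1, 1, 0, 1]    # data a real Metric would have collected
    yield records             # stand-in for yielding the Metric instance
    yield sum(records) / len(records)

result = finish("ratio", ratio_metric())
print(result)                 # MetricResult(name='ratio', value=0.75)
```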
## Context Manager

### metrics()

Attach metrics to assertions for automatic data collection.

Signature:

| Name | Type | Description |
|---|---|---|
| metrics | Metric | Metrics to record assertion outcomes into |
- When an assertion passes inside the context, records `True` to all metrics
- When an assertion fails, records `False` to all metrics
- Multiple assertions in one context each contribute a data point
- Works with both standard assertions and predicate assertions
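The behavior above can be approximated with a stand-in context manager. Note one deliberate simplification: the real `metrics()` hooks into the framework's own assertions, whereas this sketch uses an explicit `check(...)` callable to stand in for an assertion, since plain `assert` statements cannot be intercepted this way in stock Python.

```python
from contextlib import contextmanager

class Metric:
    """Minimal stand-in for the framework's Metric (assumption)."""
    def __init__(self):
        self.raw_values = []

    def add_record(self, value):
        self.raw_values.append(value)

@contextmanager
def metrics(*targets):
    def check(condition):
        # Each evaluated assertion contributes one data point per metric.
        for m in targets:
            m.add_record(bool(condition))
        assert condition
    yield check

accuracy = Metric()
with metrics(accuracy) as check:
    check(2 + 2 == 4)         # passes -> records True to all metrics
    try:
        check(1 > 2)          # fails  -> records False, then raises
    except AssertionError:
        pass
print(accuracy.raw_values)    # [True, False]
```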