Waterfall

Meter Every LLM Call From Your Application Code

TypeScript and Python clients that let you meter every LLM call, query wallet balances, and settle per request. Three lines of code to add usage-based billing to your AI product.

What You Get

TypeScript & Python

Official clients for the two languages that matter most in AI. Full type safety, async support, and idiomatic APIs for each ecosystem.

Drop-In LLM Metering

Swap one import and every LLM call is automatically metered and attributed. Works with OpenAI, Anthropic, and any OpenRouter-compatible provider.
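The idea behind drop-in metering can be sketched in plain Python (this is an illustrative mock, not the Waterfall SDK — `MeteredClient`, `FakeLLM`, and the `complete` signature are assumptions for the example):

```python
# Sketch of drop-in metering: wrap an LLM client so every call is
# recorded and attributed before the response is returned.
from dataclasses import dataclass, field


@dataclass
class UsageRecord:
    model: str
    prompt_tokens: int
    completion_tokens: int


@dataclass
class MeteredClient:
    """Proxies any client exposing complete(model, prompt) -> (text, usage)."""
    inner: object
    records: list = field(default_factory=list)

    def complete(self, model: str, prompt: str) -> str:
        text, usage = self.inner.complete(model, prompt)
        # Attribute usage to this call before handing the text back.
        self.records.append(UsageRecord(model,
                                        usage["prompt_tokens"],
                                        usage["completion_tokens"]))
        return text


class FakeLLM:
    """Stand-in for an OpenAI/Anthropic-style client."""
    def complete(self, model, prompt):
        return "ok", {"prompt_tokens": len(prompt.split()),
                      "completion_tokens": 1}


client = MeteredClient(FakeLLM())
client.complete("gpt-4o", "hello metered world")
```

Because the wrapper preserves the inner client's call shape, application code only changes where the client is constructed.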

Wallet Snapshots

Query wallet balances from your application code. Let agents check their own budget before making expensive calls — enabling cost-aware reasoning.
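A minimal sketch of what cost-aware reasoning looks like in practice (the price table and `pick_model` helper are invented for illustration; they are not part of the Waterfall API):

```python
# Assumed per-call prices in USD, purely for the example.
PRICES = {"big-model": 0.50, "small-model": 0.02}


def pick_model(wallet_balance: float, preferred: str = "big-model") -> str:
    """Cost-aware reasoning: check the budget against the preferred
    model's price and downgrade when the wallet can't cover the call."""
    if wallet_balance >= PRICES[preferred]:
        return preferred
    return "small-model"


pick_model(1.00)   # budget covers the expensive model
pick_model(0.10)   # budget forces a cheaper fallback
```

In a real agent loop, the balance would come from a wallet-snapshot query before each expensive step.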

Per-Request Settlement

Each API call settles atomically against the user's wallet via x402. No batched invoices, no webhook reconciliation, no billing edge cases.
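The settlement property described above — each request either debits in full or fails, with no partial state to reconcile — can be sketched with a simple in-memory wallet (a toy model, not the x402 protocol itself):

```python
import threading


class Wallet:
    """Toy per-request settlement: atomic check-and-debit per call,
    so there is never a batched invoice to reconcile afterwards."""

    def __init__(self, balance: int):
        self.balance = balance          # amounts in integer cents
        self._lock = threading.Lock()

    def settle(self, amount: int) -> int:
        with self._lock:                # check and debit as one atomic step
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount
            return self.balance


w = Wallet(100)
remaining = w.settle(30)
```

The lock makes the check-and-debit a single step, which is the essence of "no billing edge cases": a request can never half-settle.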

Session Key Management

Ephemeral signing keys that auto-zero on disposal. Defense-in-depth key handling so your users' funds stay secure even if a session leaks.
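The auto-zeroing pattern can be illustrated with a context manager that holds the key in a mutable buffer and overwrites it on exit (a sketch of the technique, not the SDK's implementation; note that in CPython any immutable copies of the key would not be zeroed):

```python
import secrets
from contextlib import contextmanager


@contextmanager
def session_key(nbytes: int = 32):
    """Ephemeral signing key held in a mutable bytearray and zeroed on
    disposal, so the secret doesn't linger in memory after the session."""
    key = bytearray(secrets.token_bytes(nbytes))
    try:
        yield key
    finally:
        for i in range(len(key)):   # overwrite the buffer in place
            key[i] = 0


with session_key() as key:
    leaked = key                    # even a reference that escapes...
# ...sees only zeros once the session is disposed
```

This is the defense-in-depth idea: a leaked session reference after disposal reveals nothing.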

Streaming Support

Full support for streamed LLM responses. Token usage and cost are tracked accurately even for long-running streaming completions.
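Accurate metering of a stream means counting usage as chunks arrive, then recording the tally when the stream ends. A self-contained sketch (the `fake_stream` generator stands in for a real streamed completion):

```python
def fake_stream(text: str):
    """Stand-in for a streamed completion: yields one token per chunk."""
    for token in text.split():
        yield token


def metered_stream(stream, usage: dict):
    """Re-yield chunks while counting completion tokens as they arrive,
    so cost is tracked even for long-running streams."""
    count = 0
    for chunk in stream:
        count += 1
        yield chunk                          # pass the chunk through unchanged
    usage["completion_tokens"] = count       # final tally once the stream ends


usage = {}
tokens = list(metered_stream(fake_stream("a b c d"), usage))
```

Because the wrapper is itself a generator, the caller consumes the stream exactly as before; the usage dict is populated only after the last chunk.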

Ready to Get Started?

Start metering LLM usage and building usage-based pricing for your AI product today.