Perfect for SaaS apps, AI chatbots, content generation tools, and any LLM-powered application that needs usage-based billing.

Quick Start

Get started with automatic LLM token tracking in just 2 minutes:
Step 1: Install the SDK

Install the Dodo Payments Ingestion Blueprints:
npm install @dodopayments/ingestion-blueprints
Step 2: Get Your API Keys

You’ll need two API keys:
  • Dodo Payments API Key: Get it from Dodo Payments Dashboard
  • LLM Provider API Key: From OpenAI, Anthropic, Groq, etc.
Store your API keys securely in environment variables. Never commit them to version control.
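For example, both keys can live in a local env file that is excluded from version control (the variable names here match the snippets in this guide):

```shell
# .env — never commit this file; add it to .gitignore
export DODO_PAYMENTS_API_KEY="your-dodo-payments-api-key"
export OPENAI_API_KEY="your-openai-api-key"
```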
Step 3: Create a Meter in Dodo Payments

Before tracking usage, create a meter in your Dodo Payments dashboard:
  1. Log in to the Dodo Payments Dashboard
  2. Navigate to Products → Meters
  3. Click “Create Meter”
  4. Configure your meter:
    • Meter Name: Choose a descriptive name (e.g., “LLM Token Usage”)
    • Event Name: Set a unique event identifier (e.g., llm.chat_completion)
    • Aggregation Type: Select sum to add up token counts
    • Over Property: Choose what to track:
      • inputTokens - Track input/prompt tokens only
      • outputTokens - Track output/completion tokens only
      • totalTokens - Track combined input + output tokens
The Event Name you set here must match exactly what you pass to the SDK (case-sensitive).
For detailed instructions, see the Usage-Based Billing Guide.
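The meter from the steps above, summarized as a config sketch (illustrative only — meters are created in the dashboard, and these field names are assumptions, not an API schema):

```json
{
  "name": "LLM Token Usage",
  "event_name": "llm.chat_completion",
  "aggregation_type": "sum",
  "over_property": "totalTokens"
}
```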
Step 4: Track Token Usage

Wrap your LLM client and start tracking automatically:
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import OpenAI from 'openai';

// 1. Create your LLM client (normal way)
const openai = new OpenAI({ 
  apiKey: process.env.OPENAI_API_KEY 
});

// 2. Create tracker ONCE at startup
const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode', // Use 'live_mode' for production
  eventName: 'llm.chat_completion' // Match your meter's event name
});

// 3. Wrap & use - automatic tracking!
const client = tracker.wrap({ 
  client: openai, 
  customerId: 'customer_123' 
});

// Every API call is now automatically tracked
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});

// ✨ Usage automatically sent to Dodo Payments!
console.log('Tokens used:', response.usage);
That’s it! Every API call now automatically tracks token usage and sends events to Dodo Payments for billing.

Configuration

Tracker Configuration

Create a tracker once at application startup with these required parameters:
apiKey
string
required
Your Dodo Payments API key. Get it from the API Keys page.
apiKey: process.env.DODO_PAYMENTS_API_KEY
environment
string
required
The environment mode for the tracker.
  • test_mode - Use for development and testing
  • live_mode - Use for production
environment: 'test_mode' // or 'live_mode'
Always use test_mode during development to avoid affecting production metrics.
eventName
string
required
The event name that triggers your meter. Must match exactly what you configured in your Dodo Payments meter (case-sensitive).
eventName: 'llm.chat_completion'
This event name links your tracked usage to the correct meter for billing calculations.

Wrapper Configuration

When wrapping your LLM client, provide these parameters:
client
object
required
Your LLM client instance (OpenAI, Anthropic, Groq, etc.).
client: openai
customerId
string
required
The unique customer identifier for billing. This should match your customer ID in Dodo Payments.
customerId: 'customer_123'
Use your application’s user ID or customer ID to ensure accurate billing per customer.
metadata
object
Optional additional data to attach to the tracking event. Useful for filtering and analysis.
metadata: {
  feature: 'chat',
  userTier: 'premium',
  sessionId: 'session_123',
  modelVersion: 'gpt-4'
}

Complete Configuration Example

import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import OpenAI from 'openai';

// Initialize LLM client
const openai = new OpenAI({ 
  apiKey: process.env.OPENAI_API_KEY 
});

// Create tracker with full configuration
const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: process.env.NODE_ENV === 'production' ? 'live_mode' : 'test_mode',
  eventName: 'llm.chat_completion'
});

// Wrap client with metadata
const trackedClient = tracker.wrap({
  client: openai,
  customerId: 'customer_123',
  metadata: {
    feature: 'chat_completion',
    userTier: 'premium',
    sessionId: 'sess_abc123',
    endpoint: '/api/chat'
  }
});

// Use normally - tracking is automatic
const response = await trackedClient.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log('Response:', response);
Automatic Tracking: The SDK automatically tracks token usage in the background without modifying the response. Your code remains clean and identical to using the original provider SDKs.

Supported Providers

The LLM Blueprint works seamlessly with all major LLM providers:
Track token usage from OpenAI’s GPT models automatically.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'openai.usage'
});

const client = tracker.wrap({ 
  client: openai, 
  customerId: 'user_123' 
});

// All OpenAI methods work automatically
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Explain quantum computing' }]
});

console.log('Total tokens:', response.usage.total_tokens);
Tracked Metrics:
  • prompt_tokens → inputTokens
  • completion_tokens → outputTokens
  • total_tokens → totalTokens
  • Model name
Track token usage from Anthropic’s Claude models.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'anthropic.usage'
});

const client = tracker.wrap({ 
  client: anthropic, 
  customerId: 'user_123' 
});

const response = await client.messages.create({
  model: 'claude-sonnet-4-0',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Explain machine learning' }]
});

console.log('Input tokens:', response.usage.input_tokens);
console.log('Output tokens:', response.usage.output_tokens);
Tracked Metrics:
  • input_tokens → inputTokens
  • output_tokens → outputTokens
  • totalTokens calculated as input + output
  • Model name
Track ultra-fast LLM inference with Groq.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import Groq from 'groq-sdk';

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'groq.usage'
});

const client = tracker.wrap({ 
  client: groq, 
  customerId: 'user_123' 
});

const response = await client.chat.completions.create({
  model: 'llama-3.1-8b-instant',
  messages: [{ role: 'user', content: 'What is AI?' }]
});

console.log('Tokens:', response.usage);
Tracked Metrics:
  • prompt_tokens → inputTokens
  • completion_tokens → outputTokens
  • total_tokens → totalTokens
  • Model name
Track usage with the Vercel AI SDK for universal LLM support.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import { generateText } from 'ai';
import { google } from '@ai-sdk/google';

const llmTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'aisdk.usage',
});

const client = llmTracker.wrap({
  client: { generateText },
  customerId: 'customer_123',
  metadata: {
    model: 'gemini-2.0-flash',
    feature: 'chat'
  }
});

const response = await client.generateText({
  model: google('gemini-2.0-flash'),
  prompt: 'Explain neural networks',
  maxOutputTokens: 500
});

console.log('Usage:', response.usage);
Tracked Metrics:
  • Native AI SDK usage format
  • Automatically normalized to standard format
  • Model name
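The per-provider field mappings above amount to a small normalization step: OpenAI and Groq report prompt_tokens/completion_tokens, Anthropic reports input_tokens/output_tokens, and totals are computed when missing. A sketch of that idea (illustrative only — normalizeUsage is a hypothetical helper, not the SDK's actual internals):

```typescript
// The normalized shape that ends up in tracked event metadata
interface NormalizedUsage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
}

function normalizeUsage(raw: Record<string, number | undefined>): NormalizedUsage {
  // OpenAI/Groq naming first, then Anthropic naming as a fallback
  const inputTokens = raw.prompt_tokens ?? raw.input_tokens ?? 0;
  const outputTokens = raw.completion_tokens ?? raw.output_tokens ?? 0;
  // Compute the total when the provider does not report one (e.g. Anthropic)
  const totalTokens = raw.total_tokens ?? inputTokens + outputTokens;
  return { inputTokens, outputTokens, totalTokens };
}
```

For example, an Anthropic-style `{ input_tokens: 10, output_tokens: 25 }` normalizes to `{ inputTokens: 10, outputTokens: 25, totalTokens: 35 }`.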

Advanced Usage

Multiple Providers

Track usage across different LLM providers with separate trackers:
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import OpenAI from 'openai';
import Groq from 'groq-sdk';
import Anthropic from '@anthropic-ai/sdk';

// Create separate trackers for different providers
const openaiTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'live_mode',
  eventName: 'openai.usage'
});

const groqTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'live_mode',
  eventName: 'groq.usage'
});

const anthropicTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'live_mode',
  eventName: 'anthropic.usage'
});

// Initialize clients
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Wrap clients
const trackedOpenAI = openaiTracker.wrap({ client: openai, customerId: 'user_123' });
const trackedGroq = groqTracker.wrap({ client: groq, customerId: 'user_123' });
const trackedAnthropic = anthropicTracker.wrap({ client: anthropic, customerId: 'user_123' });

// Use whichever provider you need
const response = await trackedOpenAI.chat.completions.create({...});
Use different event names for different providers to track usage separately in your meters.

Express.js API Integration

Complete example of integrating LLM tracking into an Express.js API:
import express from 'express';
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import OpenAI from 'openai';

const app = express();
app.use(express.json());

// Initialize OpenAI client
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Create tracker once at startup
const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: process.env.NODE_ENV === 'production' ? 'live_mode' : 'test_mode',
  eventName: 'api.chat_completion'
});

// Chat endpoint with automatic tracking
app.post('/api/chat', async (req, res) => {
  try {
    const { message, userId } = req.body;
    
    // Validate input
    if (!message || !userId) {
      return res.status(400).json({ error: 'Missing message or userId' });
    }
    
    // Wrap client for this specific user
    const trackedClient = tracker.wrap({
      client: openai,
      customerId: userId,
      metadata: { 
        endpoint: '/api/chat',
        timestamp: new Date().toISOString()
      }
    });
    
    // Make LLM request - automatically tracked
    const response = await trackedClient.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: message }],
      temperature: 0.7
    });
    
    const completion = response.choices[0].message.content;
    
    res.json({ 
      message: completion,
      usage: response.usage
    });
  } catch (error) {
    console.error('Chat error:', error);
    res.status(500).json({ error: 'Internal server error' });
  }
});

app.listen(3000, () => {
  console.log('Server running on port 3000');
});

What Gets Tracked

Every LLM API call automatically sends a usage event to Dodo Payments with the following structure:
{
  "event_id": "llm_1673123456_abc123",
  "customer_id": "customer_123",
  "event_name": "llm.chat_completion",
  "timestamp": "2024-01-08T10:30:00Z",
  "metadata": {
    "inputTokens": 10,
    "outputTokens": 25,
    "totalTokens": 35,
    "model": "gpt-4",
    "provider": "openai"
  }
}

Event Fields

event_id
string
Unique identifier for this specific event. Automatically generated by the SDK. Format: llm_[timestamp]_[random]
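A generator matching that format could look like this (a sketch of the format only, not the SDK's implementation):

```typescript
// Builds an ID shaped like llm_1673123456_abc123:
// a unix-seconds timestamp plus a short random base-36 suffix.
function makeEventId(): string {
  const timestamp = Math.floor(Date.now() / 1000);
  const random = Math.random().toString(36).slice(2, 8);
  return `llm_${timestamp}_${random}`;
}
```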
customer_id
string
The customer ID you provided when wrapping the client. Used for billing.
event_name
string
The event name that triggers your meter. Matches your tracker configuration.
timestamp
string
ISO 8601 timestamp when the event occurred.
metadata
object
Token usage and additional tracking data:
  • inputTokens - Number of input/prompt tokens used
  • outputTokens - Number of output/completion tokens used
  • totalTokens - Total tokens (input + output)
  • model - The LLM model used (e.g., “gpt-4”)
  • provider - The LLM provider (if included in wrapper metadata)
  • Any custom metadata you provided when wrapping the client
Your Dodo Payments meter uses the metadata fields (especially totalTokens, inputTokens, or outputTokens) to calculate usage and billing.
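As a worked example of the sum aggregation over totalTokens: three events for the same customer with totalTokens of 35, 120, and 80 add up to 235 billable units for that period. A sketch of that calculation (illustrative — the real aggregation happens inside Dodo Payments):

```typescript
// Events as received by the meter (only the fields used here)
const events = [
  { customer_id: 'customer_123', metadata: { totalTokens: 35 } },
  { customer_id: 'customer_123', metadata: { totalTokens: 120 } },
  { customer_id: 'customer_123', metadata: { totalTokens: 80 } },
];

// sum aggregation over the totalTokens property, as configured on the meter
const billableUnits = events.reduce((sum, e) => sum + e.metadata.totalTokens, 0);

console.log(billableUnits); // 235
```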