Perfect for SaaS apps, AI chatbots, content generation tools, and any LLM-powered application that needs usage-based billing.

Quick Start

Get started with automatic LLM token tracking in just 2 minutes:
1. Install the SDK

Install the Dodo Payments Ingestion Blueprints:
npm install @dodopayments/ingestion-blueprints
2. Get Your API Keys

You’ll need two API keys:
  • Dodo Payments API Key: Get it from Dodo Payments Dashboard
  • LLM Provider API Key: From your LLM provider (OpenAI, Anthropic, Google, Groq, etc.)
Store your API keys securely in environment variables. Never commit them to version control.
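For example, you might keep the keys in a .env file that is excluded via .gitignore, load it with dotenv (as the complete example below does), and fail fast when one is missing. A minimal sketch:
// .env (git-ignored):
//   DODO_PAYMENTS_API_KEY=your_dodo_key
//   OPENAI_API_KEY=your_provider_key
import 'dotenv/config';

if (!process.env.DODO_PAYMENTS_API_KEY) {
  throw new Error('DODO_PAYMENTS_API_KEY is not set');
}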
3. Create a Meter in Dodo Payments

Before tracking usage, create a meter in your Dodo Payments dashboard:
  1. Login to Dodo Payments Dashboard
  2. Navigate to Products → Meters
  3. Click “Create Meter”
  4. Configure your meter:
    • Meter Name: Choose a descriptive name (e.g., “LLM Token Usage”)
    • Event Name: Set a unique event identifier (e.g., llm.chat_completion)
    • Aggregation Type: Select sum to add up token counts
    • Over Property: Choose what to track:
      • inputTokens - Track input/prompt tokens
      • outputTokens - Track output/completion tokens (includes reasoning tokens when applicable)
      • totalTokens - Track combined input + output tokens
The Event Name you set here must match exactly what you pass to the SDK (case-sensitive).
For detailed instructions, see the Usage-Based Billing Guide.
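For example, if your meter's Event Name is llm.chat_completion, pass the identical string when creating the tracker (see Configuration below):
// Must match the meter's Event Name exactly, including case.
eventName: 'llm.chat_completion'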
4. Track Token Usage

Wrap your LLM client and start tracking automatically:
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import { generateText } from 'ai';
import { google } from '@ai-sdk/google';

const llmTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'aisdk.usage',
});

const client = llmTracker.wrap({
  client: { generateText },
  customerId: 'customer_123'
});

const response = await client.generateText({
  model: google('gemini-2.0-flash'),
  prompt: 'Hello!',
  maxOutputTokens: 500
});

console.log('Usage:', response.usage);
That’s it! Every API call now automatically tracks token usage and sends events to Dodo Payments for billing.

Configuration

Tracker Configuration

Create a tracker once at application startup with these required parameters:
apiKey (string, required)
Your Dodo Payments API key. Get it from the API Keys page.
apiKey: process.env.DODO_PAYMENTS_API_KEY
environment (string, required)
The environment mode for the tracker.
  • test_mode - Use for development and testing
  • live_mode - Use for production
environment: 'test_mode' // or 'live_mode'
Always use test_mode during development to avoid affecting production metrics.
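One common pattern (also used in the Express.js example below) is to derive the mode from NODE_ENV:
environment: process.env.NODE_ENV === 'production' ? 'live_mode' : 'test_mode'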
eventName (string, required)
The event name that triggers your meter. Must match exactly what you configured in your Dodo Payments meter (case-sensitive).
eventName: 'llm.chat_completion'
This event name links your tracked usage to the correct meter for billing calculations.

Wrapper Configuration

When wrapping your LLM client, provide these parameters:
client (object, required)
Your LLM client instance (OpenAI, Anthropic, Groq, etc.).
client: openai
customerId (string, required)
The unique customer identifier for billing. This should match your customer ID in Dodo Payments.
customerId: 'customer_123'
Use your application’s user ID or customer ID to ensure accurate billing per customer.
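In a typical web app this is simply the authenticated user's ID. For instance, in an Express handler you might pass the following (req.user here is an assumption about your auth middleware):
customerId: req.user.id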
metadata (object, optional)
Additional data to attach to the tracking event. Useful for filtering and analysis.
metadata: {
  feature: 'chat',
  userTier: 'premium',
  sessionId: 'session_123',
  modelVersion: 'gpt-4'
}

Complete Configuration Example

import { createLLMTracker } from "@dodopayments/ingestion-blueprints";
import { generateText } from "ai";
import { google } from "@ai-sdk/google";
import "dotenv/config";

async function aiSdkExample() {
  console.log("🤖 AI SDK Simple Usage Example\n");

  try {
    // 1. Create tracker
    const llmTracker = createLLMTracker({
      apiKey: process.env.DODO_PAYMENTS_API_KEY!,
      environment: "test_mode",
      eventName: "your_meter_event_name",
    });

    // 2. Wrap the ai-sdk methods
    const client = llmTracker.wrap({
      client: { generateText },
      customerId: "customer_123",
      metadata: {
        provider: "ai-sdk",
      },
    });

    // 3. Use the wrapped function
    const response = await client.generateText({
      model: google("gemini-2.5-flash"),
      prompt: "Hello, I am a cool guy! Tell me a fun fact.",
      maxOutputTokens: 500,
    });

    console.log(response);
    console.log(response.usage);
    console.log("✅ Automatically tracked for customer\n");
  } catch (error) {
    console.error(error);
  }
}

aiSdkExample().catch(console.error);
Automatic Tracking: The SDK automatically tracks token usage in the background without modifying the response. Your code remains clean and identical to using the original provider SDKs.

Supported Providers

The LLM Blueprint works seamlessly with all major LLM providers and aggregators:
Vercel AI SDK
Track usage with the Vercel AI SDK for universal LLM support.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import { generateText } from 'ai';
import { google } from '@ai-sdk/google';

const llmTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'aisdk.usage',
});

const client = llmTracker.wrap({
  client: { generateText },
  customerId: 'customer_123',
  metadata: {
    model: 'gemini-2.0-flash',
    feature: 'chat'
  }
});

const response = await client.generateText({
  model: google('gemini-2.0-flash'),
  prompt: 'Explain neural networks',
  maxOutputTokens: 500
});

console.log('Usage:', response.usage);
Tracked Metrics:
  • inputTokens → inputTokens
  • outputTokens + reasoningTokens → outputTokens
  • totalTokens → totalTokens
  • Model name
When using reasoning-capable models through AI SDK (like Google’s Gemini 2.5 Flash with thinking mode), reasoning tokens are automatically included in the outputTokens count for accurate billing.
OpenRouter
Track token usage across 200+ models via OpenRouter’s unified API.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import OpenAI from 'openai';

// OpenRouter uses OpenAI-compatible API
const openrouter = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY
});

const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'openrouter.usage'
});

const client = tracker.wrap({ 
  client: openrouter, 
  customerId: 'user_123',
  metadata: { provider: 'openrouter' }
});

const response = await client.chat.completions.create({
  model: 'qwen/qwen3-max',
  messages: [{ role: 'user', content: 'What is machine learning?' }],
  max_tokens: 500
});

console.log('Response:', response.choices[0].message.content);
console.log('Usage:', response.usage);
Tracked Metrics:
  • prompt_tokens → inputTokens
  • completion_tokens → outputTokens
  • total_tokens → totalTokens
  • Model name
OpenRouter provides access to models from OpenAI, Anthropic, Google, Meta, and many more providers through a single API.
OpenAI
Track token usage from OpenAI’s GPT models automatically.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'openai.usage'
});

const client = tracker.wrap({ 
  client: openai, 
  customerId: 'user_123' 
});

// All OpenAI methods work automatically
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Explain quantum computing' }]
});

console.log('Total tokens:', response.usage.total_tokens);
Tracked Metrics:
  • prompt_tokens → inputTokens
  • completion_tokens → outputTokens
  • total_tokens → totalTokens
  • Model name
Anthropic
Track token usage from Anthropic’s Claude models.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'anthropic.usage'
});

const client = tracker.wrap({ 
  client: anthropic, 
  customerId: 'user_123' 
});

const response = await client.messages.create({
  model: 'claude-sonnet-4-0',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Explain machine learning' }]
});

console.log('Input tokens:', response.usage.input_tokens);
console.log('Output tokens:', response.usage.output_tokens);
Tracked Metrics:
  • input_tokens → inputTokens
  • output_tokens → outputTokens
  • Calculated totalTokens
  • Model name
Groq
Track ultra-fast LLM inference with Groq.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import Groq from 'groq-sdk';

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'groq.usage'
});

const client = tracker.wrap({ 
  client: groq, 
  customerId: 'user_123' 
});

const response = await client.chat.completions.create({
  model: 'llama-3.1-8b-instant',
  messages: [{ role: 'user', content: 'What is AI?' }]
});

console.log('Tokens:', response.usage);
Tracked Metrics:
  • prompt_tokens → inputTokens
  • completion_tokens → outputTokens
  • total_tokens → totalTokens
  • Model name
Google Gemini
Track token usage from Google’s Gemini models via the Google GenAI SDK.
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import { GoogleGenAI } from '@google/genai';

const googleGenai = new GoogleGenAI({ 
  apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY 
});

const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'test_mode',
  eventName: 'gemini.usage'
});

const client = tracker.wrap({ 
  client: googleGenai, 
  customerId: 'user_123' 
});

const response = await client.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'Explain quantum computing'
});

console.log('Response:', response.text);
console.log('Usage:', response.usageMetadata);
Tracked Metrics:
  • promptTokenCount → inputTokens
  • candidatesTokenCount + thoughtsTokenCount → outputTokens
  • totalTokenCount → totalTokens
  • Model version
Gemini Thinking Mode: When using Gemini models with thinking/reasoning capabilities (like Gemini 2.5 Pro), the SDK automatically includes thoughtsTokenCount (reasoning tokens) in outputTokens to accurately reflect the full computational cost.
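Conceptually, the tracked values are derived from the response’s usageMetadata roughly as follows (a sketch of the aggregation, not the SDK’s internal code):
// Field names come from the @google/genai response; the ?? 0 fallbacks
// cover models that report no thoughtsTokenCount.
const usage = response.usageMetadata;
const inputTokens = usage?.promptTokenCount ?? 0;
const outputTokens = (usage?.candidatesTokenCount ?? 0) + (usage?.thoughtsTokenCount ?? 0);
const totalTokens = usage?.totalTokenCount ?? 0;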

Advanced Usage

Multiple Providers

Track usage across different LLM providers with separate trackers:
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import OpenAI from 'openai';
import Groq from 'groq-sdk';
import Anthropic from '@anthropic-ai/sdk';
import { GoogleGenAI } from '@google/genai';

// Create separate trackers for different providers
const openaiTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'live_mode',
  eventName: 'openai.usage'
});

const groqTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'live_mode',
  eventName: 'groq.usage'
});

const anthropicTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'live_mode',
  eventName: 'anthropic.usage'
});

const geminiTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'live_mode',
  eventName: 'gemini.usage'
});

const openrouterTracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: 'live_mode',
  eventName: 'openrouter.usage'
});

// Initialize clients
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const googleGenai = new GoogleGenAI({ apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY });
const openrouter = new OpenAI({ 
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY 
});

// Wrap clients
const trackedOpenAI = openaiTracker.wrap({ client: openai, customerId: 'user_123' });
const trackedGroq = groqTracker.wrap({ client: groq, customerId: 'user_123' });
const trackedAnthropic = anthropicTracker.wrap({ client: anthropic, customerId: 'user_123' });
const trackedGemini = geminiTracker.wrap({ client: googleGenai, customerId: 'user_123' });
const trackedOpenRouter = openrouterTracker.wrap({ client: openrouter, customerId: 'user_123' });

// Use whichever provider you need
const response = await trackedOpenAI.chat.completions.create({...});
// or
const geminiResponse = await trackedGemini.models.generateContent({...});
// or
const openrouterResponse = await trackedOpenRouter.chat.completions.create({...});
Use different event names for different providers to track usage separately in your meters.
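Since these trackers differ only in their event name, you could factor out the shared configuration into a small helper (makeTracker is a hypothetical convenience, not part of the SDK):
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';

// Hypothetical helper: centralizes the API key and environment so each
// provider only supplies its meter's event name.
const makeTracker = (eventName: string) =>
  createLLMTracker({
    apiKey: process.env.DODO_PAYMENTS_API_KEY,
    environment: 'live_mode',
    eventName,
  });

const openaiTracker = makeTracker('openai.usage');
const groqTracker = makeTracker('groq.usage');
const anthropicTracker = makeTracker('anthropic.usage');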

Express.js API Integration

Complete example of integrating LLM tracking into an Express.js API:
import express from 'express';
import { createLLMTracker } from '@dodopayments/ingestion-blueprints';
import OpenAI from 'openai';

const app = express();
app.use(express.json());

// Initialize OpenAI client
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Create tracker once at startup
const tracker = createLLMTracker({
  apiKey: process.env.DODO_PAYMENTS_API_KEY,
  environment: process.env.NODE_ENV === 'production' ? 'live_mode' : 'test_mode',
  eventName: 'api.chat_completion'
});

// Chat endpoint with automatic tracking
app.post('/api/chat', async (req, res) => {
  try {
    const { message, userId } = req.body;
    
    // Validate input
    if (!message || !userId) {
      return res.status(400).json({ error: 'Missing message or userId' });
    }
    
    // Wrap client for this specific user
    const trackedClient = tracker.wrap({
      client: openai,
      customerId: userId,
      metadata: { 
        endpoint: '/api/chat',
        timestamp: new Date().toISOString()
      }
    });
    
    // Make LLM request - automatically tracked
    const response = await trackedClient.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: message }],
      temperature: 0.7
    });
    
    const completion = response.choices[0].message.content;
    
    res.json({ 
      message: completion,
      usage: response.usage
    });
  } catch (error) {
    console.error('Chat error:', error);
    res.status(500).json({ error: 'Internal server error' });
  }
});

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
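With the server running locally, you can exercise the endpoint like this (a client-side sketch; the URL and payload match the route above):
const res = await fetch('http://localhost:3000/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello!', userId: 'customer_123' })
});
console.log(await res.json()); // { message: '...', usage: { ... } }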

What Gets Tracked

Every LLM API call automatically sends a usage event to Dodo Payments with the following structure:
{
  "event_id": "llm_1673123456_abc123",
  "customer_id": "customer_123",
  "event_name": "llm.chat_completion",
  "timestamp": "2024-01-08T10:30:00Z",
  "metadata": {
    "inputTokens": 10,
    "outputTokens": 25,
    "totalTokens": 35,
    "model": "gpt-4",
  }
}

Event Fields

event_id (string)
Unique identifier for this specific event, automatically generated by the SDK. Format: llm_[timestamp]_[random]
customer_id (string)
The customer ID you provided when wrapping the client. Used for billing.
event_name (string)
The event name that triggers your meter. Matches your tracker configuration.
timestamp (string)
ISO 8601 timestamp when the event occurred.
metadata (object)
Token usage and additional tracking data:
  • inputTokens - Number of input/prompt tokens used
  • outputTokens - Number of output/completion tokens used (includes reasoning tokens when applicable)
  • totalTokens - Total tokens (input + output)
  • model - The LLM model used (e.g., “gpt-4”)
  • provider - The LLM provider (if included in wrapper metadata)
  • Any custom metadata you provided when wrapping the client
Reasoning Tokens: For models with reasoning capabilities, outputTokens automatically includes both the completion tokens and reasoning tokens.
Your Dodo Payments meter uses the metadata fields (especially inputTokens, outputTokens, or totalTokens) to calculate usage and billing.
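As an illustration, with the meter from the Quick Start (aggregation type sum over totalTokens), three tracked calls in a billing period would accumulate like this (numbers are made up):
// Illustrative only: a 'sum' meter adds up the chosen property across events.
const totalTokensPerCall = [35, 120, 45];
const billableUnits = totalTokensPerCall.reduce((sum, n) => sum + n, 0); // 200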