Overview
The Model Context Protocol (MCP) is an open standard that enables AI applications to securely connect to external data sources and APIs. The Dodo Payments MCP Server provides AI assistants like Claude, Cursor, and other MCP-compatible clients with structured access to your payment infrastructure. The Dodo Payments MCP Server uses the Code Mode architecture: instead of exposing hundreds of individual tools for every API endpoint, Code Mode enables AI agents to write and execute TypeScript code against the Dodo Payments SDK in an isolated sandbox environment.
Key capabilities
- Payment Operations: Create, retrieve, and manage payments and refunds
- Subscription Management: Handle recurring billing, upgrades, and cancellations
- Customer Administration: Manage customer data and portal access
- Product Catalog: Create and update products, pricing, and discounts
- License Management: Activate, validate, and manage software licenses
- Usage-Based Billing: Track and bill for metered usage
How Code Mode Works
The Dodo Payments MCP Server provides your AI agent with exactly two tools:
- Docs Search Tool: Queries documentation about the Dodo Payments API and SDK to understand available operations and parameters.
- Code Execution Tool: Writes TypeScript code against the SDK that executes in a secure sandbox environment.
Quick Setup
Connect to the Dodo Payments MCP Server in your AI client:
- Cursor
- Claude Desktop
- Windsurf
- Claude Code
Add to ~/.cursor/mcp.json:
Requires Node.js 18 or higher. The remote server uses OAuth for authentication; you will be prompted to enter your API key and select your environment on first connection.
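As a sketch, an entry for the remote server might look like the following. The base URL comes from the installation section of these docs; the exact endpoint path (`/sse` here) is an assumption, so prefer the JSON shown on the configuration page itself:

```json
{
  "mcpServers": {
    "dodo-payments": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.dodopayments.com/sse"]
    }
  }
}
```

On first launch, the OAuth flow opens in your browser to collect your API key and environment, so no credentials need to appear in this file.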
Dodo Knowledge MCP
In addition to the Dodo Payments MCP Server (for executing API operations), we provide Dodo Knowledge MCP, a semantic search server that gives AI assistants instant access to Dodo Payments documentation and knowledge base.
Built with ContextMCP.ai: Dodo Knowledge MCP is powered by ContextMCP, enabling fast semantic search across our documentation using vector embeddings.
What is Dodo Knowledge MCP?
Dodo Knowledge MCP is a remote MCP server that provides:
- Semantic Documentation Search: Find relevant documentation using natural language queries.
- Contextual Answers: AI assistants get accurate, up-to-date information about Dodo Payments.
- Zero Setup: No API keys or local installation required—just connect and start querying.
Quick Setup
Connect to Dodo Knowledge MCP in your AI client:
- Cursor
- Claude Desktop
- Windsurf
- Claude Code
Add to ~/.cursor/mcp.json:
Requires Node.js 18 or higher. The mcp-remote package handles the connection to the remote MCP server.
Using Both MCP Servers Together
For the best AI-assisted development experience, we recommend using both MCP servers:
| Server | Purpose | Use Case |
|---|---|---|
| Dodo Knowledge MCP | Documentation search | "How do I handle webhooks?", "What payment methods are supported?" |
| Dodo Payments MCP | API operations | Create payments, manage subscriptions, handle refunds |
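A combined ~/.cursor/mcp.json for both servers might look like this sketch. The Dodo Payments URL follows the installation section; the Knowledge MCP URL is a placeholder you should replace with the address from its configuration page:

```json
{
  "mcpServers": {
    "dodo-payments": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.dodopayments.com/sse"]
    },
    "dodo-knowledge": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://<knowledge-mcp-url-from-config-page>/sse"]
    }
  }
}
```

With both entries in place, your assistant can look up documentation through one server and execute the corresponding API operations through the other.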
Troubleshooting Knowledge MCP
If you encounter connection issues:
- Clear the MCP authentication cache: rm -rf ~/.mcp-auth
- Restart your client application
- Check client logs for error messages
- Verify your Node.js version (18 or higher is required)
Knowledge MCP Server
Access the Dodo Knowledge MCP configuration page
Installation
Choose the installation method that best fits your workflow.
Remote MCP Server (Recommended)
Access the hosted MCP server without any local setup or installation. This is the fastest way to get started.
Access the remote server
Navigate to https://mcp.dodopayments.com in your browser.
Configure your MCP client
Copy the provided JSON configuration for your specific client. For Cursor or Claude Desktop, add this to your MCP settings:
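For example, a sketch of such an entry (the endpoint path is an assumption; copy the exact JSON shown at mcp.dodopayments.com for your client):

```json
{
  "mcpServers": {
    "dodo-payments": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.dodopayments.com/sse"]
    }
  }
}
```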
Authenticate and configure
The OAuth flow will prompt you to:
- Enter your Dodo Payments API key
- Select your environment (test or live)
- Choose your MCP client type
NPM Package
Install and run the MCP server locally using NPM.
- NPX (No Installation)
- MCP Client Configuration
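A local NPX-based client entry might look like the following sketch. The package name dodopayments-mcp is an assumption (check the NPM registry for the actual name), and the environment variables follow the Environment Variables section below; test_mode as a value is likewise an assumption:

```json
{
  "mcpServers": {
    "dodo-payments": {
      "command": "npx",
      "args": ["-y", "dodopayments-mcp"],
      "env": {
        "DODO_PAYMENTS_API_KEY": "your-test-api-key",
        "DODO_PAYMENTS_ENVIRONMENT": "test_mode"
      }
    }
  }
}
```

Unlike the remote server, a local process reads its credentials from the env block, so keep this file out of version control if it contains a real key.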
Docker
Run the MCP server in a containerized environment for consistent deployment. Docker images are available on GitHub Container Registry.
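As a sketch, an MCP client can launch the container directly; the image path below is a placeholder, so substitute the actual image name published on GitHub Container Registry:

```json
{
  "mcpServers": {
    "dodo-payments": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-e", "DODO_PAYMENTS_API_KEY",
        "ghcr.io/<org>/<image>:latest"
      ],
      "env": {
        "DODO_PAYMENTS_API_KEY": "your-api-key"
      }
    }
  }
}
```

The `-i` flag keeps stdin open, which MCP's stdio transport requires, and `-e DODO_PAYMENTS_API_KEY` forwards the key from the client's env block into the container.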
Client Configuration
Configure the Dodo Payments MCP server in your preferred AI client.
- Cursor
- Claude Desktop
- Claude Code
- VS Code
- Cline (VS Code)
- Zed
- Other Clients
Set up the Dodo Payments MCP server in Cursor to enable conversational access to your payments data.
One-Click Install
Use the button below to install the MCP server directly in Cursor:
After clicking, set your environment variables in Cursor's mcp.json via Cursor Settings > Tools & MCP > New MCP Server.
Manual Configuration
Open Cursor settings
Navigate to Cursor Settings > Features > Model Context Protocol or press Cmd/Ctrl + Shift + P and search for "MCP Settings".
Add Dodo Payments configuration
Choose one of the following configurations:
- Remote Server (Recommended)
- Local NPX
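A Local NPX sketch is shown below; the package name is an assumption (check the NPM registry), and the Remote Server JSON can be copied directly from mcp.dodopayments.com:

```json
{
  "mcpServers": {
    "dodo-payments": {
      "command": "npx",
      "args": ["-y", "dodopayments-mcp"],
      "env": {
        "DODO_PAYMENTS_API_KEY": "your-test-api-key"
      }
    }
  }
}
```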
Environment Variables
Configure the MCP server behavior using environment variables.
| Variable | Description | Required |
|---|---|---|
| DODO_PAYMENTS_API_KEY | Your Dodo Payments API key | Yes |
| DODO_PAYMENTS_WEBHOOK_KEY | Your webhook signing key | No |
| DODO_PAYMENTS_ENVIRONMENT | Set to live_mode for production | No |
Running Remotely
Deploy the MCP server as a remote HTTP server for web-based clients or agentic workflows.
Remote Server Configuration
Once deployed, clients can connect using the server URL.
Authorization Headers
The remote server accepts authentication via the following headers:
| Header | Description |
|---|---|
| Authorization | Bearer token authentication |
| x-dodo-payments-api-key | Direct API key header |
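For instance, a client that supports URL-based MCP entries could reference your deployment like this sketch. The URL is a placeholder for wherever you host the server, and the exact key names for URL-based entries vary by client, so consult your client's MCP documentation:

```json
{
  "mcpServers": {
    "dodo-payments": {
      "url": "https://your-deployment.example.com/mcp",
      "headers": {
        "x-dodo-payments-api-key": "your-api-key"
      }
    }
  }
}
```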
Security Best Practices
Code Mode provides inherent security by executing code in a sandboxed environment and injecting API keys server-side. Follow these additional best practices to protect your credentials.
API Key Management
- Never commit credentials to version control: Store API keys in environment variables or secure secret management systems.
- Rotate keys regularly: Generate new API keys periodically and revoke old ones through your Dodo Payments dashboard.
- Use test keys for development: Always use test mode API keys during development to avoid affecting production data.
Access Control
- Implement authentication for remote servers: When deploying remotely, always require authentication via the Authorization header or x-dodo-payments-api-key header.
- Monitor API usage: Track MCP server activity through your Dodo Payments dashboard and set up alerts for unusual patterns.
Network Security
- Use HTTPS for remote servers: Always deploy remote MCP servers behind HTTPS endpoints.
- Implement rate limiting: Protect against abuse by implementing rate limits at both the MCP server and API levels.
- Restrict network access: Configure firewall rules to limit which clients can connect to your MCP server.
Troubleshooting
Connection issues
- Verify your API key: Ensure your API key is correctly set and has the necessary permissions.
- Check your network connection: Verify you can reach the Dodo Payments API endpoints.
- Review client logs: Enable verbose logging in your MCP client to diagnose connection problems.
Authentication errors
- Confirm API key environment: Ensure you're using test keys with test endpoints and live keys with production endpoints.
- Check environment variable: Verify DODO_PAYMENTS_ENVIRONMENT is set correctly (live_mode for production).
- Regenerate credentials: If issues persist, generate a new API key through your dashboard.
Tool execution failures
- Validate input parameters: Ensure the AI assistant is providing correctly formatted parameters for each tool.
- Review error messages: Check the error response from the API for specific guidance on what went wrong.
- Test the API directly: Verify the operation works when calling the Dodo Payments API directly via curl or Postman.
Why Code Mode
Traditional MCP implementations often suffer from "tool proliferation," where every API endpoint is exposed as a separate tool. Code Mode is a superior approach for several reasons:
LLMs are better at writing code than calling tools
LLMs have been trained on millions of lines of real-world code, making them naturally proficient at writing scripts. In contrast, tool-calling is often based on synthetic examples.
"Making an LLM perform tasks with tool calling is like putting Shakespeare through a month-long class in Mandarin and then asking him to write a play in it." — Cloudflare
Eliminates context window bloat
In a traditional approach, every tool definition consumes tokens before the conversation even starts. Exposing 50+ tools can easily eat 55K–100K+ tokens. Anthropic found that tool definitions could consume up to 134K tokens before optimization. With Code Mode, only 2 tool definitions are loaded (~1K tokens). The agent searches for the documentation it needs on demand. Anthropic's Tool Search Tool preserved 95% of the context window, reducing overhead from 77K to 8.7K tokens.
Reduces latency via programmatic orchestration
Traditional tool-calling requires a full model inference round-trip for every single operation. If a task requires 20 API calls, that's 20 round-trips. In Code Mode, the agent writes one script that executes all calls and returns only the final result. Anthropic observed a 37% reduction in tokens and improved accuracy (knowledge retrieval improved from 25.6% to 28.5%) using this programmatic approach.
More secure by design
Code Mode provides inherent security benefits:
- No API keys in parameters: API keys are injected server-side and never exposed in the tool parameters sent to the LLM.
- Isolated sandbox: Code runs in a secure environment with no access to the network or the host filesystem.
- Controlled SDK: Only authorized SDK methods are available to the agent.
Scales to any API size
As an API grows, traditional MCP performance degrades because more tools must be loaded into the context. Code Mode remains constant at 2 tools regardless of the API's surface area. Cloudflare successfully collapsed over 2,500 API endpoints into just 2 tools and approximately 1,000 tokens of context.
For more details on the benefits of this architecture, see the engineering blogs from Anthropic and Cloudflare, and the Programmatic Tool Calling documentation from Claude.