OpenRouter
Configure the OpenRouter provider to access hundreds of models through a single API key.
Setup
```bash
pnpm add @openrouter/ai-sdk-provider
```

```ts
import { AiSdkProvider } from "noumen";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY!,
});

const provider = new AiSdkProvider({
  model: openrouter.chat("anthropic/claude-opus-4.6"),
});
```

Options
All connection-level options come from `createOpenRouter`.

| Option | Source | Description |
|---|---|---|
| `apiKey` | `createOpenRouter` | OpenRouter API key. |
| `baseURL` | `createOpenRouter` | Override the API base URL. Defaults to `https://openrouter.ai/api/v1`. |
| `headers` | `createOpenRouter` | Custom headers. Use this for OpenRouter's ranking headers: `HTTP-Referer` (your app URL) and `X-Title` (your app name). |
| `extraBody` | `createOpenRouter` | Extra fields to merge into every request body (for OpenRouter's provider preferences, transforms, etc.). |
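As a sketch of how these options fit together, the snippet below sets `headers` and `extraBody` at connection time. The `provider` and `transforms` fields inside `extraBody` are examples of OpenRouter request-body options; check OpenRouter's API docs for the full set of accepted fields.

```ts
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

// Connection-level options. Everything in extraBody is merged into every
// request body and interpreted by OpenRouter, not by the SDK.
const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY!,
  baseURL: "https://openrouter.ai/api/v1", // the default; override only for proxies
  headers: {
    "HTTP-Referer": "https://myapp.com", // example ranking headers
    "X-Title": "My Coding Agent",
  },
  extraBody: {
    provider: { order: ["anthropic", "openai"] }, // example provider preference
    transforms: ["middle-out"], // example prompt transform
  },
});
```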
App identification
OpenRouter optionally accepts headers to identify your app on their leaderboards:
```ts
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY!,
  headers: {
    "HTTP-Referer": "https://myapp.com",
    "X-Title": "My Coding Agent",
  },
});
```

Models
OpenRouter gives you access to models from every major provider through a single API. Pass any model ID listed on openrouter.ai/models:
- `anthropic/claude-opus-4.6` (Claude Opus)
- `anthropic/claude-sonnet-4` (Claude Sonnet)
- `openai/gpt-5` (GPT-5)
- `openai/gpt-4o` (GPT-4o)
- `google/gemini-2.5-pro` (Gemini 2.5 Pro)
- `deepseek/deepseek-r1` (DeepSeek R1)
- `meta-llama/llama-4-maverick` (Llama 4 Maverick)
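Any of these IDs drops straight into the Setup snippet; for example, swapping Claude Opus for DeepSeek R1:

```ts
import { AiSdkProvider } from "noumen";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY! });

// Same wiring as the Setup snippet; only the model ID string changes.
const provider = new AiSdkProvider({
  model: openrouter.chat("deepseek/deepseek-r1"),
});
```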
How it works
`@openrouter/ai-sdk-provider` implements the AI SDK `LanguageModelV2` protocol on top of OpenRouter's OpenAI-compatible API. `AiSdkProvider` therefore treats it as a generic OpenAI-family model and forwards `reasoningEffort` via `providerOptions.openai.reasoningEffort`. If you're calling Claude models through OpenRouter and want Anthropic-specific cache breakpoints, pass `providerFamily: "anthropic"` and `cacheConfig: { enabled: true }` explicitly; OpenRouter accepts Anthropic-style `cache_control` markers for supported models.
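A minimal sketch of that override, assuming `providerFamily` and `cacheConfig` sit alongside `model` in the `AiSdkProvider` constructor options:

```ts
import { AiSdkProvider } from "noumen";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY!,
});

// A Claude model routed through OpenRouter, with Anthropic-style prompt
// caching opted in explicitly (assumed constructor shape; see the
// providerFamily and cacheConfig options described above).
const provider = new AiSdkProvider({
  model: openrouter.chat("anthropic/claude-opus-4.6"),
  providerFamily: "anthropic",
  cacheConfig: { enabled: true },
});
```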
Streaming
Streaming works identically to any other AI SDK provider. No additional configuration is needed.
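If you want to exercise the underlying model directly, a minimal streaming sketch with the AI SDK's `streamText` (independent of noumen; the prompt and output handling are illustrative) looks like this:

```ts
import { streamText } from "ai";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY! });

// Stream tokens from a model routed through OpenRouter. No
// OpenRouter-specific streaming configuration is required.
const { textStream } = streamText({
  model: openrouter.chat("anthropic/claude-sonnet-4"),
  prompt: "Explain prompt caching in two sentences.",
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```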