# OpenAI
Configure the OpenAI provider for GPT-5, GPT-4o, and compatible APIs.
## Setup
```bash
pnpm add @ai-sdk/openai
```

```ts
import { AiSdkProvider } from "noumen";
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  baseURL: "https://...", // optional, for Azure / proxies / compatible APIs
});

const provider = new AiSdkProvider({
  // `.chat(id)` pins the call to the chat/completions endpoint.
  // Drop `.chat(...)` to use the Responses API instead.
  model: openai.chat("gpt-5"),
});
```

## Options
All connection-level options (API key, base URL, custom headers, organization, project) come straight from `createOpenAI`. noumen does not re-export them — whatever the AI SDK ships, you can pass.
| Option | Source | Description |
|---|---|---|
| `apiKey` | `createOpenAI` | OpenAI API key. |
| `baseURL` | `createOpenAI` | Override the API base URL (for Azure, proxies, or compatible APIs). |
| `headers` | `createOpenAI` | Custom headers sent with every request (useful for proxies, auth, or tracking). |
| `organization`, `project` | `createOpenAI` | OpenAI organization and project IDs. |
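As a sketch, a client that combines several of these options — the header name and the ID values below are placeholders, not real credentials:

```ts
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  organization: "org-...", // placeholder: your OpenAI organization ID
  project: "proj-...",     // placeholder: your OpenAI project ID
  headers: {
    // placeholder custom header, e.g. for a tracking proxy
    "X-Request-Source": "my-app",
  },
});
```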
## Compatible APIs
The `baseURL` option lets you use any OpenAI-compatible API. This includes:
- Azure OpenAI — point to your Azure endpoint.
- OpenRouter — or use the dedicated OpenRouter page for rank-tracking headers.
- Local models — Ollama, vLLM, or other local servers.
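For the local-model case, a minimal sketch pointing at Ollama's OpenAI-compatible endpoint — this assumes Ollama's default port and an already-pulled `llama3.1` model; Ollama ignores the API key, but the field must be set:

```ts
import { AiSdkProvider } from "noumen";
import { createOpenAI } from "@ai-sdk/openai";

const ollama = createOpenAI({
  apiKey: "ollama", // ignored by Ollama, but required by createOpenAI
  baseURL: "http://localhost:11434/v1",
});

const provider = new AiSdkProvider({
  // Local servers typically implement only chat/completions, so pin with `.chat(...)`.
  model: ollama.chat("llama3.1"),
});
```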
For example, routing through OpenRouter:

```ts
import { AiSdkProvider } from "noumen";
import { createOpenAI } from "@ai-sdk/openai";

const openrouter = createOpenAI({
  apiKey: process.env.OPENROUTER_API_KEY!,
  baseURL: "https://openrouter.ai/api/v1",
});

const provider = new AiSdkProvider({
  model: openrouter.chat("anthropic/claude-sonnet-4"),
});
```

## Chat vs Responses API
Calling the provider directly, as in `openai("gpt-5")`, returns a Responses API model by default. That endpoint is the most feature-rich (structured output, stateful tool use, image generation), but some OpenAI-compatible proxies only implement chat/completions. When you're pointing at a proxy, pin to chat/completions:

```ts
const provider = new AiSdkProvider({ model: openai.chat("gpt-5") });
```

Use the default `openai("gpt-5")` form for the Responses API.
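To make the contrast concrete, the two forms side by side (reusing the `openai` client from Setup):

```ts
// Responses API (default): richest feature set.
const responsesProvider = new AiSdkProvider({ model: openai("gpt-5") });

// chat/completions: safest choice behind proxies and compatible servers.
const chatProvider = new AiSdkProvider({ model: openai.chat("gpt-5") });
```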
## Reasoning / thinking
GPT-5 and o-series models support a reasoning-effort setting. Pass it on `AgentOptions.thinking` (mapped to `reasoning_effort: "high"`) or directly via `reasoningEffort: "low" | "medium" | "high" | "minimal"` on `ChatParams`:
```ts
const agent = new Agent({
  provider,
  sandbox: LocalSandbox({ cwd: "." }),
  options: { thinking: { type: "enabled", budgetTokens: 10_000 } },
});
```

## Streaming
`AiSdkProvider` uses `stream: true` and surfaces `include_usage` metadata automatically. No additional configuration is needed.
## Models
Any model available through the OpenAI API works. Common choices:
- `gpt-5` — latest flagship
- `gpt-4o` — fast, capable, good default
- `gpt-4.1` — previous generation
- `gpt-4o-mini` — cheaper, faster for simpler tasks
- `o3-mini` / `o4-mini` — reasoning models