# Google Gemini

Configure the Gemini provider for Google's AI models.

## Setup
Install the provider package:

```sh
pnpm add @ai-sdk/google
```

Then create the provider:

```ts
import { AiSdkProvider } from "noumen";
import { createGoogleGenerativeAI } from "@ai-sdk/google";

const google = createGoogleGenerativeAI({
  apiKey: process.env.GEMINI_API_KEY!,
});

const provider = new AiSdkProvider({
  model: google("gemini-2.5-flash"),
  providerFamily: "google",
});
```

## Options

All connection-level options come from `createGoogleGenerativeAI`.
| Option | Source | Description |
|---|---|---|
| `apiKey` | `createGoogleGenerativeAI` | Google AI Studio API key. |
| `baseURL` | `createGoogleGenerativeAI` | Override the API base URL. |
| `headers` | `createGoogleGenerativeAI` | Custom headers sent with every request. |
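As a sketch of how these options combine, the snippet below points the provider at a custom endpoint with an extra header. The `baseURL` and header values are placeholders, not real endpoints:

```ts
import { createGoogleGenerativeAI } from "@ai-sdk/google";

// baseURL and header values are placeholders for illustration.
const google = createGoogleGenerativeAI({
  apiKey: process.env.GEMINI_API_KEY!,
  baseURL: "https://my-gateway.example.com/v1beta",
  headers: { "x-team": "agents" },
});
```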
## Streaming
`AiSdkProvider` routes through the AI SDK's `doStream()`. Gemini-specific mapping is handled by `@ai-sdk/google`:

- Text parts become `content` deltas.
- `functionCall` parts become `tool_calls` deltas (with JSON repair applied).
- Usage metadata is emitted on `finish`.
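The mapping described above can be sketched as a pure function. The part and delta shapes here are simplified assumptions for illustration, not noumen's or the AI SDK's actual types, and the JSON repair is a naive stand-in:

```ts
// Hypothetical simplified shapes; the real AI SDK stream parts differ.
type GeminiPart =
  | { type: "text"; text: string }
  | { type: "functionCall"; name: string; argsJson: string };

type Delta =
  | { type: "content"; text: string }
  | { type: "tool_calls"; name: string; args: unknown };

// Naive JSON repair sketch: retry parsing while appending closing braces.
function repairJson(s: string): unknown {
  let candidate = s;
  for (let i = 0; i < 3; i++) {
    try {
      return JSON.parse(candidate);
    } catch {
      candidate += "}";
    }
  }
  return {};
}

function toDelta(part: GeminiPart): Delta {
  if (part.type === "text") return { type: "content", text: part.text };
  // functionCall parts become tool_calls deltas, with argument repair applied.
  return { type: "tool_calls", name: part.name, args: repairJson(part.argsJson) };
}
```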
## Message conversion
Handled by the AI SDK's `convertToModelMessages`:

- System messages become the `systemInstruction` config parameter.
- OpenAI-style `tool` messages are converted to `functionResponse` parts.
- Tool call IDs are resolved to function names automatically.
- Multiple tool results are batched into a single `user` turn.
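The last two rules can be sketched together: resolve each tool call ID back to its function name, then batch every result into one user turn. The message and part shapes below are simplified assumptions for illustration, not the library's real types:

```ts
// Hypothetical simplified message shapes for illustration.
type ToolMessage = { role: "tool"; toolCallId: string; content: string };
type FunctionResponsePart = {
  functionResponse: { name: string; response: { output: string } };
};
type UserTurn = { role: "user"; parts: FunctionResponsePart[] };

// Batch all tool results into a single user turn, resolving each
// tool call ID to the function name it originally invoked.
function batchToolResults(
  messages: ToolMessage[],
  idToName: Map<string, string>,
): UserTurn {
  return {
    role: "user",
    parts: messages.map((m) => ({
      functionResponse: {
        name: idToName.get(m.toolCallId) ?? "unknown",
        response: { output: m.content },
      },
    })),
  };
}
```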
## Thinking budget
Gemini 2.5 supports `thinkingConfig`. Enable it via `AgentOptions.thinking`:

```ts
const agent = new Agent({
  provider,
  sandbox: LocalSandbox({ cwd: "." }),
  options: { thinking: { type: "enabled", budgetTokens: 8_000 } },
});
```

`AiSdkProvider` maps this to `providerOptions.google.thinkingConfig = { thinkingBudget: 8000 }`.
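That mapping can be sketched as a small pure function. The input shape mirrors the `thinking` option above; the `"disabled"` variant and the function itself are assumptions for illustration, not noumen's actual internals:

```ts
// Hypothetical input shape mirroring AgentOptions.thinking above.
type ThinkingOption =
  | { type: "enabled"; budgetTokens: number }
  | { type: "disabled" };

// Map the agent-level thinking option to Gemini's provider options:
// an enabled budget becomes providerOptions.google.thinkingConfig.
function toGoogleProviderOptions(
  thinking: ThinkingOption,
): { google?: { thinkingConfig: { thinkingBudget: number } } } {
  if (thinking.type !== "enabled") return {};
  return {
    google: { thinkingConfig: { thinkingBudget: thinking.budgetTokens } },
  };
}
```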
## Gemini behind an OpenAI-compatible proxy
If you route Gemini traffic through an OpenAI-compatible gateway (e.g. your own metered proxy that exposes `/v1/chat/completions`), wrap it with `@ai-sdk/openai` and pin to `.chat(...)` — noumen will classify it as the `openai` family and map reasoning effort accordingly:

```ts
import { createOpenAI } from "@ai-sdk/openai";

const gateway = createOpenAI({
  baseURL: "https://my-proxy.example.com/google",
  apiKey: userJwt,
});

const provider = new AiSdkProvider({ model: gateway.chat("gemini-2.5-flash") });
```

## Models

- `gemini-2.5-flash` — fast, good default for coding tasks.
- `gemini-2.5-pro` — higher capability for complex tasks.
- `gemini-2.0-flash` — previous generation, still capable.