Providers

Google Gemini

Configure the Gemini provider for Google's AI models.

Setup

pnpm add @ai-sdk/google

import { AiSdkProvider } from "noumen";
import { createGoogleGenerativeAI } from "@ai-sdk/google";

const google = createGoogleGenerativeAI({
  apiKey: process.env.GEMINI_API_KEY!,
});

const provider = new AiSdkProvider({
  model: google("gemini-2.5-flash"),
  providerFamily: "google",
});

Options

All connection-level options come from createGoogleGenerativeAI.

Option    Source                    Description
apiKey    createGoogleGenerativeAI  Google AI Studio API key.
baseURL   createGoogleGenerativeAI  Override the API base URL.
headers   createGoogleGenerativeAI  Custom headers sent with every request.
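
For example, the connection-level options above can be combined when routing through a custom endpoint. This is a sketch; the baseURL and header values are placeholders, not real endpoints:

```typescript
import { createGoogleGenerativeAI } from "@ai-sdk/google";

// All values below are illustrative placeholders.
const google = createGoogleGenerativeAI({
  apiKey: process.env.GEMINI_API_KEY!,
  baseURL: "https://my-gateway.example.com/v1beta", // override the default API base URL
  headers: { "x-request-source": "noumen-docs" },   // sent with every request
});
```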

Streaming

AiSdkProvider routes through the AI SDK's doStream(). Gemini-specific mapping is handled by @ai-sdk/google:

  • Text parts become content deltas.
  • functionCall parts become tool_calls deltas (with JSON repair applied).
  • Usage metadata is emitted on finish.
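
As a rough illustration of the three bullet points above, a reducer that folds a stream of deltas into a final message might look like the following. The delta shapes here are assumptions made for this example, not noumen's actual internal types:

```typescript
// Hypothetical delta shapes; a sketch, not noumen's real event types.
type Delta =
  | { type: "content"; text: string }
  | { type: "tool_calls"; name: string; argsJson: string }
  | { type: "finish"; usage: { inputTokens: number; outputTokens: number } };

interface Accumulated {
  content: string;
  toolCalls: { name: string; argsJson: string }[];
  usage?: { inputTokens: number; outputTokens: number };
}

function reduceDeltas(deltas: Delta[]): Accumulated {
  const acc: Accumulated = { content: "", toolCalls: [] };
  for (const d of deltas) {
    if (d.type === "content") acc.content += d.text;           // text parts accumulate
    else if (d.type === "tool_calls")
      acc.toolCalls.push({ name: d.name, argsJson: d.argsJson }); // functionCall parts
    else acc.usage = d.usage;                                   // usage arrives once, on finish
  }
  return acc;
}
```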

Message conversion

Handled by the AI SDK's convertToModelMessages:

  • System messages become the systemInstruction config parameter.
  • OpenAI-style tool messages are converted to functionResponse parts.
  • Resolution of tool call IDs to function names is handled automatically.
  • Multiple tool results are batched into a single user turn.
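
To make the last two bullets concrete, here is a hedged sketch of resolving tool call IDs to function names and batching the results into one user turn. All type names and shapes are assumptions for illustration, not the AI SDK's actual internals:

```typescript
// Hedged sketch: shapes are assumed, not the AI SDK's real types.
type OpenAIToolMessage = { role: "tool"; tool_call_id: string; content: string };

// Resolve each tool_call_id to its function name, then batch all results
// into a single Gemini-style user turn of functionResponse parts.
function batchToolResults(
  messages: OpenAIToolMessage[],
  idToName: Record<string, string>,
) {
  return {
    role: "user" as const,
    parts: messages.map((m) => ({
      functionResponse: {
        name: idToName[m.tool_call_id],
        response: { output: m.content },
      },
    })),
  };
}
```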

Thinking budget

Gemini 2.5 supports thinkingConfig. Enable it via AgentOptions.thinking:

const agent = new Agent({
  provider,
  sandbox: LocalSandbox({ cwd: "." }),
  options: { thinking: { type: "enabled", budgetTokens: 8_000 } },
});

AiSdkProvider maps this to providerOptions.google.thinkingConfig = { thinkingBudget: 8000 }.
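
A minimal sketch of that mapping, assuming a ThinkingOption shape like the one passed to AgentOptions.thinking above (the type and function names here are hypothetical; AiSdkProvider's real internals may differ):

```typescript
// Hypothetical sketch of the mapping AiSdkProvider performs.
type ThinkingOption =
  | { type: "enabled"; budgetTokens: number }
  | { type: "disabled" };

function toGoogleProviderOptions(thinking: ThinkingOption) {
  if (thinking.type !== "enabled") return {};
  // budgetTokens becomes Gemini's thinkingBudget, nested under the google key.
  return {
    google: { thinkingConfig: { thinkingBudget: thinking.budgetTokens } },
  };
}
```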

Gemini behind an OpenAI-compatible proxy

If you route Gemini traffic through an OpenAI-compatible gateway (e.g. your own metered proxy that exposes /v1/chat/completions), wrap it with @ai-sdk/openai and pin to .chat(...). noumen will then classify it as the openai family and map reasoning effort accordingly:

import { createOpenAI } from "@ai-sdk/openai";

const gateway = createOpenAI({
  baseURL: "https://my-proxy.example.com/google",
  apiKey: userJwt,
});
const provider = new AiSdkProvider({ model: gateway.chat("gemini-2.5-flash") });

Models

  • gemini-2.5-flash — fast, good default for coding tasks.
  • gemini-2.5-pro — higher capability for complex tasks.
  • gemini-2.0-flash — previous generation, still capable.