Getting Started
Install noumen, wire up a provider and a sandbox, and ship your first coding agent in minutes.
Installation
```bash
pnpm add noumen
```

noumen requires Node.js 18 or later.
Setup
Every noumen agent needs two things: an AI provider and a sandbox.
Choose a provider
noumen wraps any Vercel AI SDK LanguageModel via AiSdkProvider. Install the vendor SDK for the provider you want and hand its model instance to noumen. Cloud providers take an API key; Ollama runs locally with no key needed.
**OpenAI**

```bash
pnpm add @ai-sdk/openai
```

```ts
import { AiSdkProvider } from "noumen";
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const provider = new AiSdkProvider({ model: openai.chat("gpt-5") });
```

**Anthropic**

```bash
pnpm add @ai-sdk/anthropic
```

```ts
import { AiSdkProvider } from "noumen";
import { createAnthropic } from "@ai-sdk/anthropic";

const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });
const provider = new AiSdkProvider({
  model: anthropic("claude-opus-4.6"),
  providerFamily: "anthropic",
  cacheConfig: { enabled: true },
});
```

**Google Gemini**

```bash
pnpm add @ai-sdk/google
```

```ts
import { AiSdkProvider } from "noumen";
import { createGoogleGenerativeAI } from "@ai-sdk/google";

const google = createGoogleGenerativeAI({ apiKey: process.env.GEMINI_API_KEY! });
const provider = new AiSdkProvider({
  model: google("gemini-2.5-flash"),
  providerFamily: "google",
});
```

**OpenRouter**

```bash
pnpm add @openrouter/ai-sdk-provider
```

```ts
import { AiSdkProvider } from "noumen";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY! });
const provider = new AiSdkProvider({
  model: openrouter.chat("anthropic/claude-opus-4.6"),
});
```

**AWS Bedrock**

```bash
pnpm add @ai-sdk/amazon-bedrock
```

```ts
import { AiSdkProvider } from "noumen";
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock";

const bedrock = createAmazonBedrock({ region: process.env.AWS_REGION });
const provider = new AiSdkProvider({
  model: bedrock("us.anthropic.claude-opus-4.6-v1:0"),
  providerFamily: "anthropic",
  cacheConfig: { enabled: true },
});
```

**Google Vertex AI**

```bash
pnpm add @ai-sdk/google-vertex
```

```ts
import { AiSdkProvider } from "noumen";
import { createVertex } from "@ai-sdk/google-vertex";

const vertex = createVertex({
  project: process.env.GOOGLE_CLOUD_PROJECT,
  location: "us-east5",
});
const provider = new AiSdkProvider({
  model: vertex.anthropic("claude-opus-4.6"),
  providerFamily: "anthropic",
  cacheConfig: { enabled: true },
});
```

**Ollama**

Install Ollama and pull a model:

```bash
ollama pull qwen2.5-coder:32b
pnpm add ollama-ai-provider-v2
```

```ts
import { AiSdkProvider } from "noumen";
import { createOllama } from "ollama-ai-provider-v2";

const ollama = createOllama();
const provider = new AiSdkProvider({ model: ollama("qwen2.5-coder:32b") });
```

Prefer a one-liner? The string shorthand still works — pass provider: "anthropic" (or any other provider name) and noumen dynamically imports the matching @ai-sdk/* package for you.
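The shorthand's name-to-package mapping can be pictured as a small lookup followed by a dynamic import. The table and helper below are hypothetical illustrations of that pattern, not noumen's actual resolution logic:

```typescript
// Hypothetical sketch of string-shorthand resolution: map a provider name to
// the @ai-sdk/* package that would be dynamically imported. Not noumen's code.
const packageFor: Record<string, string> = {
  openai: "@ai-sdk/openai",
  anthropic: "@ai-sdk/anthropic",
  google: "@ai-sdk/google",
};

function resolvePackageName(provider: string): string {
  const pkg = packageFor[provider];
  if (!pkg) throw new Error(`Unknown provider shorthand: ${provider}`);
  // A real implementation would follow up with: await import(pkg)
  return pkg;
}
```

Because the import happens lazily, only the vendor SDK for the provider you actually name ever needs to be installed.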
Choose a sandbox
A sandbox bundles a filesystem and a shell into one object. All tool I/O routes through it. LocalSandbox uses OS-level sandboxing via @anthropic-ai/sandbox-runtime. For raw host access with no OS sandbox boundary, use UnsandboxedLocal instead.
```ts
import { LocalSandbox } from "noumen/local";

const sandbox = LocalSandbox({ cwd: "/path/to/your/project" });
```

For remote isolation or different environments, swap in SpritesSandbox (from noumen/sprites), DockerSandbox (from noumen/docker), E2BSandbox (from noumen/e2b), FreestyleSandbox (from noumen/freestyle), or SshSandbox (from noumen/ssh). Each remote backend ships on its own subpath so its optional peer dep is only loaded when you opt in. See Virtual Infrastructure.
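The "filesystem plus shell in one object" contract can be pictured as a single interface that every backend implements. The types below are an illustrative sketch of that idea (FileOps, ShellOps, and the in-memory backend are hypothetical names, not noumen's actual definitions):

```typescript
// Illustrative sketch of a sandbox contract: file I/O and shell execution
// behind one object. These types are hypothetical, not noumen's API.
interface FileOps {
  readFile(path: string): Promise<string>;
  writeFile(path: string, content: string): Promise<void>;
}

interface ShellOps {
  exec(command: string): Promise<{ stdout: string; exitCode: number }>;
}

type SandboxLike = FileOps & ShellOps;

// A toy in-memory backend, handy for unit-testing agent tooling.
function createMemorySandbox(): SandboxLike {
  const files = new Map<string, string>();
  return {
    async readFile(path) {
      const content = files.get(path);
      if (content === undefined) throw new Error(`ENOENT: ${path}`);
      return content;
    },
    async writeFile(path, content) {
      files.set(path, content);
    },
    async exec(command) {
      // Only echoes the command back; a real backend would run a shell.
      return { stdout: `ran: ${command}`, exitCode: 0 };
    },
  };
}
```

Because every backend satisfies the same contract, swapping LocalSandbox for a remote one is a one-line change in your agent setup.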
Create an Agent
The Agent class wires everything together:
```ts
import { Agent } from "noumen";

const code = new Agent({
  provider,
  sandbox,
  options: {
    sessionDir: ".noumen/sessions",
    model: "gpt-5",
    maxTokens: 8192,
    autoCompact: true,
    projectContext: true,
  },
});
```

Run a prompt
Create a thread and iterate over the stream events:
```ts
const thread = code.createThread();

for await (const event of thread.run("Fix the failing test in utils.test.ts")) {
  switch (event.type) {
    case "text_delta":
      process.stdout.write(event.text);
      break;
    case "tool_use_start":
      console.log(`\nUsing tool: ${event.toolName}`);
      break;
    case "tool_result":
      console.log(`Result: ${event.result.content.slice(0, 100)}`);
      break;
    case "message_complete":
      console.log("\n--- Done ---");
      break;
    case "turn_complete":
      console.log(`Tokens used: ${event.usage.total_tokens}`);
      break;
  }
}
```

Full example
```ts
import { Agent, AiSdkProvider } from "noumen";
import { LocalSandbox } from "noumen/local";
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY! });

const code = new Agent({
  provider: new AiSdkProvider({ model: openai.chat("gpt-5") }),
  sandbox: LocalSandbox({ cwd: process.cwd() }),
  options: {
    sessionDir: ".noumen/sessions",
    autoCompact: true,
  },
});

const thread = code.createThread();
for await (const event of thread.run("Add input validation to the signup handler")) {
  if (event.type === "text_delta") process.stdout.write(event.text);
  if (event.type === "tool_use_start") console.log(`\n[${event.toolName}]`);
}
```

Quick start with presets
For the fastest setup, use a preset instead of configuring Agent manually:
```ts
import { codingAgent, AiSdkProvider } from "noumen";
import { LocalSandbox } from "noumen/local";
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY! });

const code = codingAgent({
  provider: new AiSdkProvider({ model: openai.chat("gpt-5") }),
  cwd: process.cwd(),
  sandbox: LocalSandbox({ cwd: process.cwd() }),
});

await code.init();

const thread = code.createThread();
for await (const event of thread.run("Add input validation to the signup handler")) {
  if (event.type === "text_delta") process.stdout.write(event.text);
}

await code.close();
```

Presets configure sensible defaults: codingAgent enables subagents, tasks, plan mode, auto-compact, retry, cost tracking, and project context. See also planningAgent (read-only) and reviewAgent (read-only + web search). Every preset requires an explicit sandbox — pick a backend from its subpath.
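Conceptually, a preset layers a bundle of defaults under whatever options you pass in, with your values winning on conflict. The sketch below illustrates that merge pattern with option names taken from this guide; the factory itself is hypothetical, not noumen's implementation:

```typescript
// Hypothetical sketch of preset behavior: spread the preset's defaults first,
// then the user's options, so user-supplied values override the preset.
interface AgentOptions {
  autoCompact?: boolean;
  projectContext?: boolean;
  planMode?: boolean;
}

const codingDefaults: AgentOptions = {
  autoCompact: true,
  projectContext: true,
  planMode: true,
};

function withCodingDefaults(user: AgentOptions = {}): AgentOptions {
  // Later spreads win, so explicit user settings override the preset.
  return { ...codingDefaults, ...user };
}
```

This is why you can start from a preset and still switch off any single behavior you don't want.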
Shortcut: LocalAgent / UnsandboxedAgent
When you're wiring an Agent to a local sandbox anyway, the noumen/local and noumen/unsandboxed subpaths each ship a thin factory that bundles the two steps:
```ts
import { LocalAgent } from "noumen/local";

const code = LocalAgent({
  provider: "anthropic",
  cwd: process.cwd(),
  options: { autoCompact: true, projectContext: true },
});
```

Equivalent to new Agent({ ..., sandbox: LocalSandbox({ cwd }) }). Use UnsandboxedAgent from noumen/unsandboxed for raw host access, or stick with the explicit new Agent({ provider, sandbox }) form when you need to share a sandbox across multiple agents or use a remote backend.
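The sandbox-sharing trade-off comes down to object identity: a thin factory constructs a fresh sandbox per call, while the explicit form lets you hand the same instance to several agents. The stand-in types below sketch that distinction and are not noumen's real classes:

```typescript
// Stand-in types sketching why the explicit form allows sandbox sharing.
// SandboxStub and AgentStub are hypothetical, not noumen's actual classes.
interface SandboxStub { cwd: string }
interface AgentStub { sandbox: SandboxStub }

function createAgent(sandbox: SandboxStub): AgentStub {
  return { sandbox };
}

// A thin factory (in the spirit of LocalAgent) builds a new sandbox per call...
function localAgentLike(cwd: string): AgentStub {
  return createAgent({ cwd });
}

// ...whereas the explicit form can pass one instance to several agents.
const shared: SandboxStub = { cwd: "/repo" };
const reviewer = createAgent(shared);
const fixer = createAgent(shared);
```

When two agents share one sandbox, edits made by one are immediately visible to the other, which is exactly what you want for reviewer/fixer pairings.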
What's next
- Embedding -- integrate noumen into Next.js, Electron, VS Code, or any application
- Providers -- detailed configuration for each AI provider
- Tools -- the built-in coding tools and how to add your own
- Hooks -- 18 events to intercept and customize the agent lifecycle
- Project Context -- configure NOUMEN.md / CLAUDE.md for project-level instructions
- Stream Events -- every event type your app can handle