CLI
Run noumen as a terminal coding agent with any provider. Interactive REPL, one-shot mode, JSONL output, and session management.
noumen ships a CLI that wraps the library for direct terminal use. It auto-detects your provider from environment variables, loads .noumen/config.json for project settings, and reads NOUMEN.md files for project context.
Quick start
```bash
# Set an API key
export ANTHROPIC_API_KEY=sk-ant-...

# Interactive mode
npx noumen

# One-shot
npx noumen "Add error handling to server.ts"

# Explicit provider and model
npx noumen -p openai -m gpt-4o "Write tests for utils.ts"
```

Setup
Run noumen init to create a config file interactively:
```bash
npx noumen init
```

This creates .noumen/config.json and, optionally, a NOUMEN.md file in your project root.
Config file
Place .noumen/config.json at your project root (or any ancestor directory). The CLI walks up from the working directory to find it.
```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4",
  "permissions": "acceptEdits",
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  },
  "lsp": {
    "typescript": {
      "command": "typescript-language-server",
      "args": ["--stdio"],
      "fileExtensions": [".ts", ".tsx"]
    }
  }
}
```

All fields are optional. The full set of config keys:
| Key | Type | Description |
|---|---|---|
| `provider` | string | Provider name |
| `model` | string | Model identifier |
| `apiKey` | string | API key (prefer env vars instead) |
| `permissions` | string | Permission mode |
| `thinking` | string | off, low, medium, high |
| `sandbox` | string | local, docker, e2b, sprites |
| `mcpServers` | object | MCP server configs (same as AgentOptions) |
| `lsp` | object | LSP server configs |
| `hooks` | array | Hook definitions |
| `autoCompact` | boolean | Enable auto-compaction (default: true) |
| `enableSubagents` | boolean | Enable subagent tool |
| `enableTasks` | boolean | Enable task management tools |
| `enablePlanMode` | boolean | Enable plan mode tools |
| `webSearch` | object | Web search config |
| `maxTurns` | number | Default max agent turns |
| `systemPrompt` | string | Custom system prompt |
| `sessionDir` | string | Session storage directory |
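The upward config search described at the start of this section can be sketched as follows. This is an illustrative sketch of the lookup behavior, not noumen's actual implementation:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Walk up from `startDir` until a .noumen/config.json is found or the
// filesystem root is reached. Returns the config path, or null if absent.
function findConfig(startDir: string): string | null {
  let dir = path.resolve(startDir);
  for (;;) {
    const candidate = path.join(dir, ".noumen", "config.json");
    if (fs.existsSync(candidate)) return candidate;
    const parent = path.dirname(dir);
    if (parent === dir) return null; // hit the filesystem root
    dir = parent;
  }
}
```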
Flags
```text
Usage: noumen [options] [prompt...]

Options:
  -p, --provider <name>             openai | anthropic | gemini | openrouter | bedrock | vertex | ollama
  -m, --model <model>               Model name
  --api-key <key>                   API key (overrides env vars)
  --base-url <url>                  Override provider base URL
  --cwd <dir>                       Working directory
  --permission <mode>               Permission mode (default, plan, acceptEdits, auto, bypassPermissions, dontAsk)
  --thinking <level>                off | low | medium | high
  --max-turns <n>                   Max agent turns
  --no-sandbox                      Disable OS-level sandboxing (use UnsandboxedLocal)
  --sandbox-allow-write <paths>     Comma-separated paths allowed for writes in addition to cwd (LocalSandbox)
  --sandbox-allow-domain <domains>  Comma-separated domains allowed for network access in the sandbox
  --json                            Emit JSONL stream events to stdout
  --quiet                           Only output final text
  --verbose                         Show tool calls and thinking
  --headless                        NDJSON stdin/stdout protocol for programmatic control
  -c, --prompt <text>               One-shot prompt (non-interactive)

Commands:
  init                 Create .noumen/config.json
  sessions             List past sessions
  resume <session-id>  Resume a previous session
  doctor               Run health checks on provider, sandbox, MCP, and LSP
```

Flags override config file values.
API key resolution
The CLI resolves API keys in this order:
1. `--api-key` flag
2. Provider-specific env var: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, `OPENROUTER_API_KEY`
3. `NOUMEN_API_KEY` (generic fallback)
4. `apiKey` in `.noumen/config.json`
If no --provider is specified, the CLI auto-detects from whichever env var is set. Ollama, Bedrock, and Vertex do not require an API key.
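The precedence above can be sketched as a small resolver. The env-var names come from this page; the function itself is illustrative, not noumen's code:

```typescript
const PROVIDER_ENV_VARS: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  gemini: "GEMINI_API_KEY",
  openrouter: "OPENROUTER_API_KEY",
};

function resolveApiKey(
  provider: string,
  flag: string | undefined,
  env: Record<string, string | undefined>,
  config: { apiKey?: string } = {},
): string | undefined {
  if (flag) return flag;                              // 1. --api-key flag
  const varName = PROVIDER_ENV_VARS[provider];
  if (varName && env[varName]) return env[varName];   // 2. provider-specific env var
  if (env.NOUMEN_API_KEY) return env.NOUMEN_API_KEY;  // 3. generic fallback
  return config.apiKey;                               // 4. config file
}
```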
Provider auto-detection
When no provider is specified, the CLI checks environment variables in order, then probes for a local Ollama server:
1. `ANTHROPIC_API_KEY` → anthropic
2. `OPENAI_API_KEY` → openai
3. `GEMINI_API_KEY` → gemini
4. `OPENROUTER_API_KEY` → openrouter
5. `AWS_ACCESS_KEY_ID` or `AWS_PROFILE` → bedrock
6. `GOOGLE_APPLICATION_CREDENTIALS` or `GCLOUD_PROJECT` → vertex
7. `OLLAMA_HOST` → ollama
8. Probe `http://localhost:11434` → ollama (if reachable)
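A sketch of that detection order; the final localhost probe involves network I/O and is only noted in a comment here, and the function is illustrative rather than noumen's actual code:

```typescript
// Check env vars in the documented order; first match wins.
function detectProvider(env: Record<string, string | undefined>): string | null {
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.GEMINI_API_KEY) return "gemini";
  if (env.OPENROUTER_API_KEY) return "openrouter";
  if (env.AWS_ACCESS_KEY_ID || env.AWS_PROFILE) return "bedrock";
  if (env.GOOGLE_APPLICATION_CREDENTIALS || env.GCLOUD_PROJECT) return "vertex";
  if (env.OLLAMA_HOST) return "ollama";
  return null; // caller would fall through to probing http://localhost:11434
}
```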
Interactive mode
When run without a prompt, the CLI enters an interactive REPL. The conversation persists across messages within the same session.
REPL commands
| Command | Description |
|---|---|
| `/quit`, `/exit` | Exit |
| `/new` | Start a new conversation |
| `/session` | Show current session ID |
| `/sessions` | List saved sessions |
| `/cost` | Show token usage and cost |
| `/verbose` | Toggle verbose output |
| `/help` | Show available commands |
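A REPL loop dispatching these commands might look like the following sketch (hypothetical structure, not noumen's internals):

```typescript
type Handler = () => string;

// Slash commands map to local handlers; everything else is a prompt.
const commands = new Map<string, Handler>([
  ["/quit", () => "exit"],
  ["/exit", () => "exit"],
  ["/new", () => "new conversation started"],
  ["/help", () => "commands: /quit /exit /new /session /sessions /cost /verbose /help"],
]);

function dispatch(line: string): string {
  const [cmd = ""] = line.trim().split(/\s+/);
  const handler = commands.get(cmd);
  // Anything that is not a slash command goes to the agent as a prompt.
  return handler ? handler() : `prompt: ${line}`;
}
```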
One-shot mode
Pass a prompt as an argument or with -c:
```bash
npx noumen "Refactor the auth module"
npx noumen -c "List all exported functions in src/"
```

Or pipe from stdin:

```bash
cat requirements.md | npx noumen -p anthropic
echo "Fix the failing test" | npx noumen
```

JSONL output
Use --json to emit every stream event as a JSON line to stdout. This is useful for piping to other tools or building custom UIs:
```bash
npx noumen --json -c "Describe the project structure" | jq '.type'
```

Use --quiet to suppress everything except the final text output:

```bash
RESULT=$(npx noumen --quiet -c "What does server.ts do?")
```

Headless mode
Use --headless for programmatic subprocess control. The agent communicates via bidirectional NDJSON over stdin/stdout:
```bash
npx noumen --headless -p anthropic -m claude-sonnet-4
```

The process emits {"type":"ready"} when initialized. Send commands as JSON lines on stdin:

```json
{"type":"prompt","text":"Fix the failing test"}
```

Receive events as JSON lines on stdout:

```json
{"type":"session_created","sessionId":"abc-123"}
{"type":"text_delta","text":"Looking at the test...","sessionId":"abc-123"}
{"type":"session_done","sessionId":"abc-123"}
```

This works from any language that can spawn a subprocess. See the Server API Reference for the full protocol and examples in Node.js and Python.
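Both --json and --headless emit newline-delimited JSON, so a consumer only needs a line-by-line JSON parser. A sketch, with event shapes taken from the examples above (the collector itself is illustrative, not part of noumen):

```typescript
interface StreamEvent {
  type: string;
  sessionId?: string;
  text?: string;
}

// Accumulate assistant text from an array of raw NDJSON lines.
function collectText(lines: string[]): string {
  let out = "";
  for (const line of lines) {
    if (!line.trim()) continue; // skip blank lines
    const event = JSON.parse(line) as StreamEvent;
    if (event.type === "text_delta" && event.text) out += event.text;
  }
  return out;
}
```

In practice you would feed `collectText` lines read from the child process's stdout, e.g. via Node's readline interface over a spawned `npx noumen --headless` process.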
Session management
Sessions are saved as JSONL files in .noumen/sessions/ by default.
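Prefix matching for resume could be implemented along these lines, assuming one `<id>.jsonl` file per session; this is an illustrative sketch, not noumen's code:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Resolve a partial session ID against the JSONL files in the session
// directory. Returns the full ID on a unique match, else null.
function resolveSession(sessionDir: string, prefix: string): string | null {
  const ids = fs
    .readdirSync(sessionDir)
    .filter((f) => f.endsWith(".jsonl"))
    .map((f) => path.basename(f, ".jsonl"));
  const matches = ids.filter((id) => id.startsWith(prefix));
  const match = matches.length === 1 ? matches[0] : undefined; // ambiguous or missing
  return match ?? null;
}
```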
```bash
# List sessions
npx noumen sessions

# Resume by ID (prefix match supported)
npx noumen resume a1b2c3d4
```

Health checks
Run noumen doctor to verify your provider, sandbox, MCP servers, and LSP servers are all working:
```bash
npx noumen doctor
```

Example output:

```text
noumen doctor
✓ Provider (claude-sonnet-4) (342ms)
✓ Sandbox: filesystem (2ms)
✓ Sandbox: shell (45ms)
✓ MCP: filesystem (120ms) connected, 5 tools
✓ LSP: typescript running

Overall: healthy
```

The command exits with code 0 when all core checks pass, or 1 otherwise. Useful in CI pipelines or setup scripts.
The same check is available programmatically via code.diagnose() — see Embedding for details.
Project context
The CLI automatically loads NOUMEN.md and CLAUDE.md files following the hierarchical convention:
- `~/.noumen/NOUMEN.md` — global user instructions
- `./NOUMEN.md` — project root
- `./.noumen/rules/*.md` — scoped rules
- `./NOUMEN.local.md` — gitignored local overrides
See Project Context for details.
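One plausible way such a hierarchy gets combined (a hypothetical sketch, not noumen's documented behavior) is concatenation from most global to most local, so later files can refine earlier ones:

```typescript
// Join context files in priority order; each file's origin is noted so the
// model can tell global instructions from local overrides.
function mergeContext(files: { path: string; content: string }[]): string {
  return files
    .map((f) => `# From ${f.path}\n${f.content.trim()}`)
    .join("\n\n");
}
```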
Thinking levels
The --thinking flag maps to provider-specific extended thinking:
| Level | Budget tokens |
|---|---|
| off | Disabled |
| low | 1,024 |
| medium | 10,240 |
| high | 32,768 |
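The same mapping as a lookup, using the budget values from the table (the helper itself is illustrative):

```typescript
// Budget tokens per thinking level; null means extended thinking is disabled.
const THINKING_BUDGETS = {
  off: null,
  low: 1_024,
  medium: 10_240,
  high: 32_768,
} as const;

type ThinkingLevel = keyof typeof THINKING_BUDGETS;

function thinkingBudget(level: ThinkingLevel): number | null {
  return THINKING_BUDGETS[level];
}
```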
Relation to the library
The CLI is a thin wrapper (~500 lines) over the Agent and Thread APIs. Every config option maps directly to AgentOptions. If you need more control, use the library directly:
```typescript
import { Agent, AiSdkProvider } from "noumen";
import { LocalSandbox } from "noumen/local";
import { createAnthropic } from "@ai-sdk/anthropic";

const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });

const code = new Agent({
  provider: new AiSdkProvider({
    model: anthropic("claude-opus-4.6"),
    providerFamily: "anthropic",
    cacheConfig: { enabled: true },
  }),
  sandbox: LocalSandbox({ cwd: process.cwd() }),
  options: { projectContext: true, costTracking: { enabled: true } },
});
```