CLI

Run noumen as a terminal coding agent with any provider. Interactive REPL, one-shot mode, JSONL output, and session management.

noumen ships a CLI that wraps the library for direct terminal use. It auto-detects your provider from environment variables, loads .noumen/config.json for project settings, and reads NOUMEN.md files for project context.

Quick start

# Set an API key
export ANTHROPIC_API_KEY=sk-ant-...

# Interactive mode
npx noumen

# One-shot
npx noumen "Add error handling to server.ts"

# Explicit provider and model
npx noumen -p openai -m gpt-4o "Write tests for utils.ts"

Setup

Run noumen init to create a config file interactively:

npx noumen init

This creates .noumen/config.json and optionally a NOUMEN.md file in your project root.

Config file

Place .noumen/config.json at your project root (or any ancestor directory). The CLI walks up from the working directory to find it.

{
  "provider": "anthropic",
  "model": "claude-sonnet-4",
  "permissions": "acceptEdits",
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  },
  "lsp": {
    "typescript": {
      "command": "typescript-language-server",
      "args": ["--stdio"],
      "fileExtensions": [".ts", ".tsx"]
    }
  }
}

All fields are optional. The full set of config keys:

  Key              Type     Description
  provider         string   Provider name
  model            string   Model identifier
  apiKey           string   API key (prefer env vars instead)
  permissions      string   Permission mode
  thinking         string   off, low, medium, high
  sandbox          string   local, docker, e2b, sprites
  mcpServers       object   MCP server configs (same as AgentOptions)
  lsp              object   LSP server configs
  hooks            array    Hook definitions
  autoCompact      boolean  Enable auto-compaction (default: true)
  enableSubagents  boolean  Enable subagent tool
  enableTasks      boolean  Enable task management tools
  enablePlanMode   boolean  Enable plan mode tools
  webSearch        object   Web search config
  maxTurns         number   Default max agent turns
  systemPrompt     string   Custom system prompt
  sessionDir       string   Session storage directory

Flags

Usage: noumen [options] [prompt...]

Options:
  -p, --provider <name>             openai | anthropic | gemini | openrouter | bedrock | vertex | ollama
  -m, --model <model>               Model name
  --api-key <key>                   API key (overrides env vars)
  --base-url <url>                  Override provider base URL
  --cwd <dir>                       Working directory
  --permission <mode>               Permission mode (default, plan, acceptEdits, auto, bypassPermissions, dontAsk)
  --thinking <level>                off | low | medium | high
  --max-turns <n>                   Max agent turns
  --no-sandbox                      Disable OS-level sandboxing (use UnsandboxedLocal)
  --sandbox-allow-write <paths>     Comma-separated paths allowed for writes in addition to cwd (LocalSandbox)
  --sandbox-allow-domain <domains>  Comma-separated domains allowed for network access in the sandbox
  --json                            Emit JSONL stream events to stdout
  --quiet                           Only output final text
  --verbose                         Show tool calls and thinking
  --headless                        NDJSON stdin/stdout protocol for programmatic control
  -c, --prompt <text>               One-shot prompt (non-interactive)

Commands:
  init                     Create .noumen/config.json
  sessions                 List past sessions
  resume <session-id>      Resume a previous session
  doctor                   Run health checks on provider, sandbox, MCP, and LSP

Flags override config file values.
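The override behavior can be pictured as a shallow merge in which defined flag values win (illustrative only; the field names come from the config table above, and `mergeSettings` is a hypothetical helper, not a noumen export):

```typescript
// A subset of settings for illustration.
interface CliSettings {
  provider?: string;
  model?: string;
  permissions?: string;
}

// Start from the config file, then overwrite with any flag that was
// actually supplied (undefined flags leave the config value intact).
export function mergeSettings(config: CliSettings, flags: CliSettings): CliSettings {
  const merged: CliSettings = { ...config };
  for (const [key, value] of Object.entries(flags)) {
    if (value !== undefined) (merged as Record<string, unknown>)[key] = value;
  }
  return merged;
}
```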

API key resolution

The CLI resolves API keys in this order:

  1. --api-key flag
  2. Provider-specific env var: OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, OPENROUTER_API_KEY
  3. NOUMEN_API_KEY (generic fallback)
  4. apiKey in .noumen/config.json

If no --provider is specified, the CLI auto-detects from whichever env var is set. Ollama, Bedrock, and Vertex do not require an API key.
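The resolution order amounts to a chain of fallbacks, roughly as follows (a sketch; `resolveApiKey` and its signature are hypothetical, not part of the noumen API):

```typescript
// Provider-specific env var names, as listed above.
const PROVIDER_ENV: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  gemini: "GEMINI_API_KEY",
  openrouter: "OPENROUTER_API_KEY",
};

// Flag beats provider env var, which beats NOUMEN_API_KEY, which beats
// the config file. Returns undefined for keyless providers like ollama.
export function resolveApiKey(
  provider: string,
  flagKey: string | undefined,
  env: Record<string, string | undefined>,
  configKey: string | undefined,
): string | undefined {
  return (
    flagKey ??
    env[PROVIDER_ENV[provider] ?? ""] ??
    env["NOUMEN_API_KEY"] ??
    configKey
  );
}
```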

Provider auto-detection

When no provider is specified, the CLI checks environment variables in order, then probes for a local Ollama server:

  1. ANTHROPIC_API_KEY → anthropic
  2. OPENAI_API_KEY → openai
  3. GEMINI_API_KEY → gemini
  4. OPENROUTER_API_KEY → openrouter
  5. AWS_ACCESS_KEY_ID or AWS_PROFILE → bedrock
  6. GOOGLE_APPLICATION_CREDENTIALS or GCLOUD_PROJECT → vertex
  7. OLLAMA_HOST → ollama
  8. Probe http://localhost:11434 → ollama (if reachable)
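The env-var portion of this order is a straightforward cascade (a sketch; the final localhost Ollama probe is omitted because it requires a network call, and `detectProvider` is a hypothetical helper):

```typescript
// Check env vars in the documented order; return null to signal that the
// caller should fall through to probing http://localhost:11434.
export function detectProvider(env: Record<string, string | undefined>): string | null {
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.GEMINI_API_KEY) return "gemini";
  if (env.OPENROUTER_API_KEY) return "openrouter";
  if (env.AWS_ACCESS_KEY_ID || env.AWS_PROFILE) return "bedrock";
  if (env.GOOGLE_APPLICATION_CREDENTIALS || env.GCLOUD_PROJECT) return "vertex";
  if (env.OLLAMA_HOST) return "ollama";
  return null;
}
```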

Interactive mode

When run without a prompt, the CLI enters an interactive REPL. The conversation persists across messages within the same session.

REPL commands

  Command       Description
  /quit, /exit  Exit
  /new          Start a new conversation
  /session      Show current session ID
  /sessions     List saved sessions
  /cost         Show token usage and cost
  /verbose      Toggle verbose output
  /help         Show available commands

One-shot mode

Pass a prompt as an argument or with -c:

npx noumen "Refactor the auth module"
npx noumen -c "List all exported functions in src/"

Or pipe from stdin:

cat requirements.md | npx noumen -p anthropic
echo "Fix the failing test" | npx noumen

JSONL output

Use --json to emit every stream event as a JSON line to stdout. This is useful for piping to other tools or building custom UIs:

npx noumen --json -c "Describe the project structure" | jq '.type'

Use --quiet to suppress everything except the final text output:

RESULT=$(npx noumen --quiet -c "What does server.ts do?")
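A downstream consumer might reassemble the final text from the event stream like this (the event shapes follow the headless examples in this document and should be treated as illustrative):

```typescript
interface StreamEvent {
  type: string;
  text?: string;
}

// Parse each JSONL line, keep only text_delta events, and join their
// text fields into one string.
export function extractText(jsonlLines: string[]): string {
  return jsonlLines
    .map((line) => JSON.parse(line) as StreamEvent)
    .filter((ev) => ev.type === "text_delta" && typeof ev.text === "string")
    .map((ev) => ev.text)
    .join("");
}
```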

Headless mode

Use --headless for programmatic subprocess control. The agent communicates via bidirectional NDJSON over stdin/stdout:

npx noumen --headless -p anthropic -m claude-sonnet-4

The process emits {"type":"ready"} when initialized. Send commands as JSON lines on stdin:

{"type":"prompt","text":"Fix the failing test"}

Receive events as JSON lines on stdout:

{"type":"session_created","sessionId":"abc-123"}
{"type":"text_delta","text":"Looking at the test...","sessionId":"abc-123"}
{"type":"session_done","sessionId":"abc-123"}

This works from any language that can spawn a subprocess. See the Server API Reference for the full protocol and examples in Node.js and Python.
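When driving headless mode yourself, remember that stdout chunks can split JSON lines arbitrarily; a small line-buffering decoder handles the framing (a generic NDJSON sketch, independent of noumen):

```typescript
// Accumulate chunks and emit one parsed object per complete line,
// keeping any trailing partial line buffered until the next push.
export class NdjsonDecoder {
  private buffer = "";

  push(chunk: string): unknown[] {
    this.buffer += chunk;
    const lines = this.buffer.split("\n");
    this.buffer = lines.pop() ?? ""; // retain the trailing partial line
    return lines.filter((l) => l.trim() !== "").map((l) => JSON.parse(l));
  }
}
```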

Session management

Sessions are saved as JSONL files in .noumen/sessions/ by default.

# List sessions
npx noumen sessions

# Resume by ID (prefix match supported)
npx noumen resume a1b2c3d4
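Prefix matching can be implemented as a simple filter; this sketch raises on ambiguous prefixes, though the real CLI's behavior for ambiguity is not documented here:

```typescript
// Resolve a (possibly partial) session ID against the stored sessions.
export function matchSession(sessionIds: string[], prefix: string): string {
  const matches = sessionIds.filter((id) => id.startsWith(prefix));
  if (matches.length === 0) throw new Error(`no session matching "${prefix}"`);
  if (matches.length > 1) throw new Error(`ambiguous prefix "${prefix}"`);
  return matches[0];
}
```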

Health checks

Run noumen doctor to verify your provider, sandbox, MCP servers, and LSP servers are all working:

npx noumen doctor

Example output:

noumen doctor

  ✓ Provider (claude-sonnet-4)  (342ms)
  ✓ Sandbox: filesystem  (2ms)
  ✓ Sandbox: shell  (45ms)
  ✓ MCP: filesystem  (120ms) connected, 5 tools
  ✓ LSP: typescript  running

  Overall: healthy

The command exits with code 0 when all core checks pass, or 1 otherwise. Useful in CI pipelines or setup scripts.

The same check is available programmatically via code.diagnose() — see Embedding for details.

Project context

The CLI automatically loads NOUMEN.md and CLAUDE.md files following the hierarchical convention:

  • ~/.noumen/NOUMEN.md — global user instructions
  • ./NOUMEN.md — project root
  • ./.noumen/rules/*.md — scoped rules
  • ./NOUMEN.local.md — gitignored local overrides

See Project Context for details.

Thinking levels

The --thinking flag maps to provider-specific extended thinking:

  Level   Budget tokens
  off     Disabled
  low     1,024
  medium  10,240
  high    32,768
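The mapping is small enough to express as a lookup table (a sketch mirroring the table above; `thinkingBudget` is a hypothetical helper, not a noumen export):

```typescript
type ThinkingLevel = "off" | "low" | "medium" | "high";

// Budget tokens per level, with 0 meaning thinking is disabled.
const THINKING_BUDGET: Record<ThinkingLevel, number> = {
  off: 0,
  low: 1024,
  medium: 10240,
  high: 32768,
};

export function thinkingBudget(level: ThinkingLevel): number {
  return THINKING_BUDGET[level];
}
```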

Relation to the library

The CLI is a thin wrapper (~500 lines) over the Agent and Thread APIs. Every config option maps directly to AgentOptions. If you need more control, use the library directly:

import { Agent, AiSdkProvider } from "noumen";
import { LocalSandbox } from "noumen/local";
import { createAnthropic } from "@ai-sdk/anthropic";

const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });

const code = new Agent({
  provider: new AiSdkProvider({
    model: anthropic("claude-opus-4.6"),
    providerFamily: "anthropic",
    cacheConfig: { enabled: true },
  }),
  sandbox: LocalSandbox({ cwd: process.cwd() }),
  options: { projectContext: true, costTracking: { enabled: true } },
});