The CLI is designed for automation and scripting (CI/CD pipelines, batch processing,
programmatic control). For interactive terminal experiences, consider tools like Claude
Code or similar TUIs.
`mux run` executes a single request to completion and exits.
> **GitHub Actions Guide**: Learn how to use `mux run` in CI/CD pipelines.

## Installation

The CLI is available via npm and can be run directly with `npx`:
`npx mux` is especially convenient for CI/CD pipelines where you don't want to manage a global installation.
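Assuming the package is published under the name `mux` (the name used with `npx` above), typical invocations look like:

```shell
# One-off use without a global install (well suited to CI):
npx mux --version

# Or install globally for repeated local use:
npm install -g mux
```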
## mux run

Execute a one-off agent task:
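A minimal sketch, assuming the task prompt is passed as a positional argument (the exact argument shape is not documented above):

```shell
# Run a one-off task in the current directory with defaults:
npx mux run "fix the failing unit tests"

# Same task, with a pinned model and a plan-only pass:
npx mux run "fix the failing unit tests" -m anthropic:claude-sonnet-4-5 --mode plan
```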
### Options

| Option | Short | Description | Default |
|---|---|---|---|
| `--dir <path>` | `-d` | Project directory | Current directory |
| `--model <model>` | `-m` | Model to use (e.g., `anthropic:claude-sonnet-4-5`) | Default model |
| `--runtime <runtime>` | `-r` | Runtime: `local`, `worktree`, `ssh <host>`, or `docker <image>` | `local` |
| `--mode <mode>` | | Agent mode: `plan` or `exec` | `exec` |
| `--thinking <level>` | `-t` | Thinking level: `OFF`, `LOW`, `MED`, `HIGH`, `MAX`, or 0–9 (model-relative; see Models) | `MED` |
| `--budget <usd>` | `-b` | Stop when session cost exceeds budget (USD) | No limit |
| `--experiment <id>` | `-e` | Enable experiment (repeatable) | None |
| `--json` | | Output NDJSON for programmatic use | Off |
| `--quiet` | `-q` | Only output final result | Off |
### Runtimes

- `local` (default): Runs directly in the specified directory. Best for one-off tasks.
- `worktree`: Creates an isolated git worktree under `~/.mux/src`. Useful for parallel work.
- `ssh <host>`: Runs on a remote machine via SSH. Example: `--runtime "ssh user@myserver.com"`
- `docker <image>`: Runs in a Docker container. Example: `--runtime "docker node:20"`
### Output Modes

- Default (TTY): Human-readable streaming with tool call formatting
- `--json`: NDJSON streaming; each line is a JSON object with event data
- `--quiet`: Suppresses streaming output, only shows final assistant response
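As a sketch of programmatic consumption, each `--json` line can be parsed independently. The `type` field below is an assumed example field, not a documented event schema:

```shell
# Stream NDJSON events and print one field per event.
mux run --json "summarize this repo" | while IFS= read -r line; do
  printf '%s\n' "$line" | python3 -c 'import sys, json; print(json.load(sys.stdin).get("type"))'
done
```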
### Examples
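A few sketches combining the options documented above (the task-prompt syntax is an assumption):

```shell
# Isolated worktree run with a cost ceiling:
mux run "upgrade dependencies" --runtime worktree --budget 2.50

# Run inside a Docker container:
mux run "run the test suite" --runtime "docker node:20"

# Quiet mode for scripting: only the final response is printed:
mux run "generate a changelog entry" -q
```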
## mux server

Start the HTTP/WebSocket server for remote access (for example, from a phone or another machine):
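For example, using the flags listed below (the token value is a placeholder):

```shell
# Bind to all interfaces on a custom port with an explicit bearer token:
mux server --host 0.0.0.0 --port 8080 --auth-token "$MY_TOKEN"
```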
- `--host <host>`: Host/interface to bind to (default: `localhost`)
- `--port <port>`: Port to bind to (default: `3000`)
- `--auth-token <token>`: Bearer token for HTTP/WS auth
- `--no-auth`: Disable authentication entirely
- `--print-auth-token`: Always print the auth token on startup
- `--allow-http-origin`: Accept HTTPS browser origins when a TLS-terminating proxy forwards `X-Forwarded-Proto: http`
- `--ssh-host <host>`: SSH hostname/alias used for editor deep links in browser mode
- `--add-project <path>`: Add and open project at the specified path
Use `--allow-http-origin` only when HTTPS is terminated by an upstream reverse proxy and mux receives rewritten `X-Forwarded-Proto: http` headers. This compatibility mode is disabled by default. For non-CLI server starts (for example, desktop/browser mode), set `MUX_SERVER_ALLOW_HTTP_ORIGIN=1` to opt in.
Auth token precedence:

1. `--no-auth`
2. `--auth-token`
3. `MUX_SERVER_AUTH_TOKEN`
4. Auto-generated token
GitHub-based auth can be configured via `MUX_SERVER_AUTH_GITHUB_OWNER` or `serverAuthGithubOwner` in `~/.mux/config.json`. See Server Access.
## mux acp

Start the ACP (Agent-Client Protocol) stdio bridge used by editor integrations:
- `--server-url <url>`: URL of a running mux server
- `--auth-token <token>`: Optional bearer token for authenticated server connections
- `--log-file <path>`: Write ACP logs to a file instead of console stderr (helpful when your editor hides subprocess logs)
Environment variables:

- `MUX_SERVER_URL`: Same as `--server-url`
- `MUX_SERVER_AUTH_TOKEN`: Same as `--auth-token`
Use `MUX_SERVER_AUTH_TOKEN` for ACP auth; `MUX_AUTH_TOKEN` is not read by `mux acp`.
### Set up Zed

Add a custom agent server in Zed that runs the ACP subcommand. If `mux` is installed globally:
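A sketch of the corresponding Zed `settings.json` entry, assuming Zed's `agent_servers` configuration for custom agents; the entry name `mux` is arbitrary:

```json
{
  "agent_servers": {
    "mux": {
      "command": "mux",
      "args": ["acp"]
    }
  }
}
```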
### Connect Zed to a remote mux server

You can point ACP at a remote mux server either with flags or with environment variables (set on the `mux acp` process):
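A sketch using the flags and environment variables listed above (the server URL is a placeholder):

```shell
# With flags:
mux acp --server-url https://mux.example.com:3000 --auth-token "$TOKEN"

# Or with environment variables on the mux acp process:
MUX_SERVER_URL=https://mux.example.com:3000 MUX_SERVER_AUTH_TOKEN="$TOKEN" mux acp
```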
## mux desktop

Launch the desktop app. This is automatically invoked when running the packaged app or via `electron .`.
When running `mux` with no arguments under Electron, the desktop app launches automatically.
## mux --version

Print the version and git commit:
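For example, as a quick sanity check in a CI step (the exact output format is not shown here):

```shell
npx mux --version
```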
## Debug Environment Variables

These environment variables help diagnose issues with LLM requests and responses.

| Variable | Purpose |
|---|---|
| `MUX_DEBUG_LLM_REQUEST` | Set to `1` to log the complete LLM request (system prompt, messages, tools, provider options) as formatted JSON to the debug logs. Useful for diagnosing prompt issues. |
The logged request includes:

- `systemMessage`: The full system prompt sent to the model
- `messages`: All conversation messages in the request
- `tools`: Tool definitions with descriptions and input schemas
- `providerOptions`: Provider-specific options (thinking level, etc.)
- `mode`, `thinkingLevel`, `maxOutputTokens`, `toolPolicy`
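For example, to capture the full request for a single run (where the debug logs are written is not specified above):

```shell
MUX_DEBUG_LLM_REQUEST=1 mux run "why is the build failing?"
```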