Supported Providers
Anthropic (Cloud)
Best supported provider with full feature support:
- anthropic:claude-sonnet-4-5
- anthropic:claude-opus-4-5
- anthropic:claude-haiku-4-5
Configure via ~/.mux/providers.jsonc or environment variables:
- ANTHROPIC_API_KEY or ANTHROPIC_AUTH_TOKEN — API key (required if not in providers.jsonc)
- ANTHROPIC_BASE_URL — Custom base URL (optional)
The /v1 path suffix is normalized automatically, so you can omit it from base URLs.
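A minimal providers.jsonc sketch for Anthropic (the key names apiKey and baseUrl are assumptions here; check the schema your mux version expects):

```jsonc
{
  "anthropic": {
    // Assumed field name; an API key from console.anthropic.com
    "apiKey": "sk-ant-...",
    // Optional; the /v1 suffix may be omitted since it is normalized automatically
    "baseUrl": "https://api.anthropic.com"
  }
}
```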
OpenAI (Cloud)
GPT-5 family of models:
- openai:gpt-5.1
- openai:gpt-5-pro
- openai:gpt-5.1-codex
- openai:gpt-5.1-codex-mini
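OpenAI is configured the same way as the other cloud providers. A sketch, assuming the same apiKey field name as elsewhere in providers.jsonc:

```jsonc
{
  "openai": {
    // Assumed field name; an API key from platform.openai.com
    "apiKey": "sk-..."
  }
}
```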
Google (Cloud)
Access Gemini models directly via Google’s generative AI API:
- google:gemini-3-pro-preview
- google:gemini-2.5-pro
- google:gemini-2.5-flash
- Get your API key from Google AI Studio
- Add to ~/.mux/providers.jsonc:
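A sketch of the Google entry (field name assumed, matching the pattern used for other providers):

```jsonc
{
  "google": {
    // Assumed field name; paste the key from Google AI Studio
    "apiKey": "AIza..."
  }
}
```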
xAI (Grok)
Frontier reasoning models from xAI with built-in search orchestration:
- xai:grok-4-1-fast — Fast unified model (switches between reasoning/non-reasoning based on thinking toggle)
- xai:grok-code-fast-1 — Optimized for coding tasks
- Create an API key at console.x.ai
- Add to ~/.mux/providers.jsonc:
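A sketch of the xAI entry (field name assumed):

```jsonc
{
  "xai": {
    // Assumed field name; create the key at console.x.ai
    "apiKey": "xai-..."
  }
}
```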
Search defaults to mode: "auto" with citations. Add searchParameters to providers.jsonc if you want to customize the defaults (e.g., regional focus, time filters, or disabling search entirely per workspace).
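As a sketch, overriding the search defaults might look like this (the searchParameters block and its fields follow xAI's search API and are assumptions here):

```jsonc
{
  "xai": {
    "apiKey": "xai-...",
    "searchParameters": {
      // "auto" is the default; set to "off" to disable search entirely
      "mode": "auto"
    }
  }
}
```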
OpenRouter (Cloud)
Access 300+ models from multiple providers through a single API:
- openrouter:z-ai/glm-4.6
- openrouter:anthropic/claude-3.5-sonnet
- openrouter:google/gemini-2.0-flash-thinking-exp
- openrouter:deepseek/deepseek-chat
- openrouter:openai/gpt-4o
- Any model from OpenRouter Models
- Get your API key from openrouter.ai
- Add to ~/.mux/providers.jsonc:
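A sketch of the OpenRouter entry (field name assumed):

```jsonc
{
  "openrouter": {
    // Assumed field name; get the key from openrouter.ai
    "apiKey": "sk-or-..."
  }
}
```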
OpenRouter's provider routing can be customized in ~/.mux/providers.jsonc. Supported routing options:
- order: Array of provider names to try in priority order (e.g., ["Cerebras", "Fireworks"])
- allow_fallbacks: Boolean — whether to fall back to other providers (default: true)
- only: Array — restrict to only these providers
- ignore: Array — exclude specific providers
- require_parameters: Boolean — only use providers supporting all your request parameters
- data_collection: "allow" or "deny" — control whether providers can store/train on your data
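A sketch combining several of the routing options above (the nesting under a provider key is an assumption; the option names themselves come from OpenRouter's provider routing API):

```jsonc
{
  "openrouter": {
    "apiKey": "sk-or-...",
    // Assumed nesting; routing option names are OpenRouter's
    "provider": {
      "order": ["Cerebras", "Fireworks"],  // try these providers first
      "allow_fallbacks": true,             // fall back if they are unavailable
      "require_parameters": true,          // skip providers missing request features
      "data_collection": "deny"            // disallow storing/training on your data
    }
  }
}
```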
Reasoning effort can be set to one of four levels:
- Off: No extended reasoning
- Low: Quick reasoning for straightforward tasks
- Medium: Standard reasoning for moderate complexity (default)
- High: Deep reasoning for complex problems
This maps to OpenRouter's reasoning.effort parameter and works with any reasoning-capable model. See the OpenRouter Reasoning docs for details.
Ollama (Local)
Run models locally with Ollama. No API key required:
- ollama:gpt-oss:20b
- ollama:gpt-oss:120b
- ollama:qwen3-coder:30b
- Any model from the Ollama Library
- Install Ollama from ollama.com
- Pull a model: ollama pull gpt-oss:20b
- That’s it! Ollama works out-of-the-box with no configuration needed.
By default, mux connects to Ollama at http://localhost:11434/api. To use a remote instance or custom port, add to ~/.mux/providers.jsonc:
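A sketch of pointing mux at a remote Ollama instance (the baseUrl field name and host are illustrative):

```jsonc
{
  "ollama": {
    // Assumed field name; replace host/port with your Ollama server
    "baseUrl": "http://192.168.1.50:11434/api"
  }
}
```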
Amazon Bedrock (Cloud)
Access Anthropic Claude and other models through AWS Bedrock:
- bedrock:us.anthropic.claude-sonnet-4-20250514-v1:0
- bedrock:us.amazon.nova-pro-v1:0
Bedrock model IDs follow the pattern [region.]vendor.model-name-version. mux automatically parses these for display (e.g., us.anthropic.claude-sonnet-4-20250514-v1:0 displays as “Sonnet 4”).
Authentication Options:
Bedrock supports multiple authentication methods, tried in order:
- Bearer Token (simplest) — A single API key for Bedrock access
- Explicit Credentials — Access Key ID + Secret Access Key in config
- AWS Credential Chain — Automatic credential resolution (recommended for AWS environments)
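A sketch of the first two authentication options in providers.jsonc (all field names here are assumptions; for the credential chain, omit credentials entirely):

```jsonc
{
  "bedrock": {
    // Option 1: bearer token (simplest); field name assumed
    "bearerToken": "...",
    // Option 2: explicit credentials instead of a bearer token
    // "accessKeyId": "AKIA...",
    // "secretAccessKey": "...",
    "region": "us-east-1"
  }
}
```

If neither a bearer token nor explicit credentials is present, mux falls back to the AWS credential chain described below.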
The credential chain uses the AWS SDK's fromNodeProviderChain(), which automatically resolves credentials from (in order):
- Environment variables — AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
- Shared credentials file — ~/.aws/credentials (supports profiles via AWS_PROFILE)
- SSO credentials — AWS IAM Identity Center (configure with aws sso login)
- EC2 instance profile — Automatic on EC2 instances with IAM roles
- ECS task role — Automatic in ECS containers
- EKS Pod Identity / IRSA — Automatic in Kubernetes with IAM Roles for Service Accounts
The region is resolved from the AWS_REGION and AWS_DEFAULT_REGION environment variables, so standard AWS CLI configurations work automatically.
This means if you’re already authenticated with AWS CLI (aws sso login or configured credentials), mux will automatically use those credentials:
Your IAM identity needs the bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream permissions for the models you want to use.
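As a sketch, a minimal IAM policy granting those permissions might look like this (the wildcard Resource is illustrative; in production, scope it to the specific model or inference-profile ARNs you use):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```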
Provider Configuration
All providers are configured in ~/.mux/providers.jsonc. Example configurations:
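A combined sketch covering several providers at once (key names are illustrative; adjust to the schema your mux version expects):

```jsonc
{
  // One top-level entry per provider
  "anthropic": { "apiKey": "sk-ant-..." },
  "openai":    { "apiKey": "sk-..." },
  "openrouter": { "apiKey": "sk-or-..." },
  // Local providers need no API key; baseUrl is optional
  "ollama":    { "baseUrl": "http://localhost:11434/api" }
}
```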
Model Selection
The quickest way to switch models is with the keyboard shortcut:
- macOS: Cmd+/
- Windows/Linux: Ctrl+/
You can also switch models from the command palette (Cmd+Shift+P / Ctrl+Shift+P):
- Type “model”
- Select “Change Model”
- Choose from available models
Models are specified in the format provider:model-name.