Overview

After configuring your AI providers, choose which models you want to make available. cmd provides fine-grained control over:
  • Which models are available in the model selector
  • Which provider to use when a model is available from multiple sources
  • Default model preferences
You can configure models during onboarding or anytime through Settings > Models or Settings > AI Providers.

Enabling Models

  1. Open Model Settings: Go to Settings > Models.
  2. Browse Available Models: You'll see all models available from your configured providers.
  3. Enable Models: Toggle on the models you want to use. Enabled models appear in the model selector when using cmd.
  4. Set Preferences: If a model is available from multiple providers, select your preferred provider.
You need at least one provider configured before you can enable models. See AI Providers to set up providers.
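Conceptually, the enabled-model toggles act as a filter over everything your configured providers expose: the selector shows only the intersection of the two. A minimal sketch of that idea (the data shapes and names here are illustrative assumptions, not cmd's actual settings format):

```python
# Illustrative only: models exposed by each configured provider.
available = {
    "anthropic": ["claude-sonnet", "claude-haiku"],
    "openrouter": ["claude-sonnet", "gpt-4"],
}

# Toggles from Settings > Models.
enabled = {"claude-sonnet", "gpt-4"}

# The model selector lists only enabled models that some provider offers;
# a model available from several providers appears once.
selector = sorted(
    {m for models in available.values() for m in models} & enabled
)
print(selector)  # ['claude-sonnet', 'gpt-4']
```

Note that a model stays out of the selector if no configured provider offers it, even when its toggle is on.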

Managing Multiple Providers for the Same Model

Some models may be available from multiple providers. For example:
  • Claude Sonnet from both Anthropic and OpenRouter
  • GPT-4 from both OpenAI and Azure OpenAI
  • Popular models from both direct providers and aggregators

Setting Provider Preference

  1. Open Model Settings: Navigate to Settings > Models.
  2. Find the Model: Locate the model that's available from multiple providers.
  3. Select Provider: Choose which provider to use from the dropdown:
     • Anthropic
     • OpenRouter
     • OpenAI
     • etc.
  4. Save Changes: Your preference is saved automatically.
You might choose different providers based on:
  • Cost: Compare pricing across providers
  • Speed: Some providers may have faster response times
  • Reliability: Use a backup provider if your primary is having issues
  • Features: Some providers offer additional features
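The steps above boil down to a per-model provider preference with a fallback when the preferred provider is unavailable. A hedged sketch of that resolution logic, where the function and data names are assumptions for illustration, not cmd's API:

```python
# Illustrative: which configured providers offer each model.
providers_for = {
    "claude-sonnet": ["anthropic", "openrouter"],
    "gpt-4": ["openai", "azure-openai"],
}

# Per-model preference saved from the dropdown in Settings > Models.
preferred = {"claude-sonnet": "openrouter"}

def resolve_provider(model: str) -> str:
    """Use the preferred provider if set and still available;
    otherwise fall back to the first provider offering the model."""
    candidates = providers_for.get(model, [])
    if not candidates:
        raise ValueError(f"no configured provider offers {model!r}")
    choice = preferred.get(model)
    return choice if choice in candidates else candidates[0]

print(resolve_provider("claude-sonnet"))  # openrouter (explicit preference)
print(resolve_provider("gpt-4"))          # openai (no preference: first available)
```

The fallback branch is what makes a backup provider useful for reliability: if your preference is removed from the configured list, requests still route somewhere.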

Switching Models During Use

Once you’ve enabled models, you can switch between them easily in the chat interface:
  1. Click the model selector at the top of the chat
  2. Choose from your enabled models
  3. Continue your conversation with the new model
The model selector shows only the models you’ve enabled in settings.
If you change models mid-conversation, the next message will not use a cached prompt; the cache does not carry over to the new model.
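One way to picture that caching note: a prompt cache is typically keyed by the model serving the request as well as the prompt prefix, so switching models means the conversation prefix has to be cached again for the new model. A simplified sketch of this behavior, not cmd's internals:

```python
# Simplified: a prompt cache keyed by (model, prompt prefix).
cache: set[tuple[str, str]] = set()

def send(model: str, prompt_prefix: str) -> bool:
    """Return True if this request hits a previously cached prefix."""
    key = (model, prompt_prefix)
    hit = key in cache
    cache.add(key)  # the prefix is cached for this model after the call
    return hit

history = "system prompt + earlier turns"
send("claude-sonnet", history)                # first call: miss, prefix cached
hit_same = send("claude-sonnet", history)     # same model again: cache hit
hit_switched = send("gpt-4", history)         # model switched: miss, cache rebuilt
print(hit_same, hit_switched)  # True False
```

After that first uncached message on the new model, subsequent messages can hit the cache again as long as you stay on it.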