Overview
After configuring your AI providers, choose which models you want to enable. cmd provides fine-grained control over:
- Which models are available in the model selector
- Which provider to use when a model is available from multiple sources
- Default model preferences
- Local models from Ollama for private, offline inference
Enabling Models
Toggle on the models you want to use. Enabled models will appear in the model selector when using cmd.
You need at least one provider configured before you can enable models. See AI Providers for setup instructions.
Managing Multiple Providers for the Same Model
Some models may be available from multiple providers. For example:
- Claude Sonnet from both Anthropic and OpenRouter
- GPT-4 from both OpenAI and Azure OpenAI
- Popular models from both direct providers and aggregators
- Local models from Ollama vs. cloud providers
Setting Provider Preference
When a model is available from more than one provider, you can set which provider cmd uses for it.
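Conceptually, the preference works like a lookup: each model maps to an ordered list of providers, and cmd uses the first one you have configured. The sketch below is purely illustrative; the model names, provider identifiers, and resolution logic are assumptions made for the example, not cmd's actual internals.

```python
# Illustrative sketch only: model and provider names are hypothetical,
# and cmd's real preference resolution is internal to the app.

# Ordered provider preference per model: the first configured provider wins.
PROVIDER_PREFERENCE = {
    "claude-sonnet": ["anthropic", "openrouter"],
    "gpt-4": ["openai", "azure-openai"],
}

def resolve_provider(model: str, configured_providers: set) -> str | None:
    """Return the preferred provider for a model among those configured."""
    for provider in PROVIDER_PREFERENCE.get(model, []):
        if provider in configured_providers:
            return provider
    return None

# If only OpenRouter is configured, Claude Sonnet resolves to it.
print(resolve_provider("claude-sonnet", {"openrouter"}))  # openrouter
```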
Switching Models During Use
Once you’ve enabled models, you can switch between them easily in the chat interface:
- Click the model selector at the top of the chat
- Choose from your enabled models
- Continue your conversation with the new model
Local Models
For complete privacy and offline development, you can use local models through Ollama. Once you’ve configured the Ollama provider, all models installed in your Ollama instance will automatically appear in the model list.
Refresh Model List
If you’ve added new models to Ollama, refresh cmd’s model list by disabling and re-enabling the Ollama provider in Settings > AI Providers.
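If you want to confirm which models your Ollama instance exposes before refreshing, you can query Ollama's local HTTP API directly. The endpoint below is standard Ollama behavior (its list-models endpoint on the default port) and is independent of cmd:

```python
# List the models installed in a local Ollama instance via its HTTP API.
# Assumes Ollama is running locally on its default port (11434).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    data = json.load(resp)

# Each installed model is one cmd can surface in its model list.
for model in data["models"]:
    print(model["name"])
```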

