Model providers
This page covers LLM/model providers (not chat channels like WhatsApp/Telegram). For model selection rules, see /concepts/models.

Quick rules
- Model refs use `provider/model` (example: `ollama/qwen3-coder:32b`).
- If you set `agents.defaults.models`, it becomes the allowlist (a sketch follows this list).
- CLI helpers: `datzi onboard`, `datzi models list`, `datzi models set <provider/model>`.
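As a minimal sketch of the allowlist, assuming a JSONC-style config file (the surrounding file layout is an assumption; only the `agents.defaults.models` key and the `provider/model` ref format come from the rules above):

```jsonc
{
  "agents": {
    "defaults": {
      // Once this list is set, only these refs are selectable.
      "models": [
        "ollama/qwen3-coder:14b",
        "ollama/qwen3-coder:32b"
      ]
    }
  }
}
```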
Built-in providers
Datzi runs on Ollama (`qwen3-coder:14b`). No external API keys required.
Providers via models.providers (custom/base URL)
Additional OpenAI-compatible providers, or custom base URLs for existing ones, are configured under `models.providers`.
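As a rough sketch of the shape (the `baseUrl` key and the provider id `myproxy` are assumptions for illustration; only the `models.providers` path is documented here):

```jsonc
{
  "models": {
    "providers": {
      "myproxy": {
        // Assumed endpoint key; point it at any OpenAI-compatible server.
        "baseUrl": "http://127.0.0.1:4000/v1",
        "models": [
          // Referenced elsewhere as myproxy/<model-id>.
          { "id": "qwen3-coder-32b" }
        ]
      }
    }
  }
}
```

A fuller entry with explicit model metadata appears under Local proxies below.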
Ollama (recommended)
Ollama is a local LLM runtime that provides an OpenAI-compatible API:
- Provider: `ollama`
- Auth: None required (local server)
- Example model: `ollama/qwen3-coder:14b`
- Installation: https://ollama.ai
- Default base URL: `http://127.0.0.1:11434/v1`
See Local models for model recommendations and custom configuration.
vLLM
vLLM is a local (or self-hosted) OpenAI-compatible server:
- Provider: `vllm`
- Auth: Optional (depends on your server)
- Default base URL: `http://127.0.0.1:8000/v1`
- Model discovery: via the OpenAI-compatible listing endpoint (`/v1/models`); a sample response is shown below.
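For reference, an OpenAI-compatible `/v1/models` response looks roughly like this (the model id and `owned_by` values are illustrative):

```jsonc
// GET http://127.0.0.1:8000/v1/models
{
  "object": "list",
  "data": [
    {
      "id": "qwen3-coder-32b", // the name the server serves the model under
      "object": "model",
      "owned_by": "vllm"
    }
  ]
}
```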
Local proxies (LM Studio, vLLM, LiteLLM, etc.)
- For custom providers, `reasoning`, `input`, `cost`, `contextWindow`, and `maxTokens` are optional. When omitted, Datzi defaults to:
  - `reasoning: false`
  - `input: ["text"]`
  - `cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 }`
  - `contextWindow: 200000`
  - `maxTokens: 8192`
- Recommended: set explicit values that match your proxy/model limits; an OpenAI-compatible example is shown below.
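A sketch of such an entry, assuming the config shape from the `models.providers` section above (the provider id `lmstudio` and the `baseUrl` key are illustrative; the five model fields are the documented optional ones):

```jsonc
{
  "models": {
    "providers": {
      "lmstudio": {
        // Illustrative endpoint; LM Studio's local server defaults to port 1234.
        "baseUrl": "http://127.0.0.1:1234/v1",
        "models": [
          {
            "id": "qwen3-coder-32b",
            "reasoning": false,
            "input": ["text"],
            // Local proxy: no per-token cost.
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            // Set these to what your proxy/model actually supports.
            "contextWindow": 131072,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```

Setting `contextWindow` and `maxTokens` explicitly matters most, since the defaults (200000 / 8192) can exceed what a local model actually supports.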
