Source: https://docs.datzi.ai/help/faq

FAQ

Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS, multi-agent, OAuth/API keys, model failover). For runtime diagnostics, see Troubleshooting. For the full config reference, see Configuration.


First 60 seconds if something’s broken

  1. Quick status (first check)
    datzi status
    
    Fast local summary: OS + update status, gateway/service reachability, agents/sessions, and provider config + runtime issues (when the gateway is reachable).
  2. Pasteable report (safe to share)
    datzi status --all
    
    Read-only diagnosis with log tail (tokens redacted).
  3. Daemon + port state
    datzi gateway status
    
    Shows supervisor runtime vs RPC reachability, the probe target URL, and which config the service likely used.
  4. Deep probes
    datzi status --deep
    
    Runs gateway health checks + provider probes (requires a reachable gateway). See Health.
  5. Tail the latest log
    datzi logs --follow
    
    If RPC is down, fall back to:
    tail -f "$(ls -t /tmp/datzi/datzi-*.log | head -1)"
    
    File logs are separate from service logs; see Logging and Troubleshooting.
  6. Run the doctor (repairs)
    datzi doctor
    
    Repairs/migrates config/state + runs health checks. See Doctor.
  7. Gateway snapshot
    datzi health --json
    datzi health --verbose   # shows the target URL + config path on errors
    
    Asks the running gateway for a full snapshot (WS-only). See Health.
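
If RPC is down entirely, the fallback in step 5 can be wrapped in a small helper. A sketch (assumes the default /tmp/datzi log directory from step 5; the function name is illustrative):

```shell
# Locate the newest Datzi file log without erroring when none exist yet.
latest_datzi_log() {
  dir="${1:-/tmp/datzi}"
  ls -t "$dir"/datzi-*.log 2>/dev/null | head -1
}

# Usage: tail -f "$(latest_datzi_log)"
```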

Quick start and first-run setup

I’m stuck, what’s the fastest way to get unstuck

Use a local AI agent that can see your machine. That is far more effective than asking in Discord, because most “I’m stuck” cases are local config or environment issues that remote helpers cannot inspect. These tools can read the repo, run commands, inspect logs, and help fix your machine-level setup (PATH, services, permissions, auth files). Give them the full source checkout via the hackable (git) install:
curl -fsSL https://datzi.ai/install.sh | bash -s -- --install-method git
This installs Datzi from a git checkout, so the agent can read the code + docs and reason about the exact version you are running. You can always switch back to stable later by re-running the installer without --install-method git.
Tip: ask the agent to plan and supervise the fix (step-by-step), then execute only the necessary commands. That keeps changes small and easier to audit.
If you discover a real bug or fix, please file a GitHub issue or send a PR:
  • https://github.com/datzi/datzi/issues
  • https://github.com/datzi/datzi/pulls
Start with these commands (share outputs when asking for help):
datzi status
datzi models status
datzi doctor
What they do:
  • datzi status: quick snapshot of gateway/agent health + basic config.
  • datzi models status: checks provider auth + model availability.
  • datzi doctor: validates and repairs common config/state issues.
Other useful CLI checks: datzi status --all, datzi logs --follow, datzi gateway status, datzi health --verbose. Quick debug loop: First 60 seconds if something’s broken. Install docs: Install, Installer flags, Updating. The repo recommends running from source and using the onboarding wizard:
curl -fsSL https://datzi.ai/install.sh | bash
datzi onboard --install-daemon
The wizard can also build UI assets automatically. After onboarding, you typically run the Gateway on port 18789. From source (contributors/dev):
git clone https://github.com/datzi/datzi.git
cd datzi
pnpm install
pnpm build
pnpm ui:build # auto-installs UI deps on first run
datzi onboard
If you don’t have a global install yet, run it via pnpm datzi onboard.

How do I open the dashboard after onboarding

The wizard opens your browser with a clean (non-tokenized) dashboard URL right after onboarding and also prints the link in the summary. Keep that tab open; if it didn’t launch, copy/paste the printed URL on the same machine.

How do I authenticate the dashboard token on localhost vs remote

Localhost (same machine):
  • Open http://127.0.0.1:18789/.
  • If it asks for auth, paste the token from gateway.auth.token (or DATZI_GATEWAY_TOKEN) into Control UI settings.
  • Retrieve it from the gateway host: datzi config get gateway.auth.token (or generate one: datzi doctor --generate-gateway-token).
Not on localhost:
  • Tailscale Serve (recommended): keep bind loopback, run datzi gateway --tailscale serve, open https://<magicdns>/. If gateway.auth.allowTailscale is true, identity headers satisfy Control UI/WebSocket auth (no token, assumes trusted gateway host); HTTP APIs still require token/password.
  • Tailnet bind: run datzi gateway --bind tailnet --token "<token>", open http://<tailscale-ip>:18789/, paste token in dashboard settings.
  • SSH tunnel: ssh -N -L 18789:127.0.0.1:18789 user@host then open http://127.0.0.1:18789/ and paste the token in Control UI settings.
See Dashboard and Web surfaces for bind modes and auth details.

What runtime do I need

Node >= 22 is required. pnpm is recommended. Bun is not recommended for the Gateway.

Does it run on Raspberry Pi

Yes. The Gateway is lightweight - docs list 512MB-1GB RAM, 1 core, and about 500MB disk as enough for personal use, and note that a Raspberry Pi 4 can run it. If you want extra headroom (logs, media, other services), 2GB is recommended, but it’s not a hard minimum. Tip: a small Pi/VPS can host the Gateway, and you can pair nodes on your laptop/phone for local screen/camera/canvas or command execution. See Nodes.

Any tips for Raspberry Pi installs

Short version: it works, but expect rough edges.
  • Use a 64-bit OS and keep Node >= 22.
  • Prefer the hackable (git) install so you can see logs and update fast.
  • Start without channels/skills, then add them one by one.
  • If you hit weird binary issues, it is usually an ARM compatibility problem.
Docs: Linux, Install.

It is stuck on “wake up my friend” and onboarding will not hatch. What now

That screen depends on the Gateway being reachable and authenticated. The TUI also sends “Wake up, my friend!” automatically on first hatch. If you see that line with no reply and tokens stay at 0, the agent never ran.
  1. Restart the Gateway:
datzi gateway restart
  2. Check status + auth:
datzi status
datzi models status
datzi logs --follow
  3. If it still hangs, run:
datzi doctor
If the Gateway is remote, ensure the tunnel/Tailscale connection is up and that the UI is pointed at the right Gateway. See Remote access.

Can I migrate my setup to a new machine (Mac mini) without redoing onboarding

Yes. Copy the state directory and workspace, then run Doctor once. This keeps your bot “exactly the same” (memory, session history, auth, and channel state) as long as you copy both locations:
  1. Install Datzi on the new machine.
  2. Copy $DATZI_STATE_DIR (default: ~/.datzi) from the old machine.
  3. Copy your workspace (default: ~/.datzi/workspace).
  4. Run datzi doctor and restart the Gateway service.
That preserves config, auth profiles, WhatsApp creds, sessions, and memory. If you’re in remote mode, remember the gateway host owns the session store and workspace. Important: if you only commit/push your workspace to GitHub, you’re backing up memory + bootstrap files, but not session history or auth. Those live under ~/.datzi/ (for example ~/.datzi/agents/<agentId>/sessions/). Related: Migrating, Where things live on disk, Agent workspace, Doctor, Remote mode.

Where do I see what is new in the latest version

Check the GitHub changelog: https://github.com/datzi/datzi/blob/main/CHANGELOG.md Newest entries are at the top. If the top section is marked Unreleased, the next dated section is the latest shipped version. Entries are grouped by Highlights, Changes, and Fixes (plus docs/other sections when needed).

I can’t access docs.datzi.ai (SSL error). What now

Some Comcast/Xfinity connections incorrectly block docs.datzi.ai via Xfinity Advanced Security. Disable it or allowlist docs.datzi.ai, then retry. More detail: Troubleshooting. Please help us unblock it by reporting here: https://spa.xfinity.com/check_url_status. If you still can’t reach the site, the docs are mirrored on GitHub: https://github.com/datzi/datzi/tree/main/docs

What’s the difference between stable and beta

Stable and beta are npm dist-tags, not separate code lines:
  • latest = stable
  • beta = early build for testing
We ship builds to beta, test them, and once a build is solid we promote that same version to latest. That’s why beta and stable can point at the same version. See what changed: https://github.com/datzi/datzi/blob/main/CHANGELOG.md

How do I install the beta version and what’s the difference between beta and dev

Beta is the npm dist-tag beta (may match latest). Dev is the moving head of main (git); when published, it uses the npm dist-tag dev. One-liners (macOS/Linux):
curl -fsSL --proto '=https' --tlsv1.2 https://datzi.ai/install.sh | bash -s -- --beta
curl -fsSL --proto '=https' --tlsv1.2 https://datzi.ai/install.sh | bash -s -- --install-method git
Windows installer (PowerShell): https://datzi.ai/install.ps1 More detail: Development channels and Installer flags.

How long does install and onboarding usually take

Rough guide:
  • Install: 2-5 minutes
  • Onboarding: 5-15 minutes depending on how many channels/models you configure
If it hangs, see “Installer stuck” and the fast debug loop in “I’m stuck”.

How do I try the latest bits

Two options:
  1. Dev channel (git checkout):
datzi update --channel dev
This switches to the main branch and updates from source.
  2. Hackable install (from the installer site):
curl -fsSL https://datzi.ai/install.sh | bash -s -- --install-method git
That gives you a local repo you can edit, then update via git. If you prefer a clean clone manually, use:
git clone https://github.com/datzi/datzi.git
cd datzi
pnpm install
pnpm build
Docs: Update, Development channels, Install.

Installer stuck: how do I get more feedback

Re-run the installer with verbose output:
curl -fsSL https://datzi.ai/install.sh | bash -s -- --verbose
Beta install with verbose:
curl -fsSL https://datzi.ai/install.sh | bash -s -- --beta --verbose
For a hackable (git) install:
curl -fsSL https://datzi.ai/install.sh | bash -s -- --install-method git --verbose
Windows (PowerShell) equivalent:
# install.ps1 has no dedicated -Verbose flag yet.
Set-PSDebug -Trace 1
& ([scriptblock]::Create((iwr -useb https://datzi.ai/install.ps1))) -NoOnboard
Set-PSDebug -Trace 0
More options: Installer flags.

Windows install says git not found or datzi not recognized

Two common Windows issues: 1) npm error spawn git / git not found
  • Install Git for Windows and make sure git is on your PATH.
  • Close and reopen PowerShell, then re-run the installer.
2) datzi is not recognized after install
  • Your npm global bin folder is not on PATH.
  • Check the path:
    npm config get prefix
    
  • Ensure the npm global bin folder is on PATH (on most Windows systems it is %AppData%\npm).
  • Close and reopen PowerShell after updating PATH.
If you want the smoothest Windows setup, use WSL2 instead of native Windows. Docs: Windows.

The docs didn’t answer my question. How do I get a better answer

Use the hackable (git) install so you have the full source and docs locally, then ask your bot (or Claude/Codex) from that folder so it can read the repo and answer precisely.
curl -fsSL https://datzi.ai/install.sh | bash -s -- --install-method git
More detail: Install and Installer flags.

How do I install Datzi on Linux

Short answer: follow the Linux guide, then run the onboarding wizard.

How do I install Datzi on a VPS

Any Linux VPS works. Install on the server, then use SSH/Tailscale to reach the Gateway. Guides: exe.dev, Hetzner, Fly.io. Remote access: Gateway remote.

Where are the cloud/VPS install guides

We keep a hosting hub with the common providers. Pick one and follow the guide.
How it works in the cloud: the Gateway runs on the server, and you access it from your laptop/phone via the Control UI (or Tailscale/SSH). Your state + workspace live on the server, so treat the host as the source of truth and back it up. You can pair nodes (Mac/iOS/Android/headless) to that cloud Gateway to access local screen/camera/canvas or run commands on your laptop while keeping the Gateway in the cloud.
Hub: Platforms. Remote access: Gateway remote. Nodes: Nodes, Nodes CLI.

Can I ask Datzi to update itself

Short answer: possible, not recommended. The update flow can restart the Gateway (which drops the active session), may need a clean git checkout, and can prompt for confirmation. Safer: run updates from a shell as the operator. Use the CLI:
datzi update
datzi update status
datzi update --channel stable|beta|dev
datzi update --tag <dist-tag|version>
datzi update --no-restart
If you must automate from an agent:
datzi update --yes --no-restart
datzi gateway restart
Docs: Update, Updating.

What does the onboarding wizard actually do

datzi onboard is the recommended setup path. In local mode it walks you through:
  • Model/auth setup (Ollama local models recommended, API keys optional for other providers)
  • Workspace location + bootstrap files
  • Gateway settings (bind/port/auth/tailscale)
  • Providers (WhatsApp, Telegram, Discord, Mattermost (plugin), Signal, iMessage)
  • Daemon install (LaunchAgent on macOS; systemd user unit on Linux/WSL2)
  • Health checks and skills selection
It also warns if your configured model is unknown or missing auth.

Do I need a Claude or OpenAI subscription to run this

No. Datzi runs 100% free with Ollama local models - no API key or subscription required. Your data stays on your device. Docs: Ollama, Local models, Models.

Is AWS Bedrock supported

Yes, via manual config with AWS credentials. For a free local setup, use Ollama instead.

How does Codex auth work

Codex OAuth supports OpenAI Codex paid subscriptions. For free local inference, use Ollama instead.

Do you support OpenAI subscription auth (Codex OAuth)

Codex OAuth is supported for paid OpenAI subscriptions. For free local inference, use Ollama instead.

Is a local model OK for casual chats

Yes, with a sufficiently large local model. Datzi works well with Ollama models like qwen3-coder:32b or deepseek-r1:32b. Use the largest model your hardware supports. See Security. More context: Models.

Can I use self-hosted models (llama.cpp, vLLM, Ollama)

Yes. If your local server exposes an OpenAI-compatible API, you can point a custom provider at it. Ollama is supported directly and is the easiest path. Security note: smaller or heavily quantized models are more vulnerable to prompt injection. We strongly recommend large models for any bot that can use tools. If you still want small models, enable sandboxing and strict tool allowlists. Docs: Ollama, Local models, Model providers, Security, Sandboxing.

How do I switch models without wiping my config

Use model commands or edit only the model fields. Avoid full config replaces. Safe options:
  • /model in chat (quick, per-session)
  • datzi models set ... (updates just model config)
  • datzi configure --section model (interactive)
  • edit agents.defaults.model in ~/.datzi/datzi.json
Avoid config.apply with a partial object unless you intend to replace the whole config. If you did overwrite config, restore from backup or re-run datzi doctor to repair. Docs: Models, Configure, Config, Doctor.

What do Datzi, Flawd, and Krill use for models

  • Datzi + Flawd: Ollama (ollama/qwen3-coder:32b) - see Ollama.
  • Krill: MiniMax M2.1 (minimax/MiniMax-M2.1) - see MiniMax.

How do I switch models on the fly without restarting

Use the /model command as a standalone message:
/model sonnet
/model haiku
/model opus
/model gpt
/model ollama/qwen3-coder:32b
/model gemini
/model gemini-flash
You can list available models with /model, /model list, or /model status. /model (and /model list) shows a compact, numbered picker. Select by number:
/model 3
You can also force a specific auth profile for the provider (per session):
/model opus@anthropic:default
/model opus@anthropic:work
Tip: /model status shows which agent is active, which auth-profiles.json file is being used, and which auth profile will be tried next. It also shows the configured provider endpoint (baseUrl) and API mode (api) when available.

How do I unpin a profile I set with @profile

Re-run /model without the @profile suffix:
/model ollama/qwen3-coder:32b
If you want to return to the default, pick it from /model (or send /model <default provider/model>). Use /model status to confirm which auth profile is active.

Can I use GPT 5.2 for daily tasks and Codex 5.3 for coding

Yes. Set one as default and switch as needed:
  • Quick switch (per session): /model ollama/deepseek-r1:32b for daily tasks, /model ollama/qwen3-coder:32b for coding.
  • Default + switch: set agents.defaults.model.primary to ollama/deepseek-r1:32b, then switch to ollama/qwen3-coder:32b when coding (or the other way around).
  • Sub-agents: route coding tasks to sub-agents with a different default model.
See Models and Slash commands.

Why do I see Model is not allowed and then no reply

If agents.defaults.models is set, it becomes the allowlist for /model and any session overrides. Choosing a model that isn’t in that list returns:
Model "provider/model" is not allowed. Use /model to list available models.
That error is returned instead of a normal reply. Fix: add the model to agents.defaults.models, remove the allowlist, or pick a model from /model list.
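For example, to allow both models used in the section above explicitly (a sketch following the agents.defaults.models shape shown elsewhere on this page):
{
  agents: {
    defaults: {
      models: {
        'ollama/qwen3-coder:32b': {},
        'ollama/deepseek-r1:32b': {}
      }
    }
  }
}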

Why do I see Unknown model minimax/MiniMax-M2.1

This means the provider isn’t configured (no MiniMax provider config or auth profile was found), so the model can’t be resolved. A fix for this detection is in 2026.1.12 (unreleased at the time of writing). Fix checklist:
  1. Upgrade to 2026.1.12 (or run from source main), then restart the gateway.
  2. Make sure MiniMax is configured (wizard or JSON), or that a MiniMax API key exists in env/auth profiles so the provider can be injected.
  3. Use the exact model id (case-sensitive): minimax/MiniMax-M2.1 or minimax/MiniMax-M2.1-lightning.
  4. Run:
    datzi models list
    
    and pick from the list (or /model list in chat).
See MiniMax and Models.

Can I use MiniMax as my default and OpenAI for complex tasks

Yes. Use MiniMax as the default and switch models per session when needed. Fallbacks are for errors, not “hard tasks,” so use /model or a separate agent.
Option A: switch per session
{
  env: {
    MINIMAX_API_KEY: 'sk-...'
  },
  agents: {
    defaults: {
      model: {
        primary: 'minimax/MiniMax-M2.1'
      },
      models: {
        'minimax/MiniMax-M2.1': {
          alias: 'minimax'
        },
        'ollama/deepseek-r1:32b': {
          alias: 'gpt'
        }
      }
    }
  }
}
Then:
/model gpt
Option B: separate agents
  • Agent A default: MiniMax
  • Agent B default: OpenAI
  • Route by agent or use /agent to switch
Docs: Models, Multi-Agent Routing, MiniMax, OpenAI.

Are opus / sonnet / gpt built-in shortcuts

Yes. Datzi ships a few default shorthands (only applied when the model exists in agents.defaults.models):
  • ollama → ollama/qwen3-coder:32b
  • deepseek → ollama/deepseek-r1:32b
If you set your own alias with the same name, your value wins.

How do I define/override model shortcuts (aliases)

Aliases come from agents.defaults.models.<modelId>.alias. Example:
{
  agents: {
    defaults: {
      model: {
        primary: 'ollama/qwen3-coder:32b'
      },
      models: {
        'ollama/qwen3-coder:32b': {
          alias: 'opus'
        },
        'ollama/qwen3-coder:14b': {
          alias: 'sonnet'
        }
      }
    }
  }
}
Then /model sonnet (or /<alias> when supported) resolves to that model ID.

How do I add models from other providers like OpenRouter or Z.AI

Z.AI (GLM models):
{
  agents: {
    defaults: {
      model: {
        primary: 'zai/glm-4.7'
      },
      models: {
        'zai/glm-4.7': {}
      }
    }
  },
  env: {
    ZAI_API_KEY: '...'
  }
}
If you reference a provider/model but the required provider key is missing, you’ll get a runtime auth error (e.g. No API key found for provider "zai").

No API key found for provider after adding a new agent

This usually means the new agent has an empty auth store. Auth is per-agent and stored in:
~/.datzi/agents/<agentId>/agent/auth-profiles.json
Fix options:
  • Run datzi agents add <id> and configure auth during the wizard.
  • Or copy auth-profiles.json from the main agent’s agentDir into the new agent’s agentDir.
Do not reuse agentDir across agents; it causes auth/session collisions.
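The copy option can be sketched as follows (agent IDs main and work are illustrative; paths follow the layout above):
cp ~/.datzi/agents/main/agent/auth-profiles.json \
   ~/.datzi/agents/work/agent/auth-profiles.json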

Model failover and “All models failed”

How does failover work

Failover happens in two stages:
  1. Auth profile rotation within the same provider.
  2. Model fallback to the next model in agents.defaults.model.fallbacks.
Cooldowns apply to failing profiles (exponential backoff), so Datzi can keep responding even when a provider is rate-limited or temporarily failing.
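As a config sketch (fields follow agents.defaults.model as described above; model IDs reused from this page), stage 2 walks the fallbacks list in order:
{
  agents: {
    defaults: {
      model: {
        primary: 'minimax/MiniMax-M2.1',
        fallbacks: ['ollama/qwen3-coder:32b']
      }
    }
  }
}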

What does this error mean

No credentials found for profile "anthropic:default"
It means the system attempted to use the auth profile ID anthropic:default, but could not find credentials for it in the expected auth store.

Why did it also try Google Gemini and fail

If your model config includes Google Gemini as a fallback (or you switched to a Gemini shorthand), Datzi will try it during model fallback. If you haven’t configured Google credentials, you’ll see No API key found for provider "google". Fix: either provide Google auth, or remove/avoid Google models in agents.defaults.model.fallbacks / aliases so fallback doesn’t route there.

LLM request rejected: message thinking signature required (Google Antigravity)

Cause: the session history contains thinking blocks without signatures (often from an aborted/partial stream). Google Antigravity requires signatures for thinking blocks. Fix: Datzi now strips unsigned thinking blocks for Google Antigravity Claude. If it still appears, start a new session or set /thinking off for that agent.

Auth profiles: what they are and how to manage them

Related: /concepts/oauth (OAuth flows, token storage, multi-account patterns)

What is an auth profile

An auth profile is a named credential record (OAuth or API key) tied to a provider. Profiles live in:
~/.datzi/agents/<agentId>/agent/auth-profiles.json

What are typical profile IDs

Datzi uses provider-prefixed IDs like:
  • anthropic:default (common when no email identity exists)
  • anthropic:<email> for OAuth identities
  • custom IDs you choose (e.g. anthropic:work)

Can I control which auth profile is tried first

Yes. Config supports optional metadata for profiles and an ordering per provider (auth.order.<provider>). This does not store secrets; it maps IDs to provider/mode and sets rotation order. Datzi may temporarily skip a profile if it’s in a short cooldown (rate limits/timeouts/auth failures) or a longer disabled state (billing/insufficient credits). To inspect this, run datzi models status --json and check auth.unusableProfiles. Tuning: auth.cooldowns.billingBackoffHours. You can also set a per-agent order override (stored in that agent’s auth-profiles.json) via the CLI:
# Defaults to the configured default agent (omit --agent)
datzi models auth order get --provider anthropic

# Lock rotation to a single profile (only try this one)
datzi models auth order set --provider anthropic anthropic:default

# Or set an explicit order (fallback within provider)
datzi models auth order set --provider anthropic anthropic:work anthropic:default

# Clear override (fall back to config auth.order / round-robin)
datzi models auth order clear --provider anthropic
To target a specific agent:
datzi models auth order set --provider anthropic --agent main anthropic:default

OAuth vs API key: what’s the difference

Datzi supports both:
  • OAuth often leverages subscription access (where applicable).
  • API keys use pay-per-token billing.
The wizard explicitly supports Anthropic setup-token and OpenAI Codex OAuth and can store API keys for you.

Gateway: ports, “already running”, and remote mode

What port does the Gateway use

gateway.port controls the single multiplexed port for WebSocket + HTTP (Control UI, hooks, etc.). Precedence:
--port > DATZI_GATEWAY_PORT > gateway.port > default 18789
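That precedence reads like ordinary shell default-expansion. A self-contained sketch (resolve_port and its arguments are illustrative, not part of the CLI):

```shell
# Effective port: --port > DATZI_GATEWAY_PORT > gateway.port > 18789.
resolve_port() {
  cli_flag="$1"; config_value="$2"
  echo "${cli_flag:-${DATZI_GATEWAY_PORT:-${config_value:-18789}}}"
}
```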

Why does datzi gateway status say Runtime running but RPC probe failed

Because “running” is the supervisor’s view (launchd/systemd/schtasks). The RPC probe is the CLI actually connecting to the gateway WebSocket and calling status. Use datzi gateway status and trust these lines:
  • Probe target: (the URL the probe actually used)
  • Listening: (what’s actually bound on the port)
  • Last gateway error: (common root cause when the process is alive but the port isn’t listening)

Why does datzi gateway status show different Config (cli) and Config (service) values

You’re editing one config file while the service is running another (often a --profile / DATZI_STATE_DIR mismatch). Fix:
datzi gateway install --force
Run that from the same --profile / environment you want the service to use.

What does another gateway instance is already listening mean

Datzi enforces a runtime lock by binding the WebSocket listener immediately on startup (default ws://127.0.0.1:18789). If the bind fails with EADDRINUSE, it throws GatewayLockError indicating another instance is already listening. Fix: stop the other instance, free the port, or run with datzi gateway --port <port>.
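To identify the other listener before freeing the port (tool availability varies by platform; port 18789 assumed):
lsof -nP -iTCP:18789 -sTCP:LISTEN   # macOS/Linux with lsof
ss -ltnp | grep :18789              # Linux with iproute2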

How do I run Datzi in remote mode (client connects to a Gateway elsewhere)

Set gateway.mode: "remote" and point to a remote WebSocket URL, optionally with a token/password:
{
  gateway: {
    mode: 'remote',
    remote: {
      url: 'ws://gateway.tailnet:18789',
      token: 'your-token',
      password: 'your-password'
    }
  }
}
Notes:
  • datzi gateway only starts when gateway.mode is local (or you pass the override flag).
  • The macOS app watches the config file and switches modes live when these values change.

The Control UI says unauthorized or keeps reconnecting. What now

Your gateway is running with auth enabled (gateway.auth.*), but the UI is not sending the matching token/password. Facts (from code):
  • The Control UI stores the token in browser localStorage key datzi.control.settings.v1.
Fix:
  • Fastest: datzi dashboard (prints + copies the dashboard URL, tries to open; shows SSH hint if headless).
  • If you don’t have a token yet: datzi doctor --generate-gateway-token.
  • If remote, tunnel first: ssh -N -L 18789:127.0.0.1:18789 user@host then open http://127.0.0.1:18789/.
  • Set gateway.auth.token (or DATZI_GATEWAY_TOKEN) on the gateway host.
  • In the Control UI settings, paste the same token.
  • Still stuck? Run datzi status --all and follow Troubleshooting. See Dashboard for auth details.

I set gateway.bind: tailnet but it can’t bind and nothing listens

tailnet bind picks a Tailscale IP from your network interfaces (100.64.0.0/10). If the machine isn’t on Tailscale (or the interface is down), there’s nothing to bind to. Fix:
  • Start Tailscale on that host (so it has a 100.x address), or
  • Switch to gateway.bind: "loopback" / "lan".
Note: tailnet is explicit. auto prefers loopback; use gateway.bind: "tailnet" when you want a tailnet-only bind.
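To confirm the host actually has a tailnet (100.64.0.0/10) address before retrying, one quick check (the grep is a loose match on the CGNAT range, not exact):
tailscale ip -4
# or, without the tailscale CLI:
ip -4 addr | grep 'inet 100\.'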

Can I run multiple Gateways on the same host

Usually no - one Gateway can run multiple messaging channels and agents. Use multiple Gateways only when you need redundancy (ex: rescue bot) or hard isolation. If you do run more than one, you must isolate:
  • DATZI_CONFIG_PATH (per-instance config)
  • DATZI_STATE_DIR (per-instance state)
  • agents.defaults.workspace (workspace isolation)
  • gateway.port (unique ports)
Quick setup (recommended):
  • Use datzi --profile <name> … per instance (auto-creates ~/.datzi-<name>).
  • Set a unique gateway.port in each profile config (or pass --port for manual runs).
  • Install a per-profile service: datzi --profile <name> gateway install.
Profiles also suffix service names (bot.molt.<profile>; legacy com.datzi.*, datzi-gateway-<profile>.service, Datzi Gateway (<profile>)). Full guide: Multiple gateways.
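A minimal second instance might then look like this (profile name rescue is illustrative; set a unique gateway.port in that profile’s config before installing the service):
datzi --profile rescue onboard            # auto-creates ~/.datzi-rescue
datzi --profile rescue gateway install    # per-profile service
datzi --profile rescue gateway status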

What does invalid handshake code 1008 mean

The Gateway is a WebSocket server, and it expects the very first message to be a connect frame. If it receives anything else, it closes the connection with code 1008 (policy violation). Common causes:
  • You opened the HTTP URL in a browser (http://...) instead of a WS client.
  • You used the wrong port or path.
  • A proxy or tunnel stripped auth headers or sent a non-Gateway request.
Quick fixes:
  1. Use the WS URL: ws://<host>:18789 (or wss://... if HTTPS).
  2. Don’t open the WS port in a normal browser tab.
  3. If auth is on, include the token/password in the connect frame.
If you’re using the CLI or TUI, the URL should look like:
datzi tui --url ws://<host>:18789 --token <token>
Protocol details: Gateway protocol.

Logging and debugging

Where are logs

File logs (structured):
/tmp/datzi/datzi-YYYY-MM-DD.log
You can set a stable path via logging.file. File log level is controlled by logging.level. Console verbosity is controlled by --verbose and logging.consoleLevel. Fastest log tail:
datzi logs --follow
Service/supervisor logs (when the gateway runs via launchd/systemd):
  • macOS: $DATZI_STATE_DIR/logs/gateway.log and gateway.err.log (default: ~/.datzi/logs/...; profiles use ~/.datzi-<profile>/logs/...)
  • Linux: journalctl --user -u datzi-gateway[-<profile>].service -n 200 --no-pager
  • Windows: schtasks /Query /TN "Datzi Gateway (<profile>)" /V /FO LIST
See Troubleshooting for more.

How do I start/stop/restart the Gateway service

Use the gateway helpers:
datzi gateway status
datzi gateway restart
If you run the gateway manually, datzi gateway --force can reclaim the port. See Gateway.

I closed my terminal on Windows. How do I restart Datzi

There are two Windows install modes: 1) WSL2 (recommended): the Gateway runs inside Linux. Open PowerShell, enter WSL, then restart:
wsl
datzi gateway status
datzi gateway restart
If you never installed the service, start it in the foreground:
datzi gateway run
2) Native Windows (not recommended): the Gateway runs directly in Windows. Open PowerShell and run:
datzi gateway status
datzi gateway restart
If you run it manually (no service), use:
datzi gateway run
Docs: Windows (WSL2), Gateway service runbook.

The Gateway is up but replies never arrive What should I check

Start with a quick health sweep:
datzi status
datzi models status
datzi channels status
datzi logs --follow
Common causes:
  • Model auth not loaded on the gateway host (check models status).
  • Channel pairing/allowlist blocking replies (check channel config + logs).
  • WebChat/Dashboard is open without the right token.
If you are remote, confirm the tunnel/Tailscale connection is up and that the Gateway WebSocket is reachable. Docs: Channels, Troubleshooting, Remote access.

Disconnected from gateway with no reason given. What now

This usually means the UI lost the WebSocket connection. Check:
  1. Is the Gateway running? datzi gateway status
  2. Is the Gateway healthy? datzi status
  3. Does the UI have the right token? datzi dashboard
  4. If remote, is the tunnel/Tailscale link up?
Then tail logs:
datzi logs --follow
Docs: Dashboard, Remote access, Troubleshooting.

Telegram setMyCommands fails with network errors What should I check

Start with logs and channel status:
datzi channels status
datzi channels logs --channel telegram
If you are on a VPS or behind a proxy, confirm outbound HTTPS is allowed and DNS works. If the Gateway is remote, make sure you are looking at logs on the Gateway host. Docs: Telegram, Channel troubleshooting.

TUI shows no output What should I check

First confirm the Gateway is reachable and the agent can run:
datzi status
datzi models status
datzi logs --follow
In the TUI, use /status to see the current state. If you expect replies in a chat channel, make sure delivery is enabled (/deliver on). Docs: TUI, Slash commands.

How do I completely stop and then start the Gateway?

If you installed the service:
datzi gateway stop
datzi gateway start
This stops/starts the supervised service (launchd on macOS, systemd on Linux). Use this when the Gateway runs in the background as a daemon. If you’re running in the foreground, stop with Ctrl-C, then:
datzi gateway run
Docs: Gateway service runbook.

ELI5: datzi gateway restart vs datzi gateway

  • datzi gateway restart: restarts the background service (launchd/systemd).
  • datzi gateway: runs the gateway in the foreground for this terminal session.
If you installed the service, use the service commands (restart, stop, start). Use datzi gateway when you want a one-off, foreground run.

What’s the fastest way to get more details when something fails?

Start the Gateway with --verbose for more console detail, then inspect the log file for channel auth, model routing, and RPC errors.
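If the RPC log stream is down, you can mine the newest file log directly. A minimal sketch using the /tmp/datzi log path from the first-60-seconds checklist above (adjust the glob if you changed your logging config; the grep keywords are just common failure markers, not an official list):

```shell
#!/bin/sh
# Find the newest Datzi file log and surface likely failure lines.
LOG="$(ls -t /tmp/datzi/datzi-*.log 2>/dev/null | head -1)"
if [ -n "$LOG" ]; then
  # Auth, routing, and RPC failures usually mention one of these words.
  grep -Ei 'error|denied|timeout|auth' "$LOG" | tail -20
else
  echo "no log files found under /tmp/datzi" >&2
fi
```

This is read-only, so it is safe to run repeatedly while you reproduce the failure.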

Media and attachments

My skill generated an image/PDF but nothing was sent

Outbound attachments from the agent must include a MEDIA:<path-or-url> line (on its own line). See Datzi assistant setup and Agent send. CLI sending:
datzi message send --target +15555550123 --message "Here you go" --media /path/to/file.png
Also check:
  • The target channel supports outbound media and isn’t blocked by allowlists.
  • The file is within the provider’s size limits (images are resized to max 2048px).
See Images.
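Putting that together, an agent reply that attaches a file might look like this (the path is illustrative; the MEDIA line must stand alone on its own line):

```
Here is the chart you asked for.
MEDIA:/path/to/file.png
```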

Security and access control

Is it safe to expose Datzi to inbound DMs?

Treat inbound DMs as untrusted input. Defaults are designed to reduce risk:
  • Default behavior on DM-capable channels is pairing:
    • Unknown senders receive a pairing code; the bot does not process their message.
    • Approve with: datzi pairing approve <channel> <code>
    • Pending requests are capped at 3 per channel; check datzi pairing list <channel> if a code didn’t arrive.
  • Opening DMs publicly requires explicit opt-in (dmPolicy: "open" and allowlist "*").
Run datzi doctor to surface risky DM policies.
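A hedged sketch of how the dmPolicy and allowlist keys mentioned above might look for one channel (the exact nesting may differ in your setup; check the Configuration reference):

```
{
  channels: {
    telegram: {
      dmPolicy: 'pairing',      // default: unknown senders get a pairing code
      allowlist: ['123456789']  // example sender id
    }
  }
}
```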

Is prompt injection only a concern for public bots?

No. Prompt injection is about untrusted content, not just who can DM the bot. If your assistant reads external content (web search/fetch, browser pages, emails, docs, attachments, pasted logs), that content can include instructions that try to hijack the model. This can happen even if you are the only sender. The biggest risk is when tools are enabled: the model can be tricked into exfiltrating context or calling tools on your behalf. Reduce the blast radius by:
  • using a read-only or tool-disabled “reader” agent to summarize untrusted content
  • keeping web_search / web_fetch / browser off for tool-enabled agents
  • sandboxing and strict tool allowlists
Details: Security.

Should my bot have its own email, GitHub account, or phone number?

Yes, for most setups. Isolating the bot with separate accounts and phone numbers reduces the blast radius if something goes wrong. This also makes it easier to rotate credentials or revoke access without impacting your personal accounts. Start small. Give access only to the tools and accounts you actually need, and expand later if required. Docs: Security, Pairing.

Can I give it autonomy over my text messages, and is that safe?

We do not recommend full autonomy over your personal messages. The safest pattern is:
  • Keep DMs in pairing mode or a tight allowlist.
  • Use a separate number or account if you want it to message on your behalf.
  • Let it draft, then approve before sending.
If you want to experiment, do it on a dedicated account and keep it isolated. See Security.

Can I use cheaper models for personal assistant tasks?

Yes, if the agent is chat-only and the input is trusted. Smaller tiers are more susceptible to instruction hijacking, so avoid them for tool-enabled agents or when reading untrusted content. If you must use a smaller model, lock down tools and run inside a sandbox. See Security.

I ran /start in Telegram but didn’t get a pairing code

Pairing codes are sent only when an unknown sender messages the bot and dmPolicy: "pairing" is enabled. /start by itself doesn’t generate a code. Check pending requests:
datzi pairing list telegram
If you want immediate access, allowlist your sender id or set dmPolicy: "open" for that account.

WhatsApp: will it message my contacts? How does pairing work?

No. Default WhatsApp DM policy is pairing. Unknown senders only get a pairing code and their message is not processed. Datzi only replies to chats it receives or to explicit sends you trigger. Approve pairing with:
datzi pairing approve whatsapp <code>
List pending requests:
datzi pairing list whatsapp
Wizard phone number prompt: it’s used to set your allowlist/owner so your own DMs are permitted. It’s not used for auto-sending. If you run on your personal WhatsApp number, use that number and enable channels.whatsapp.selfChatMode.
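If you do run on your personal number, the self-chat flag mentioned above lives under the WhatsApp channel config. A minimal sketch (the allowlist value is an example number; keys other than selfChatMode follow the pattern above and may differ in your setup):

```
{
  channels: {
    whatsapp: {
      selfChatMode: true,
      allowlist: ['+15555550123']  // your own number
    }
  }
}
```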

Chat commands, aborting tasks, and “it won’t stop”

How do I stop internal system messages from showing in chat?

Most internal or tool messages only appear when verbose or reasoning is enabled for that session. Fix in the chat where you see it:
/verbose off
/reasoning off
If it is still noisy, check the session settings in the Control UI and set verbose to inherit. Also confirm you are not using a bot profile with verboseDefault set to on in config. Docs: Thinking and verbose, Security.

How do I stop/cancel a running task?

Send any of these as a standalone message (no slash):
stop
abort
esc
wait
exit
interrupt
These are abort triggers (not slash commands). For background processes (from the exec tool), you can ask the agent to run:
process action:kill sessionId:XXX
Slash commands overview: see Slash commands. Most commands must be sent as a standalone message that starts with /, but a few shortcuts (like /status) also work inline for allowlisted senders.

How do I send a Discord message from Telegram? Cross-context messaging denied

Datzi blocks cross-provider messaging by default. If a tool call is bound to Telegram, it won’t send to Discord unless you explicitly allow it. Enable cross-provider messaging for the agent:
{
  agents: {
    defaults: {
      tools: {
        message: {
          crossContext: {
            allowAcrossProviders: true,
            marker: {
              enabled: true,
              prefix: '[from {channel}] '
            }
          }
        }
      }
    }
  }
}
Restart the gateway after editing config. If you only want this for a single agent, set it under agents.list[].tools.message instead.
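For the single-agent case, the same crossContext block moves under that agent's entry in agents.list. A sketch (the agent id is hypothetical):

```
{
  agents: {
    list: [
      {
        id: 'discord-bridge',   // hypothetical agent id
        tools: {
          message: {
            crossContext: {
              allowAcrossProviders: true
            }
          }
        }
      }
    ]
  }
}
```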

Why does it feel like the bot ignores rapid-fire messages?

Queue mode controls how new messages interact with an in-flight run. Use /queue to change modes:
  • steer - new messages redirect the current task
  • followup - run messages one at a time
  • collect - batch messages and reply once (default)
  • steer-backlog - steer now, then process backlog
  • interrupt - abort current run and start fresh
You can add options like debounce:2s cap:25 drop:summarize for followup modes.
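For example, to run queued messages one at a time, batch bursts for two seconds, cap the backlog, and summarize anything dropped (option syntax as listed above; see Slash commands for the full grammar):

```
/queue followup debounce:2s cap:25 drop:summarize
```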

Answer the exact question from the screenshot/chat log

Q: “What’s the default model for Anthropic with an API key?”
A: In Datzi, the default model is whatever you configure in agents.defaults.model.primary (for example, ollama/qwen3-coder:32b). For Ollama, no API key is required.
Still stuck? Ask in Discord or open a GitHub discussion.

Help