AI agents

The four specialist agents that run inside Humagician — what each does, and where to configure them.

Humagician runs four specialist AI agents. They share one underlying model (Claude Sonnet) but each has its own job, prompt, and tools. You'll see them as separate sections in the left rail under Agents.

The four roles

  • Responder handles live shopper conversations.
  • Librarian maintains your knowledge base.
  • Analyst works over historical conversation data.
  • Automator carries out actions through your integrations.

Plus one operator-facing agent:

Huey

Lives inside app.humagician.com. Helps your team — not your shoppers — configure the platform, look up Shopify data, and navigate the app.

Why four instead of one

Each agent has a different system prompt, tool set, and policy boundary:

  • Responder is fast and conservative — short context, escalates on uncertainty.
  • Librarian has more context, runs on a schedule (not per-conversation), and can write back to your KB.
  • Analyst runs over historical data — never touches a live conversation.
  • Automator has tool access to your integrations (Shopify, Slack, webhooks) — guarded by per-tool approval rules.

Splitting them lets you tune each one separately. You can disable agents you don't need (e.g. start with just Responder, add Automator later).

Configuring agents

Each agent has its own settings page. Common controls:

  • Enabled / disabled — toggle the whole agent
  • Mode — for Responder: Auto-respond (sends directly) or Hybrid (drafts a suggestion you approve)
  • Instructions — extra rules layered on top of the system prompt
  • Tools — which integrations the agent can call (Shopify, Gorgias, etc.)
  • Knowledge access — which documents the agent reads

Find these on each agent's section in the left rail.
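As a sketch, the controls above could be modeled as a per-agent settings object like this (field names are illustrative, not the platform's actual schema):

```typescript
// Hypothetical shape for one agent's settings; names are illustrative.
type ResponderMode = "auto-respond" | "hybrid";

interface AgentSettings {
  enabled: boolean;          // toggle the whole agent
  mode?: ResponderMode;      // Responder only
  instructions: string;      // extra rules layered on the system prompt
  tools: string[];           // integrations the agent can call
  knowledgeAccess: string[]; // which documents the agent reads
}

// Example: a Responder that drafts suggestions for approval (Hybrid mode).
const responder: AgentSettings = {
  enabled: true,
  mode: "hybrid",
  instructions: "Escalate anything involving an order edit.",
  tools: ["shopify", "gorgias"],
  knowledgeAccess: ["faq", "shipping-policy"],
};
```

In Hybrid mode the agent only drafts; nothing is sent until a teammate approves, which is why it is a per-agent setting rather than a global one.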

What runs the model

Under the hood, every agent uses the same Anthropic Claude Sonnet model. The provider (Azure AI Foundry vs. AWS Bedrock) is set per Humagician deployment in Super Admin → AI Provider by your account admin. Failover between providers is automatic on rate limits; operators won't see it.

If you're a super admin and want details, see the Sonnet provider config section.

Token budgets

Each agent has a per-call token budget (the max_tokens for replies). Defaults are sensible; admins can tune them in Super Admin → AI Provider → Token Targets. Tighter budgets mean lower cost and faster replies, but less room for nuance.
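Resolving the budget for one call might look like the sketch below. The default numbers and names are assumptions for illustration; the real defaults live in Super Admin → AI Provider → Token Targets.

```typescript
// Hypothetical per-agent defaults; actual values are set in the admin UI.
const DEFAULT_MAX_TOKENS: Record<string, number> = {
  responder: 1024,
  librarian: 4096,
  analyst: 4096,
  automator: 2048,
};

// Resolve max_tokens for one call: an admin override wins,
// otherwise fall back to the agent's default.
function maxTokensFor(agent: string, override?: number): number {
  if (override !== undefined && override > 0) return override;
  return DEFAULT_MAX_TOKENS[agent] ?? 1024;
}
```

The override path is what a Token Targets setting would exercise; the fallback keeps an unconfigured agent on a conservative budget.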

Tools the agents share

All agents talk to your integrations through BRAM, our MCP gateway. BRAM enforces per-tool policies (read-only? approval required? blocked?), audits every call, and routes per-tenant — so your Shopify token never leaves your tenant boundary even when an agent calls Shopify.
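A minimal sketch of the per-tool policy decision described above (the policy names and call shape are assumptions, not BRAM's actual API):

```typescript
// Illustrative policy values; BRAM's real policy model may differ.
type Policy = "blocked" | "read-only" | "approval-required" | "allowed";
type Decision = "allow" | "needs-approval" | "deny";

interface ToolCall {
  tool: string;     // e.g. "shopify.refund" (hypothetical name)
  mutates: boolean; // does this call write data?
}

// Decide what happens to one tool call under one per-tool policy.
function evaluate(call: ToolCall, policy: Policy): Decision {
  switch (policy) {
    case "blocked":
      return "deny";
    case "read-only":
      return call.mutates ? "deny" : "allow";
    case "approval-required":
      return call.mutates ? "needs-approval" : "allow";
    case "allowed":
      return "allow";
  }
}
```

The point of the gateway is that this check (plus auditing and per-tenant routing) happens in one place, outside every agent.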

The current tool surface:

  • Shopify — read orders, customers, products; with approval: refunds, edits
  • Helpdesks — Gorgias, Zendesk, Freshdesk, Intercom (when connected)
  • Custom webhooks — your own endpoints, defined in Operations → Configuration

Adding a new tool means adding it to your BRAM agent's allowed-servers list. See Operations → Integrations.
