# Quick Start
Get Protege running and send your first email to an AI agent in about 5 minutes.
## Prerequisites
- Node.js 18+ (Protege uses ESM modules)
- npm
- An API key from at least one LLM provider: OpenAI, Anthropic, Google Gemini, or Grok
## Step 1: Install Protege

```bash
npm install -g protege-toolkit@alpha
```

Verify the installation:

```bash
protege --version
```

## Step 2: Create a Project
A Protege workspace is a directory that holds your configs, extensions, personas, and memory. Create one and run the guided setup:
```bash
mkdir protege-hq && cd protege-hq
protege setup
```

The setup wizard walks you through:

- LLM provider — which provider to use for inference (OpenAI, Anthropic, Gemini, or Grok)
- API key — your provider's API key, stored in a local `.secrets` file
- Outbound mode — `relay` (recommended) or `local` SMTP
- Relay URL — if using relay mode, the WebSocket endpoint (e.g., `wss://relay.protege.bot/ws`)
- Web search — optionally enable web search via Tavily or Perplexity
- Health check — optionally run `protege doctor` to validate everything
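The wizard writes your answers into the files under `configs/`. As a rough illustration of the idea (the exact schema is not shown in this guide, and the field names below are invented placeholders), `configs/inference.json` ends up recording something like your provider and model choice:

```json
{
  "provider": "anthropic",
  "model": "<model-id>"
}
```

The API key itself goes into `.secrets`, not into the config files.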
After setup, your project directory looks like this:
```
protege-hq/
├── configs/
│   ├── context.json      # Context assembly pipeline
│   ├── gateway.json      # Gateway and transport settings
│   ├── inference.json    # LLM provider and model config
│   ├── security.json     # Sender access policy
│   ├── system.json       # Logging, chat, scheduler settings
│   └── theme.json        # Terminal UI theming
├── extensions/
│   └── extensions.json   # Which tools, providers, hooks to load
├── memory/               # Per-persona runtime data (auto-created)
├── personas/             # Agent identity files (auto-created)
├── prompts/
│   └── system.md         # Base system prompt for your agent
└── .secrets              # API keys (git-ignored)
```

Prefer non-interactive?
You can skip the wizard entirely:
```bash
protege setup \
  --non-interactive \
  --provider anthropic \
  --inference-api-key sk-ant-... \
  --outbound relay \
  --relay-ws-url wss://relay.protege.bot/ws \
  --doctor
```

## Step 3: Start the Gateway
The gateway is the process that receives emails, runs inference, and sends replies:
```bash
protege gateway start
```

For local development without real email delivery:

```bash
protege gateway start --dev
```

## Step 4: Verify Everything Works
```bash
protege status
```

You should see output confirming the gateway is running, your persona exists, and your config is valid. For a more thorough check:

```bash
protege doctor
```

`doctor` validates your full configuration — config files, personas, provider keys, extension manifest — and reports any issues.
## Step 5: Talk to Your Agent
You have two ways to interact with your agent:
### Option A: Send an email
If you're using relay mode, your agent already has an email address (shown during persona creation). Send it an email from your regular email client. You'll get a reply back.
### Option B: Use the terminal chat
The chat TUI is a development tool for testing your agent locally. In production, the whole point of Protege is that you interact over email — from your phone, your laptop, or wherever you are — not glued to a terminal.
For quick local testing:
```bash
protege chat
```

Press Ctrl+N to start a new conversation, type a message, and press Ctrl+S to send. Your agent's reply appears in the same thread.
## Step 6: Check the Logs
Watch what your agent is doing in real time:
```bash
protege logs --scope gateway --follow
```

You'll see events for inbound messages, inference runs, tool calls, and outbound delivery.
## What Just Happened?
When you sent a message, here's what Protege did behind the scenes:
- Gateway received and parsed the message and identified which persona it was addressed to
- Context pipeline assembled the system prompt, persona instructions, conversation history, and your message
- Harness sent the assembled context to your configured LLM provider
- The LLM generated a response (possibly calling tools like `web_search` or `send_email` along the way)
- Gateway delivered the response back to you as an email (or displayed it in chat)
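The flow above can be sketched as a handful of stub functions. This is a toy illustration only: every name and signature here is invented for clarity, and none of it is Protege's actual internals.

```typescript
// Toy sketch of the inbound-message pipeline (invented names, not Protege's real API).
type Message = { from: string; to: string; body: string };

// 1. Gateway resolves which persona the message is addressed to.
function resolvePersona(msg: Message): string {
  // The part before "@" names the persona, e.g. "assistant@..." -> "assistant".
  return msg.to.split("@")[0];
}

// 2. Context pipeline assembles everything the model will see.
function assembleContext(persona: string, history: string[], msg: Message): string {
  return [`[system prompt for ${persona}]`, ...history, `user: ${msg.body}`].join("\n");
}

// 3. Harness runs inference (stubbed: echo the last line of the context).
function runInference(context: string): string {
  return `echo: ${context.split("\n").pop()}`;
}

// 4./5. Gateway ties it together and delivers the reply over the original channel.
function handleInbound(msg: Message, history: string[]): string {
  const persona = resolvePersona(msg);
  const context = assembleContext(persona, history, msg);
  return runInference(context);
}

const reply = handleInbound(
  { from: "you@example.com", to: "assistant@relay.protege.bot", body: "hello" },
  [],
);
console.log(reply); // prints "echo: user: hello"
```

In the real system, the context-assembly step is what `configs/context.json` configures, and the inference step is what `configs/inference.json` points at.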
## Next Steps
- Relay vs Local SMTP — understand the two connectivity modes
- Customize your agent — add tools, change providers, write hooks
- CLI reference — full command documentation

