
Onboarding Wizard

The prx onboard command creates your initial configuration file by walking you through provider selection, API key entry, model choice, and memory backend setup. It is the recommended way to configure PRX for the first time.

What Onboard Does

When you run prx onboard, the wizard performs the following steps:

  1. Selects an LLM provider -- Prompts you to choose from the 9 supported providers (Anthropic, OpenAI, Google Gemini, Ollama, OpenRouter, etc.)
  2. Stores your API key -- Securely writes your provider credential into the config file
  3. Fetches available models -- Queries the provider API to list models you have access to
  4. Sets a default model -- Lets you pick the model to use by default
  5. Configures memory backend -- Lets you choose Markdown (file-based), SQLite, or PostgreSQL
  6. Writes the config file -- Creates ~/.config/openprx/openprx.toml with your settings

After onboarding, PRX is ready to run with prx daemon or prx chat.

Interactive Mode

The default onboarding experience is the quick setup, which asks only essential questions. For a full interactive wizard that walks through every configuration section, use the --interactive flag:

bash
prx onboard --interactive

The interactive wizard includes additional configuration for:

  • Gateway host and port settings
  • Channel pre-configuration (Telegram, Discord, etc.)
  • Security and autonomy level
  • Workspace directory
  • Observability settings

Quick Setup (Default)

The default prx onboard runs a streamlined quick setup:

bash
prx onboard

This asks for your provider, API key, and model -- nothing more. All other settings use sensible defaults.

Quick Setup with Flags

Skip the interactive prompts entirely by passing flags:

bash
prx onboard \
  --provider anthropic \
  --api-key sk-ant-api03-xxxxxxxxxxxx \
  --model claude-sonnet-4-20250514

Available flags:

Flag             Description                             Example
--provider       LLM provider name                       anthropic, openai, ollama, openrouter
--api-key        Provider API key or credential          sk-ant-..., sk-...
--model          Default model identifier                claude-sonnet-4-20250514, gpt-4o
--memory         Memory backend                          markdown, sqlite, postgres
--interactive    Run the full interactive wizard         (no value)
--channels-only  Re-run only the channel repair wizard   (no value)

Examples

Anthropic Claude with defaults:

bash
prx onboard --provider anthropic --api-key "$ANTHROPIC_API_KEY"

Local Ollama (no API key needed):

bash
prx onboard --provider ollama --model llama3.2

OpenRouter with a specific model:

bash
prx onboard --provider openrouter --api-key "$OPENROUTER_API_KEY" --model anthropic/claude-sonnet-4-20250514

OpenAI with SQLite memory:

bash
prx onboard --provider openai --api-key "$OPENAI_API_KEY" --model gpt-4o --memory sqlite

Config File

The onboarding wizard writes the configuration to:

~/.config/openprx/openprx.toml

On Linux, this follows the XDG Base Directory specification. On macOS, it uses ~/Library/Application Support/openprx/openprx.toml unless XDG_CONFIG_HOME is set.
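The lookup order described above can be sketched in shell. This is an illustration of the documented behavior, not the actual PRX source:

```bash
# XDG_CONFIG_HOME wins everywhere; otherwise macOS falls back to
# ~/Library/Application Support and Linux to ~/.config.
if [ -n "$XDG_CONFIG_HOME" ]; then
  config_dir="$XDG_CONFIG_HOME"
elif [ "$(uname)" = "Darwin" ]; then
  config_dir="$HOME/Library/Application Support"
else
  config_dir="$HOME/.config"
fi
echo "$config_dir/openprx/openprx.toml"
```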

Example Generated Config

After running prx onboard --provider anthropic --model claude-sonnet-4-20250514, the generated config looks like this:

toml
# OpenPRX Configuration
# Generated by: prx onboard

# ── Provider ──────────────────────────────────────────────
default_provider = "anthropic"
default_model = "claude-sonnet-4-20250514"
default_temperature = 0.7
api_key = "sk-ant-api03-xxxxxxxxxxxx"

# ── Workspace ─────────────────────────────────────────────
workspace_dir = "~/.local/share/openprx"

# ── Memory ────────────────────────────────────────────────
[memory]
backend = "markdown"
# path defaults to workspace_dir/memory

# ── Gateway ───────────────────────────────────────────────
[gateway]
host = "127.0.0.1"
port = 3120

# ── Channels ──────────────────────────────────────────────
[channels]
cli = true

# ── Security ──────────────────────────────────────────────
[security]
autonomy = "supervised"

You can edit this file at any time. PRX supports hot-reload -- most changes take effect without restarting the daemon.
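For example, you might lower the sampling temperature in place with a one-line edit. The sed command below is a generic sketch; which keys actually hot-reload is determined by PRX, not by this example:

```bash
# Lower the default sampling temperature in the generated config.
# -i.bak keeps a backup copy and works with both GNU and BSD sed.
sed -i.bak 's/^default_temperature = .*/default_temperature = 0.5/' \
  ~/.config/openprx/openprx.toml
```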

Config Sections

The config file supports the following top-level sections:

Section              Purpose
default_provider     LLM provider to use by default
default_model        Model to use by default
api_key              Provider API credential
[memory]             Memory backend and storage settings
[gateway]            HTTP/WebSocket gateway configuration
[channels]           Messaging channel configurations
[channels.telegram]  Telegram bot settings
[channels.discord]   Discord bot settings
[security]           Autonomy level, sandbox, policies
[router]             LLM routing strategy
[self_system]        Self-evolution pipeline settings
[observability]      Metrics, tracing, logging
[cron]               Scheduled task configuration
[plugins]            WASM plugin paths and settings

See the Full Configuration Reference for every available option.
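As an illustration of how a channel section might look, here is a hypothetical [channels.telegram] block. The key names below are assumptions for the sake of the example, not confirmed PRX options; consult the Full Configuration Reference for the real ones:

```toml
# Hypothetical example only -- key names are illustrative assumptions.
[channels]
cli = true

[channels.telegram]
enabled = true
# Token issued for your bot (e.g. by @BotFather on Telegram)
bot_token = "123456:your-telegram-bot-token"
```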

Post-Onboard Verification

After onboarding, run the diagnostic command to verify everything is configured correctly:

bash
prx doctor

The doctor checks:

  • Config file -- Validates TOML syntax and required fields
  • Provider connectivity -- Tests the API key by querying the provider
  • Model availability -- Confirms the selected model is accessible
  • Memory backend -- Verifies the storage backend is writable
  • System dependencies -- Checks for optional tools (git, docker, etc.)
  • Network -- Tests connectivity to configured services

Example output:

PRX Doctor

  Config file .............. OK  (~/.config/openprx/openprx.toml)
  Provider (anthropic) ..... OK  (authenticated)
  Model .................... OK  (claude-sonnet-4-20250514)
  Memory (markdown) ........ OK  (writable)
  Gateway port (3120) ...... OK  (available)
  Git ...................... OK  (2.43.0)
  Docker ................... WARN (not installed -- sandbox features limited)

All critical checks passed.

Doctor Subcommands

The doctor also has subcommands for targeted diagnostics:

bash
# Probe model catalogs across all providers
prx doctor models

# Probe models for a specific provider
prx doctor models --provider anthropic

Channel Repair Wizard

If you have already completed onboarding and want to add or fix channel configurations, use the --channels-only flag:

bash
prx onboard --channels-only

This skips provider and model setup and goes directly to channel configuration.

Re-running Onboard

You can run prx onboard again at any time. The wizard detects your existing configuration and offers to update it rather than overwriting from scratch. Your existing channel configurations, memory data, and custom settings are preserved.
