
# Configuration Reference

This page documents every configuration section and field in PRX's `config.toml`. Any field with a documented default can be omitted; PRX will fall back to the default.

## Top-level (Default Settings)

These fields appear at the root level of `config.toml`, outside any section header.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `default_provider` | string | `"openrouter"` | Provider ID or alias (e.g., `"anthropic"`, `"openai"`, `"ollama"`) |
| `default_model` | string | `"anthropic/claude-sonnet-4-6"` | Model identifier routed through the selected provider |
| `default_temperature` | float | `0.7` | Sampling temperature (0.0-2.0); lower is more deterministic |
| `api_key` | string? | null | API key for the selected provider. Overridden by provider-specific env vars |
| `api_url` | string? | null | Base URL override for the provider API (e.g., a remote Ollama endpoint) |
```toml
default_provider = "anthropic"
default_model = "anthropic/claude-sonnet-4-6"
default_temperature = 0.7
api_key = "sk-ant-..."
```

## [gateway]

HTTP gateway server for webhook endpoints, pairing, and the web API.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `host` | string | `"127.0.0.1"` | Bind address. Use `"0.0.0.0"` for public access |
| `port` | u16 | `16830` | Listen port |
| `require_pairing` | bool | `true` | Require device pairing before accepting API requests |
| `allow_public_bind` | bool | `false` | Allow binding to a non-localhost address without a tunnel |
| `pair_rate_limit_per_minute` | u32 | `5` | Max pairing requests per minute per client |
| `webhook_rate_limit_per_minute` | u32 | `60` | Max webhook requests per minute per client |
| `api_rate_limit_per_minute` | u32 | `120` | Max API requests per minute per authenticated token |
| `trust_forwarded_headers` | bool | `false` | Trust `X-Forwarded-For` / `X-Real-IP` headers (enable only behind a reverse proxy) |
| `request_timeout_secs` | u64 | `300` | HTTP handler timeout in seconds |
| `idempotency_ttl_secs` | u64 | `300` | TTL for webhook idempotency keys, in seconds |
```toml
[gateway]
host = "127.0.0.1"
port = 16830
require_pairing = true
api_rate_limit_per_minute = 120
```

> **WARNING**
>
> Changing `host` or `port` requires a full restart. These values are bound at server startup and cannot be hot-reloaded.

## [channels_config]

Top-level channel configuration. Individual channels are nested subsections.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `cli` | bool | `true` | Enable the interactive CLI channel |
| `message_timeout_secs` | u64 | `300` | Per-message processing timeout (LLM + tools), in seconds |
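A minimal fragment using the fields above (the values shown are the documented defaults):

```toml
[channels_config]
cli = true
message_timeout_secs = 300
```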

### [channels_config.telegram]

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `bot_token` | string | (required) | Telegram Bot API token from @BotFather |
| `allowed_users` | string[] | `[]` | Allowed Telegram user IDs or usernames. Empty = deny all |
| `mention_only` | bool | `false` | In groups, respond only to messages that @-mention the bot |
| `stream_mode` | `"off"` \| `"partial"` | `"off"` | Streaming mode: `off` sends the complete response; `partial` progressively edits a draft |
| `draft_update_interval_ms` | u64 | `1000` | Minimum interval between draft edits (rate-limit protection) |
| `interrupt_on_new_message` | bool | `false` | Cancel an in-flight response when the same user sends a new message |
```toml
[channels_config.telegram]
bot_token = "123456:ABC-DEF..."
allowed_users = ["alice", "bob"]
mention_only = true
stream_mode = "partial"
```

### [channels_config.discord]

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `bot_token` | string | (required) | Discord bot token from the Developer Portal |
| `guild_id` | string? | null | Restrict to a single guild (server) |
| `allowed_users` | string[] | `[]` | Allowed Discord user IDs. Empty = deny all |
| `listen_to_bots` | bool | `false` | Process messages from other bots (own messages are always ignored) |
| `mention_only` | bool | `false` | Only respond to @-mentions |
```toml
[channels_config.discord]
bot_token = "MTIz..."
guild_id = "987654321"
allowed_users = ["111222333"]
mention_only = true
```

### [channels_config.slack]

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `bot_token` | string | (required) | Slack bot OAuth token (`xoxb-...`) |
| `app_token` | string? | null | App-level token for Socket Mode (`xapp-...`) |
| `channel_id` | string? | null | Restrict to a single channel |
| `allowed_users` | string[] | `[]` | Allowed Slack user IDs. Empty = deny all |
| `mention_only` | bool | `false` | Only respond to @-mentions in groups |
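For symmetry with the other channels, a sketch using the fields above; the tokens and user ID are placeholders:

```toml
[channels_config.slack]
bot_token = "xoxb-..."
app_token = "xapp-..."
allowed_users = ["U0123456789"]
mention_only = true
```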

### [channels_config.lark]

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `app_id` | string | (required) | Lark/Feishu App ID |
| `app_secret` | string | (required) | Lark/Feishu App Secret |
| `encrypt_key` | string? | null | Event encryption key |
| `verification_token` | string? | null | Event verification token |
| `allowed_users` | string[] | `[]` | Allowed user IDs. Empty = deny all |
| `use_feishu` | bool | `false` | Use Feishu (China) API endpoints instead of Lark (international) |
| `receive_mode` | `"websocket"` \| `"webhook"` | `"websocket"` | Message receive mode |
| `port` | u16? | null | Webhook listen port (webhook mode only) |
| `mention_only` | bool | `false` | Only respond to @-mentions |
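A sketch built from the fields above; the credentials are placeholders:

```toml
[channels_config.lark]
app_id = "cli_..."
app_secret = "..."
receive_mode = "websocket"
mention_only = true
```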

PRX also supports these additional channels (each configured under `[channels_config.*]`):

- **Matrix** -- homeserver, access token, room allowlists
- **Signal** -- via the signal-cli REST API
- **WhatsApp** -- Cloud API or Web mode
- **iMessage** -- macOS only, contact allowlists
- **DingTalk** -- Stream Mode with `client_id` / `client_secret`
- **QQ** -- official Bot SDK with `app_id` / `app_secret`
- **Email** -- IMAP/SMTP
- **IRC** -- server, channel, nick
- **Mattermost** -- URL + bot token
- **Nextcloud Talk** -- base URL + app token
- **Webhook** -- generic inbound webhooks

## [memory]

Memory backend for conversation history, knowledge, and embeddings.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `backend` | string | `"sqlite"` | Backend type: `"sqlite"`, `"lucid"`, `"postgres"`, `"markdown"`, `"none"` |
| `auto_save` | bool | `true` | Automatically save user conversation input to memory |
| `acl_enabled` | bool | `false` | Enable memory access-control lists |
| `hygiene_enabled` | bool | `true` | Run periodic archiving and retention cleanup |
| `archive_after_days` | u32 | `7` | Archive daily/session files older than this |
| `purge_after_days` | u32 | `30` | Purge archived files older than this |
| `conversation_retention_days` | u32 | `3` | SQLite: prune conversation rows older than this |
| `daily_retention_days` | u32 | `7` | SQLite: prune daily rows older than this |
| `embedding_provider` | string | `"none"` | Embedding provider: `"none"`, `"openai"`, `"custom:<URL>"` |
| `embedding_model` | string | `"text-embedding-3-small"` | Embedding model name |
| `embedding_dimensions` | usize | `1536` | Embedding vector dimensions |
| `vector_weight` | f64 | `0.7` | Weight for vector similarity in hybrid search (0.0-1.0) |
| `keyword_weight` | f64 | `0.3` | Weight for BM25 keyword search (0.0-1.0) |
| `min_relevance_score` | f64 | `0.4` | Minimum hybrid score for a memory to be included in context |
| `embedding_cache_size` | usize | `10000` | Max embedding cache entries before LRU eviction |
| `snapshot_enabled` | bool | `false` | Export core memories to `MEMORY_SNAPSHOT.md` |
| `snapshot_on_hygiene` | bool | `false` | Run a snapshot during hygiene passes |
| `auto_hydrate` | bool | `true` | Auto-load from the snapshot when `brain.db` is missing |
```toml
[memory]
backend = "sqlite"
auto_save = true
embedding_provider = "openai"
embedding_model = "text-embedding-3-small"
embedding_dimensions = 1536
vector_weight = 0.7
keyword_weight = 0.3
```
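How the weights interact: a plausible reading of `vector_weight`, `keyword_weight`, and `min_relevance_score` is a weighted blend with a relevance floor. This sketch is an illustration of that reading, not PRX's verified implementation:

```python
def hybrid_score(vector_sim: float, bm25_score: float,
                 vector_weight: float = 0.7,
                 keyword_weight: float = 0.3) -> float:
    """Blend vector similarity and BM25 keyword relevance.

    Both inputs are assumed normalized to 0.0-1.0; defaults mirror the
    documented config defaults.
    """
    return vector_weight * vector_sim + keyword_weight * bm25_score


def include_in_context(score: float, min_relevance_score: float = 0.4) -> bool:
    """A memory enters the context only if its hybrid score clears the floor."""
    return score >= min_relevance_score


# Strong semantic match, weak keyword match: 0.7*0.8 + 0.3*0.2 = 0.62
score = hybrid_score(0.8, 0.2)
```

With the defaults, a memory needs either a decent vector match or a strong keyword match to clear the 0.4 floor.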

## [router]

Heuristic LLM router for multi-model deployments. Scores candidate models using a weighted formula combining capability, Elo rating, cost, and latency.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable heuristic routing |
| `alpha` | f32 | `0.0` | Similarity score weight |
| `beta` | f32 | `0.5` | Capability score weight |
| `gamma` | f32 | `0.3` | Elo score weight |
| `delta` | f32 | `0.1` | Cost penalty coefficient |
| `epsilon` | f32 | `0.1` | Latency penalty coefficient |
| `knn_enabled` | bool | `false` | Enable KNN semantic routing from history |
| `knn_min_records` | usize | `10` | Minimum history records before KNN affects routing |
| `knn_k` | usize | `7` | Number of nearest neighbors for voting |
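The table describes a weighted formula combining capability, Elo, cost, and latency. One plausible reading of how the five coefficients combine (an illustration under stated assumptions, not PRX's exact implementation) is:

```python
def route_score(similarity: float, capability: float, elo: float,
                cost: float, latency: float,
                alpha: float = 0.0, beta: float = 0.5, gamma: float = 0.3,
                delta: float = 0.1, epsilon: float = 0.1) -> float:
    """Illustrative weighted score: bonuses for fit, penalties for cost/latency.

    All inputs are assumed normalized to 0.0-1.0; coefficient defaults mirror
    the documented config defaults (alpha = 0.0, so similarity contributes
    nothing unless configured).
    """
    return (alpha * similarity + beta * capability + gamma * elo
            - delta * cost - epsilon * latency)


# A capable, well-rated model outscores a weaker one at equal cost/latency.
strong = route_score(0.0, capability=0.9, elo=0.8, cost=0.2, latency=0.1)
weak = route_score(0.0, capability=0.4, elo=0.5, cost=0.2, latency=0.1)
```

Raising `delta` or `epsilon` shifts the ranking toward cheaper, faster models; raising `beta` or `gamma` favors stronger ones.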

### [router.automix]

Adaptive escalation policy: start with a cheap model, escalate to premium when confidence drops.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable Automix escalation |
| `confidence_threshold` | f32 | `0.7` | Escalate when confidence falls below this (0.0-1.0) |
| `cheap_model_tiers` | string[] | `[]` | Model tiers considered "cheap-first" |
| `premium_model_id` | string | `""` | Model used for escalation |
```toml
[router]
enabled = true
beta = 0.5
gamma = 0.3
knn_enabled = true

[router.automix]
enabled = true
confidence_threshold = 0.7
premium_model_id = "anthropic/claude-sonnet-4-6"
```

## [security]

OS-level security: sandboxing, resource limits, and audit logging.

### [security.sandbox]

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool? | null (auto-detect) | Enable sandbox isolation |
| `backend` | string | `"auto"` | Backend: `"auto"`, `"landlock"`, `"firejail"`, `"bubblewrap"`, `"docker"`, `"none"` |
| `firejail_args` | string[] | `[]` | Custom Firejail arguments |

### [security.resources]

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `max_memory_mb` | u32 | `512` | Maximum memory per command (MB) |
| `max_cpu_time_seconds` | u64 | `60` | Maximum CPU time per command (seconds) |
| `max_subprocesses` | u32 | `10` | Maximum number of subprocesses |
| `memory_monitoring` | bool | `true` | Enable memory usage monitoring |

### [security.audit]

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `true` | Enable audit logging |
| `log_path` | string | `"audit.log"` | Path to the audit log file (relative to the config dir) |
| `max_size_mb` | u32 | `100` | Maximum log size (MB) before rotation |
| `sign_events` | bool | `false` | Sign events with an HMAC for tamper evidence |
```toml
[security.sandbox]
backend = "landlock"

[security.resources]
max_memory_mb = 1024
max_cpu_time_seconds = 120

[security.audit]
enabled = true
sign_events = true
```

## [observability]

Metrics and distributed tracing backend.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `backend` | string | `"none"` | Backend: `"none"`, `"log"`, `"prometheus"`, `"otel"` |
| `otel_endpoint` | string? | null | OTLP endpoint URL (e.g., `"http://localhost:4318"`) |
| `otel_service_name` | string? | null | Service name reported to the OTel collector (defaults to `"prx"`) |
```toml
[observability]
backend = "otel"
otel_endpoint = "http://localhost:4318"
otel_service_name = "prx-production"
```

## [mcp]

Model Context Protocol server integration. PRX acts as an MCP client, connecting to external MCP servers for additional tools.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable MCP client integration |

### [mcp.servers.&lt;name&gt;]

Each named server is a subsection under [mcp.servers].

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `true` | Per-server enable switch |
| `transport` | `"stdio"` \| `"http"` | `"stdio"` | Transport type |
| `command` | string? | null | Command to spawn in stdio mode |
| `args` | string[] | `[]` | Command arguments for stdio mode |
| `url` | string? | null | URL for the HTTP transport |
| `env` | map&lt;string, string&gt; | `{}` | Environment variables for stdio mode |
| `startup_timeout_ms` | u64 | `10000` | Startup timeout (ms) |
| `request_timeout_ms` | u64 | `30000` | Per-request timeout (ms) |
| `tool_name_prefix` | string | `"mcp"` | Prefix for exposed tool names |
| `allow_tools` | string[] | `[]` | Tool allowlist (empty = allow all) |
| `deny_tools` | string[] | `[]` | Tool denylist |
```toml
[mcp]
enabled = true

[mcp.servers.filesystem]
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/docs"]

[mcp.servers.remote-api]
transport = "http"
url = "http://localhost:8090/mcp"
request_timeout_ms = 60000
```

## [browser]

Browser automation tool configuration.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable the `browser_open` tool |
| `allowed_domains` | string[] | `[]` | Allowed domains (exact or subdomain match) |
| `session_name` | string? | null | Named browser session for automation |
```toml
[browser]
enabled = true
allowed_domains = ["docs.rs", "github.com", "*.example.com"]
```

## [web_search]

Web search and URL fetch tool configuration.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable the `web_search` tool |
| `provider` | string | `"duckduckgo"` | Search provider: `"duckduckgo"` (free) or `"brave"` (API key required) |
| `brave_api_key` | string? | null | Brave Search API key |
| `max_results` | usize | `5` | Maximum results per search (1-10) |
| `timeout_secs` | u64 | `15` | Request timeout (seconds) |
| `fetch_enabled` | bool | `true` | Enable the `web_fetch` tool |
| `fetch_max_chars` | usize | `10000` | Max characters returned by `web_fetch` |
```toml
[web_search]
enabled = true
provider = "brave"
brave_api_key = "BSA..."
max_results = 5
fetch_enabled = true
```

## [xin]

Xin (heart/mind) autonomous task engine -- schedules and executes background tasks including evolution, fitness checks, and hygiene operations.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable the Xin task engine |
| `interval_minutes` | u32 | `5` | Tick interval in minutes (minimum 1) |
| `max_concurrent` | usize | `4` | Maximum concurrent task executions per tick |
| `max_tasks` | usize | `128` | Maximum total tasks in the store |
| `stale_timeout_minutes` | u32 | `60` | Minutes before a running task is marked stale |
| `builtin_tasks` | bool | `true` | Auto-register built-in system tasks |
| `evolution_integration` | bool | `false` | Let Xin manage evolution/fitness scheduling |
```toml
[xin]
enabled = true
interval_minutes = 10
max_concurrent = 4
builtin_tasks = true
evolution_integration = true
```

## [cost]

Spending limits and per-model pricing for cost tracking.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable cost tracking |
| `daily_limit_usd` | f64 | `10.0` | Daily spending limit (USD) |
| `monthly_limit_usd` | f64 | `100.0` | Monthly spending limit (USD) |
| `warn_at_percent` | u8 | `80` | Warn when spending reaches this percentage of a limit |
| `allow_override` | bool | `false` | Allow requests to exceed the budget with the `--override` flag |
```toml
[cost]
enabled = true
daily_limit_usd = 25.0
monthly_limit_usd = 500.0
warn_at_percent = 80
```

## [reliability]

Retry and fallback chain configuration for resilient provider access.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `max_retries` | u32 | `3` | Maximum retry attempts for transient failures |
| `fallback_providers` | string[] | `[]` | Ordered list of fallback provider names |
```toml
[reliability]
max_retries = 3
fallback_providers = ["openai", "gemini"]
```

## [secrets]

Encrypted credential store using ChaCha20-Poly1305.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `encrypt` | bool | `true` | Encrypt API keys and tokens stored in the config |
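The section takes a single switch; for example (the value shown is the default):

```toml
[secrets]
encrypt = true
```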

## [auth]

External credential import settings.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `codex_auth_json_auto_import` | bool | `true` | Auto-import OAuth credentials from the Codex CLI `auth.json` |
| `codex_auth_json_path` | string | `"~/.codex/auth.json"` | Path to the Codex CLI auth file |
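An illustrative fragment (the values shown are the documented defaults):

```toml
[auth]
codex_auth_json_auto_import = true
codex_auth_json_path = "~/.codex/auth.json"
```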

## [proxy]

Outbound HTTP/HTTPS/SOCKS5 proxy configuration.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `enabled` | bool | `false` | Enable the proxy |
| `http_proxy` | string? | null | HTTP proxy URL |
| `https_proxy` | string? | null | HTTPS proxy URL |
| `all_proxy` | string? | null | Fallback proxy for all schemes |
| `no_proxy` | string[] | `[]` | Bypass list (same format as `NO_PROXY`) |
| `scope` | string | `"zeroclaw"` | Scope: `"environment"`, `"zeroclaw"`, `"services"` |
| `services` | string[] | `[]` | Service selectors when `scope` is `"services"` |
```toml
[proxy]
enabled = true
https_proxy = "socks5://127.0.0.1:1080"
no_proxy = ["localhost", "127.0.0.1", "*.internal"]
scope = "zeroclaw"
```

Released under the Apache-2.0 License.