
Embeddings Memory Backend

The embeddings backend stores memories as vector embeddings, enabling semantic similarity search. This is the most powerful recall mechanism, allowing agents to find contextually relevant memories even when exact keywords do not match.

Overview

The embeddings backend:

  • Converts memory text into dense vector representations
  • Stores vectors in a local or remote vector database
  • Retrieves memories by cosine similarity to the current query
  • Supports multiple embedding providers (Ollama, OpenAI, etc.)
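
Cosine similarity, the measure used for retrieval above, scores the angle between two vectors: 1.0 means the vectors point in the same direction, 0.0 means they are unrelated. A minimal sketch in plain Python:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|): 1.0 for identical directions, 0.0 for orthogonal
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # same direction, similarity ~1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # orthogonal, similarity 0.0
```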

How It Works

  1. When a memory is stored, its text is sent to an embedding model
  2. The resulting vector is stored alongside the original text
  3. During recall, the current context is embedded and compared against stored vectors
  4. The top-K most similar memories are returned
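
The four steps above can be sketched end to end. Note that toy_embed below is a hypothetical stand-in for a real embedding model (an actual deployment would call a model such as nomic-embed-text); the store/recall flow is a simplified illustration of the backend's behavior, including top_k and similarity_threshold:

```python
import math

def toy_embed(text):
    # Hypothetical embedding: word counts over a tiny fixed vocabulary.
    # A real backend sends the text to an embedding model instead.
    vocab = ["cat", "dog", "weather", "meeting"]
    tokens = text.lower().split()
    return [float(tokens.count(word)) for word in vocab]

def cosine(a, b):
    # Cosine similarity with a guard for zero vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class MemoryStore:
    def __init__(self, embed, top_k=10, threshold=0.5):
        self.embed = embed
        self.top_k = top_k
        self.threshold = threshold
        self.entries = []  # (original text, vector) pairs

    def store(self, text):
        # Steps 1-2: embed the memory, keep the vector alongside the text
        self.entries.append((text, self.embed(text)))

    def recall(self, query):
        # Steps 3-4: embed the query, rank stored memories by similarity,
        # drop anything below the threshold, return the top-K texts
        query_vec = self.embed(query)
        scored = [(cosine(query_vec, vec), text) for text, vec in self.entries]
        scored = [(score, text) for score, text in scored if score >= self.threshold]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [text for _, text in scored[: self.top_k]]

store = MemoryStore(toy_embed, top_k=2, threshold=0.5)
store.store("the cat knocked over a plant")
store.store("the dog barked at the mailman")
print(store.recall("where is the cat"))  # -> ['the cat knocked over a plant']
```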

Configuration

```toml
[memory]
backend = "embeddings"

[memory.embeddings]
provider = "ollama"
model = "nomic-embed-text"
dimension = 768
top_k = 10
similarity_threshold = 0.5

[memory.embeddings.store]
type = "sqlite-vec"  # or "pgvector"
path = "~/.local/share/openprx/embeddings.db"
```

Supported Embedding Providers

Provider | Model                  | Dimensions
---------|------------------------|-----------
Ollama   | nomic-embed-text       | 768
OpenAI   | text-embedding-3-small | 1536
OpenAI   | text-embedding-3-large | 3072
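
As a sketch of how a provider is queried, the snippet below targets Ollama's local embeddings endpoint (default port 11434). The endpoint path and response shape can vary across Ollama versions, so treat this as an illustration rather than the backend's actual client code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default local Ollama endpoint

def build_request(model, text):
    # The embeddings endpoint takes the model name and the text to embed
    payload = {"model": model, "prompt": text}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def embed(model, text):
    # Requires a running Ollama server with the model already pulled
    with urllib.request.urlopen(build_request(model, text)) as resp:
        return json.load(resp)["embedding"]  # a list of floats, e.g. 768 for nomic-embed-text
```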

See Also

Released under the Apache-2.0 License.