# Embeddings Memory Backend
The embeddings backend stores memories as vector embeddings, enabling semantic similarity search. This is the most powerful recall mechanism, allowing agents to find contextually relevant memories even when exact keywords do not match.
## Overview
The embeddings backend:
- Converts memory text into dense vector representations
- Stores vectors in a local or remote vector database
- Retrieves memories by cosine similarity to the current query
- Supports multiple embedding providers (Ollama, OpenAI, etc.)
## How It Works
1. When a memory is stored, its text is sent to an embedding model
2. The resulting vector is stored alongside the original text
3. During recall, the current context is embedded and compared against stored vectors
4. The top-K most similar memories are returned
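The store/recall cycle above can be sketched in a few lines. This is an illustrative in-memory version only (the class and function names are hypothetical, not the backend's actual API); the real backend persists vectors to sqlite-vec or pgvector:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class EmbeddingMemory:
    """Toy in-memory sketch of the embeddings backend."""

    def __init__(self, embed_fn, top_k=10, threshold=0.5):
        self.embed_fn = embed_fn      # text -> vector
        self.top_k = top_k
        self.threshold = threshold    # similarity_threshold from the config
        self.entries = []             # list of (text, vector)

    def store(self, text):
        # Step 1-2: embed the text and keep the vector with the original
        self.entries.append((text, self.embed_fn(text)))

    def recall(self, query):
        # Step 3: embed the current context and score every stored memory
        qv = self.embed_fn(query)
        scored = [(cosine_similarity(qv, v), t) for t, v in self.entries]
        # Step 4: drop weak matches, return the top-K most similar texts
        scored = [(s, t) for s, t in scored if s >= self.threshold]
        scored.sort(reverse=True)
        return [t for _, t in scored[: self.top_k]]
```

In practice `embed_fn` would call the configured provider; here any deterministic text-to-vector function works for experimentation.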
## Configuration
```toml
[memory]
backend = "embeddings"

[memory.embeddings]
provider = "ollama"
model = "nomic-embed-text"
dimension = 768
top_k = 10
similarity_threshold = 0.5

[memory.embeddings.store]
type = "sqlite-vec" # or "pgvector"
path = "~/.local/share/openprx/embeddings.db"
```
## Supported Embedding Providers
| Provider | Model | Dimensions |
|---|---|---|
| Ollama | nomic-embed-text | 768 |
| OpenAI | text-embedding-3-small | 1536 |
| OpenAI | text-embedding-3-large | 3072 |
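With the default Ollama provider, embeddings come from a local server. A minimal standard-library sketch (the function names here are illustrative; it assumes a running Ollama server with the model pulled, e.g. `ollama pull nomic-embed-text`):

```python
import json
import urllib.request

def build_payload(model: str, text: str) -> dict:
    """Build the JSON body for Ollama's /api/embeddings endpoint."""
    return {"model": model, "prompt": text}

def embed_with_ollama(text: str, model: str = "nomic-embed-text",
                      host: str = "http://localhost:11434") -> list[float]:
    """Return the embedding vector for `text` from a local Ollama server.

    The response carries the vector under the "embedding" key; for
    nomic-embed-text it has 768 dimensions, matching the config above.
    """
    req = urllib.request.Request(
        f"{host}/api/embeddings",
        data=json.dumps(build_payload(model, text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

Swapping in OpenAI means a different endpoint and an API key, but the shape is the same: text in, fixed-dimension vector out.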