LogosDB is a fast semantic vector database written in C/C++ that provides approximate nearest-neighbor search over embedding vectors with associated text metadata.
Authors: Jose (@jose-compu)
Contributing: See CONTRIBUTING.md for build instructions, style guide, and PR workflow.
- Vectors and metadata are stored as flat binary files, memory-mapped for zero-copy reads.
- Approximate nearest-neighbor search via HNSW (hnswlib), O(log n) query time.
- Each vector row carries optional text and ISO 8601 timestamp metadata (JSONL sidecar).
- The basic operations are `Put(embedding, text, timestamp)` and `Search(query, top_k)`.
- Timestamp range filtering: search within a time window (e.g., "last 24 hours").
- Multiple distance metrics: inner product, cosine similarity (auto-normalized), or L2 Euclidean.
- Bulk vector access for direct tensor construction (e.g. loading into GPU memory).
- Thread-safe writes via internal mutex; concurrent reads are lock-free.
- Crash recovery: HNSW index is automatically backfilled from the append-only vector store on open.
- Scales to millions of vectors.
- Framework integrations: LangChain and LlamaIndex VectorStore adapters.
- MCP server: first-class Claude Code integration via `logosdb-mcp-server`.
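The timestamp-filtered search above maps onto a common ANN pattern: over-fetch `candidate_k` results, then post-filter by timestamp and truncate to `top_k`. LogosDB's internal strategy is not spelled out here, so treat this pure-Python sketch as one plausible reading of the `candidate_k` knob, not the actual implementation:

```python
from datetime import datetime

def search_ts_range(search_fn, query, top_k, ts_from, ts_to, candidate_k=None):
    """Over-fetch-then-filter sketch of timestamp-range search.

    search_fn(query, k) -> [(score, iso_timestamp), ...], best first.
    """
    k = candidate_k or 10 * top_k  # matches the "10x top_k recommended" guidance
    lo = datetime.fromisoformat(ts_from.replace("Z", "+00:00")) if ts_from else None
    hi = datetime.fromisoformat(ts_to.replace("Z", "+00:00")) if ts_to else None
    hits = []
    for score, ts in search_fn(query, k):
        t = datetime.fromisoformat(ts.replace("Z", "+00:00"))
        if (lo is None or t >= lo) and (hi is None or t <= hi):
            hits.append((score, ts))
    return hits[:top_k]

# Fake ANN search over three timestamped rows, best score first.
rows = [(0.9, "2025-04-21T12:00:00Z"), (0.8, "2025-03-01T00:00:00Z"),
        (0.7, "2025-04-22T09:00:00Z")]
out = search_ts_range(lambda q, k: rows[:k], None, top_k=2,
                      ts_from="2025-04-21T10:00:00Z", ts_to="2025-04-22T10:00:00Z")
assert out == [(0.9, "2025-04-21T12:00:00Z"), (0.7, "2025-04-22T09:00:00Z")]
```

The filter can return fewer than `top_k` hits when the window is narrow, which is why a generous `candidate_k` is recommended.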
The public interface is in include/logosdb/logosdb.h. Callers should not include or rely on the details of any other header files in this package. Those internal APIs may be changed without warning.
Planning a deployment? See docs/sizing.md for disk/RAM estimates based on N×dim, or run python -m logosdb.sizing --rows 1_000_000 --dim 768.
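As a back-of-envelope check (this is not the formula used by the real `logosdb.sizing` tool, which may account for index and metadata overhead), raw float32 vector storage is roughly rows × dim × 4 bytes:

```python
def vector_store_bytes(rows: int, dim: int) -> int:
    # Raw float32 vector payload only; the HNSW index and JSONL metadata add more.
    return rows * dim * 4

# 1M rows x 768 dims -> ~3.07 GB of raw vector data on disk
assert vector_store_bytes(1_000_000, 768) == 3_072_000_000
```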
Guide to header files:
- include/logosdb/logosdb.h: Main interface to the DB. Start here. Contains:
  - C API with opaque handles and `errptr` convention (RocksDB/LevelDB style)
  - C++ convenience wrapper (`logosdb::DB`) with RAII and exceptions
  - `logosdb::Options` for HNSW tuning and distance metric selection
  - `logosdb::SearchHit` result struct
  - `logosdb_search_ts_range()` for timestamp-filtered search
- This is not a general-purpose vector database. It is purpose-built for embedding-based memory retrieval in LLM inference (funes.cpp).
- Only a single process (possibly multi-threaded) can access a particular database at a time.
- There is no client-server support built into the library. An application that needs such support will have to wrap its own server around the library.
- For inner-product distance (`LOGOSDB_DIST_IP`, the default), vectors must be L2-normalized before insertion. Use `LOGOSDB_DIST_COSINE` for automatic normalization.
- Embedding generation is external; the caller provides pre-computed float vectors.
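The inner-product/cosine distinction above rests on one identity: the inner product of two L2-normalized vectors equals their cosine similarity. A minimal pure-Python sketch (not LogosDB's implementation) of why the two distance modes agree after normalization:

```python
import math

def l2_normalize(v):
    """Scale v to unit length; LOGOSDB_DIST_COSINE does this automatically."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        raise ValueError("cannot normalize a zero vector")
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = [3.0, 4.0], [4.0, 3.0]
cosine = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
ip_normalized = dot(l2_normalize(a), l2_normalize(b))
assert abs(cosine - ip_normalized) < 1e-12  # identical up to rounding
```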
```
git clone --recurse-submodules <repository-url>
cd logosdb
```

This project supports CMake out of the box.
Quick start:
```
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release .. && cmake --build .
```

This builds:
| Target | Description |
|---|---|
| `logosdb` | Static library (`liblogosdb.a`) |
| `logosdb-cli` | Command-line tool for put, search, info |
| `logosdb-bench` | Benchmark: HNSW vs brute-force, with ChromaDB comparison |
| `logosdb-test` | Unit tests |
```c
#include <logosdb/logosdb.h>

char *err = NULL;
logosdb_options_t *opts = logosdb_options_create();
logosdb_options_set_dim(opts, 2048);
logosdb_t *db = logosdb_open("/tmp/mydb", opts, &err);
logosdb_options_destroy(opts);

float vec[2048] = { /* ... unnormalized vector ... */ };

// L2-normalize for inner-product distance (returns 0 on success, -1 if zero norm)
if (logosdb_l2_normalize(vec, 2048) == 0) {
    logosdb_put(db, vec, 2048, "My commute is 42 minutes",
                "2025-06-25T10:00:00Z", &err);
}

logosdb_search_result_t *res = logosdb_search(db, query_vec, 2048, 5, &err);
for (int i = 0; i < logosdb_result_count(res); i++) {
    printf("#%d score=%.4f text=%s\n", i,
           logosdb_result_score(res, i),
           logosdb_result_text(res, i));
}
logosdb_result_free(res);
logosdb_close(db);
```

```c
#include <logosdb/logosdb.h>

char *err = NULL;
logosdb_options_t *opts = logosdb_options_create();
logosdb_options_set_dim(opts, 2048);
logosdb_t *db = logosdb_open("/tmp/mydb", opts, &err);

// Search for top-5 matches within the last 24 hours
logosdb_search_result_t *res = logosdb_search_ts_range(
    db, query_vec, 2048, 5,
    "2025-04-21T10:00:00Z",  // from (inclusive), NULL for no lower bound
    "2025-04-22T10:00:00Z",  // to (inclusive), NULL for no upper bound
    50,                      // candidate_k: internal fetch multiplier (10x top_k recommended)
    &err);
for (int i = 0; i < logosdb_result_count(res); i++) {
    printf("#%d score=%.4f ts=%s text=%s\n", i,
           logosdb_result_score(res, i),
           logosdb_result_timestamp(res, i),
           logosdb_result_text(res, i));
}
logosdb_result_free(res);
logosdb_close(db);
```

```c
#include <logosdb/logosdb.h>

char *err = NULL;
logosdb_options_t *opts = logosdb_options_create();
logosdb_options_set_dim(opts, 2048);

// Use cosine similarity (automatically normalizes vectors)
logosdb_options_set_distance(opts, LOGOSDB_DIST_COSINE);
// Or use L2 Euclidean distance
// logosdb_options_set_distance(opts, LOGOSDB_DIST_L2);
// Default is LOGOSDB_DIST_IP (inner product on L2-normalized vectors)

logosdb_t *db = logosdb_open("/tmp/mydb", opts, &err);
logosdb_options_destroy(opts);

// For cosine: vectors are automatically normalized on put/search
float vec[2048] = { /* ... unnormalized vector ... */ };
logosdb_put(db, vec, 2048, "entry", "2025-04-22T10:00:00Z", &err);
logosdb_close(db);
```

```cpp
#include <logosdb/logosdb.h>
#include <vector>

// Basic usage with default inner-product distance
logosdb::DB db("/tmp/mydb", {.dim = 2048});

// L2-normalize your vectors before insertion (required for inner-product distance)
std::vector<float> embedding = load_some_vector();  // unnormalized
if (logosdb::l2_normalize(embedding)) {
    db.put(embedding, "My commute is 42 minutes", "2025-06-25T10:00:00Z");
}

// Or use l2_normalized() to get a normalized copy
auto normalized = logosdb::l2_normalized(query);
auto results = db.search(normalized, 5);
for (auto &r : results) {
    printf("id=%llu score=%.4f text=%s\n", r.id, r.score, r.text.c_str());
}
```

```cpp
#include <logosdb/logosdb.h>

logosdb::DB db("/tmp/mydb", {.dim = 2048});

// Search within a time window
auto results = db.search_ts_range(
    query, 5,
    "2025-04-21T00:00:00Z",  // from timestamp
    "2025-04-22T00:00:00Z",  // to timestamp
    50);                     // candidate_k (optional, defaults to 10x top_k)
for (auto &r : results) {
    printf("id=%llu score=%.4f ts=%s\n", r.id, r.score, r.timestamp.c_str());
}
```

```cpp
#include <logosdb/logosdb.h>

// Cosine similarity - vectors are automatically normalized
logosdb::DB db("/tmp/mydb", {.dim = 2048, .distance = LOGOSDB_DIST_COSINE});

// Put unnormalized vectors - they will be normalized automatically
db.put(unnormalized_embedding, "entry", "2025-04-22T10:00:00Z");
auto results = db.search(query, 5);
// scores are cosine similarities in [0, 1]

// L2 Euclidean distance
// logosdb::DB db("/tmp/mydb", {.dim = 2048, .distance = LOGOSDB_DIST_L2});
```

LogosDB ships Python bindings built with pybind11 and scikit-build-core.
Install from PyPI (binary wheels provided for Linux x86_64/aarch64 and macOS x86_64/arm64 on CPython 3.9–3.13):
```
pip install logosdb
```

Or build from source in a clone:

```
pip install .
```

Usage:
```python
import logosdb
from sentence_transformers import SentenceTransformer

# Local Hugging Face embeddings model (runs on your machine)
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
dim = model.get_sentence_embedding_dimension()

# Use cosine distance so LogosDB auto-normalizes vectors
db = logosdb.DB("/tmp/agent_memory", dim=dim, distance=logosdb.DIST_COSINE)

# Three learnings captured by an AI agent
learnings = [
    ("Retrying API calls with exponential backoff reduced transient failures by 42%.", "2026-05-06T09:00:00Z"),
    ("Splitting long tasks into smaller batches improved throughput and lowered memory spikes.", "2026-05-06T09:05:00Z"),
    ("Adding idempotency keys prevented duplicate writes during network retries.", "2026-05-06T09:10:00Z"),
]
for text, ts in learnings:
    emb = model.encode(text).astype("float32")
    db.put(emb, text=text, timestamp=ts)

# Ask a natural-language question
question = "How can we avoid duplicate writes when retries happen?"
q_emb = model.encode(question).astype("float32")
hits = db.search(q_emb, top_k=3)
for h in hits:
    print(f"{h.score:.4f} {h.text}")
```

```python
import numpy as np
import logosdb

# With cosine distance, vectors are automatically normalized
db = logosdb.DB("/tmp/mydb", dim=128, distance=logosdb.DIST_COSINE)

# No need to normalize - just put raw vectors
v = np.random.randn(128).astype(np.float32)
rid = db.put(v, text="unnormalized vector", timestamp="2025-04-22T10:00:00Z")

# Search also works with unnormalized queries
query = np.random.randn(128).astype(np.float32)
hits = db.search(query, top_k=5)
```

Run the Python tests and examples:

```
pip install ".[test]"
pytest tests/python/
python examples/python/basic_usage.py

# sentence-transformers demo (optional heavy dep)
pip install ".[examples]"
python examples/python/sentence_transformers_demo.py
```

LogosDB is designed for memory-efficient retrieval-augmented generation (RAG) that runs entirely on your hardware.
LogosDB uses mmap() for zero-copy access. Your RAM usage scales with query patterns, not dataset size:
| Dataset | Dim | Disk | Typical Query RAM |
|---|---|---|---|
| 100K | 384 | 153 MB | <20 MB |
| 1M | 384 | 1.5 GB | <100 MB |
| 10M | 384 | 15 GB | <200 MB |
The OS caches hot index pages; cold data stays on disk. No explicit loading/unloading needed.
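The zero-copy behavior can be demonstrated with the stdlib `mmap` module: mapping a flat vector file costs almost no RAM, and reading one row faults in only that row's pages. This is a toy sketch of a fixed-stride float32 store, not LogosDB's actual on-disk format (which lives in src/storage.cpp):

```python
import mmap
import os
import struct
import tempfile

dim = 384
rows = 2_000
row_bytes = dim * 4  # float32

# Build a flat binary vector file (row-major float32), like a fixed-stride store.
path = os.path.join(tempfile.mkdtemp(), "vectors.bin")
with open(path, "wb") as f:
    for i in range(rows):
        f.write(struct.pack(f"{dim}f", *([float(i)] * dim)))

# Map it read-only; no vector data is copied until a page is actually touched.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    row = 1234
    vec = struct.unpack_from(f"{dim}f", m, row * row_bytes)
    assert vec[0] == 1234.0  # only this row's pages were faulted in
```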
```python
import numpy as np
import logosdb
from sentence_transformers import SentenceTransformer

# 1. Load model (runs locally)
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
dim = model.get_sentence_embedding_dimension()

# 2. Create DB with cosine distance (auto-normalizes)
db = logosdb.DB("/data/knowledge", dim=dim, distance=logosdb.DIST_COSINE)

# 3. Index documents
for text in documents:
    emb = model.encode(text)
    db.put(emb, text=text)  # Auto-normalized with cosine distance

# 4. Query (only touched pages load into RAM)
query_emb = model.encode("What is HNSW?")
for hit in db.search(query_emb, top_k=3):
    print(f"{hit.score:.4f} {hit.text}")
```

See docs/rag-on-prem.md for the complete guide, including:
- Time-sharding for infinite retention
- External quantization patterns
- Architecture patterns for production
See docs/sizing.md for detailed disk/RAM formulas and the python -m logosdb.sizing calculator.
Run the memory-efficient RAG example:
```
pip install ".[examples]"
python examples/python/memory_efficient_rag.py
```

For the LlamaIndex integration, install the extra:

```
pip install 'logosdb[llama-index]'
```

```python
from logosdb import LogosDBIndex
from llama_index.core import Document
from llama_index.core.schema import TextNode
from llama_index.core.vector_stores import VectorStoreQuery
import numpy as np

# Create the vector store
db = LogosDBIndex(uri="/tmp/mydb", dim=128)

# Add nodes with pre-computed embeddings
node = TextNode(
    text="My commute is 42 minutes",
    embedding=np.random.randn(128).astype(np.float32).tolist(),
    metadata={"timestamp": "2025-04-28T10:00:00Z"}
)
db.add([node])

# Query
query_emb = np.random.randn(128).astype(np.float32).tolist()
query = VectorStoreQuery(query_embedding=query_emb, similarity_top_k=5)
results = db.query(query)
for node, score in zip(results.nodes, results.similarities):
    print(f"Score: {score:.4f}, Text: {node.text}")

# Timestamp range filtering
results = db.query(query, ts_from="2025-04-01T00:00:00Z", ts_to="2025-04-30T23:59:59Z")
```

The LogosDBIndex class implements LlamaIndex's VectorStore interface, supporting:

- `add(nodes)` - Add nodes with embeddings
- `delete(node_id)` - Delete by node ID
- `query(VectorStoreQuery)` - Similarity search by vector
- `count()` / `len(store)` - Number of live documents
- Timestamp filtering via `ts_from` and `ts_to` kwargs
```
# Database info
logosdb-cli info /tmp/mydb

# Search with a binary query vector file
logosdb-cli search /tmp/mydb --query-file q.bin --top-k 5
```

`logosdb-mcp-server` is a Model Context Protocol server that exposes LogosDB to Claude Code (and any other MCP client) over stdio. It lets Claude index files, persist knowledge across sessions, and do semantic search without leaving the conversation.
1. Add to .claude/mcp.json in your project (or to ~/.claude.json for global use):
```json
{
  "mcpServers": {
    "logosdb": {
      "command": "npx",
      "args": ["-y", "logosdb-mcp-server"],
      "env": {
        "LOGOSDB_PATH": "./.logosdb"
      }
    }
  }
}
```

By default the MCP server uses local Transformers.js embeddings (no API keys). Add EMBEDDING_PROVIDER / keys only if you want cloud or Ollama; see mcp/README.md.
Google Antigravity: same stdio + npx setup; the step-by-step guide is in the "Google Antigravity" section of mcp/README.md.
2. Start Claude Code — the server is launched automatically on first tool call.
3. Use it in conversation:
> Index the src/ directory into a "code" namespace
> Find where JWT tokens are validated
> Remember that we decided to use UUIDs for all primary keys
| Variable | Default | Description |
|---|---|---|
| `LOGOSDB_PATH` | `./.logosdb` | Root directory for all namespace databases |
| `EMBEDDING_PROVIDER` | (local) | Omit for Transformers.js on-device; or `ollama`, `openai`, `voyage` |
| `TRANSFORMERS_MODEL` | `Xenova/all-MiniLM-L6-v2` | Local embedding model (bundled MCP path) |
| `OLLAMA_*` | — | See mcp/README.md when using Ollama |
| `OPENAI_API_KEY` | — | Required when `EMBEDDING_PROVIDER=openai` |
| `VOYAGE_API_KEY` | — | Required when `EMBEDDING_PROVIDER=voyage` |
| `LOGOSDB_CHUNK_SIZE` | `800` | Target characters per chunk for file indexing |
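LOGOSDB_CHUNK_SIZE is a target, not a hard limit, and the MCP server's real chunker is not documented here. A hypothetical character-budget chunker that packs whole paragraphs up to the target might look like:

```python
def chunk_text(text: str, target: int = 800) -> list[str]:
    """Greedy paragraph packing toward a character budget (illustrative only)."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Flush when adding this paragraph would exceed the budget.
        if current and len(current) + len(para) + 2 > target:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "\n\n".join(f"Paragraph {i} " + "x" * 200 for i in range(10))
chunks = chunk_text(doc, target=800)
assert len(chunks) == 4 and all(len(c) <= 800 for c in chunks)
```

A single paragraph longer than the target would still become its own oversized chunk here; a production chunker would also split on sentence boundaries.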
Voyage AI (voyage-3, dim=1024) is Anthropic's recommended cloud embedding model:
```json
"env": {
  "LOGOSDB_PATH": "./.logosdb",
  "EMBEDDING_PROVIDER": "voyage",
  "VOYAGE_API_KEY": "<your-voyage-api-key>"
}
```

| Tool | Description |
|---|---|
| `logosdb_index` | Embed and store a text snippet in a namespace |
| `logosdb_index_file` | Chunk, embed, and store an entire file |
| `logosdb_search` | Semantic search across a namespace |
| `logosdb_list` | List all namespaces |
| `logosdb_info` | Stats for a namespace (count, dimension, path) |
| `logosdb_delete` | Delete an entry by row ID |

```
npm install -g logosdb-mcp-server
```

Then replace the command/args in mcp.json with:

```json
"command": "logosdb-mcp-server",
"args": []
```

Here is a performance report from the included logosdb-bench program. The results are somewhat noisy, but should be enough to get a ballpark performance estimate.
We use databases with 1K, 10K, and 100K vectors. Each vector has 2048 dimensions (matching typical LLM embedding sizes). Vectors are L2-normalized random unit vectors.
```
LogosDB: version 0.5.0
CPU: Apple M-series (ARM64)
Dim: 2048
HNSW M: 16, ef_construction: 200, ef_search: 50
```
```
put (1K vectors):   ~50 µs/op  (~20,000 inserts/sec)
put (10K vectors):  ~80 µs/op  (~12,500 inserts/sec)
put (100K vectors): ~120 µs/op (~8,300 inserts/sec)
```
Each "op" above corresponds to a write of a single vector + metadata + HNSW index update.
```
HNSW top-5 (1K):          ~0.1 ms/query
HNSW top-5 (10K):         ~0.3 ms/query
HNSW top-5 (100K):        ~1.2 ms/query

Brute-force top-5 (1K):   ~0.3 ms/query
Brute-force top-5 (10K):  ~2.5 ms/query
Brute-force top-5 (100K): ~25 ms/query
```
HNSW maintains sub-linear scaling while brute-force grows linearly with database size. At 100K vectors, HNSW is roughly 20x faster.
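The brute-force baseline is just an exact linear scan: score every stored vector against the query and keep the best k, which is why its cost grows linearly with N. A minimal pure-Python version of that baseline (not the benchmark's actual code):

```python
import heapq
import math
import random

def brute_force_top_k(db, query, k):
    """Exact search: score every row against the query. O(N * dim) per query."""
    scored = ((sum(q * x for q, x in zip(query, v)), i) for i, v in enumerate(db))
    return heapq.nlargest(k, scored)  # [(score, row_id), ...], best first

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

random.seed(0)
db = [unit([random.gauss(0, 1) for _ in range(16)]) for _ in range(1000)]
hits = brute_force_top_k(db, db[42], k=5)
assert hits[0][1] == 42  # on unit vectors IP is cosine; self-similarity is 1.0
```

HNSW avoids the full scan by greedily walking a layered proximity graph, visiting only a small, roughly logarithmic fraction of the nodes per query.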
```
logosdb-bench --dim 2048 --counts 1000,10000,100000
```

| Metric | ChromaDB | LogosDB |
|---|---|---|
| Language | Python + C (hnswlib) | Pure C/C++ |
| Search algorithm | HNSW | HNSW (same hnswlib) |
| Storage | SQLite + Parquet | Binary mmap + JSONL |
| Startup overhead | Python runtime + deps | Zero (linked library) |
| Embedding generation | Built-in (Sentence Transformers) | External (caller provides vectors) |
| Target use case | General-purpose vector store | Embedded LLM inference memory |
| Search latency (100K, dim=2048) | ~5-10 ms | ~1-3 ms |
| Memory footprint (100K, dim=2048) | ~1.5 GB (Python + SQLite) | ~800 MB (mmap) |
| Cold start | ~2-5 s (Python imports) | <10 ms |
| Dependencies | Python, NumPy, SQLite, hnswlib | hnswlib (header-only, vendored) |
LogosDB uses the same HNSW implementation as ChromaDB (hnswlib) but eliminates Python overhead, SQLite serialization, and Sentence Transformer coupling. The result is a leaner library optimized for the single use case of embedded semantic memory for LLM inference.
```
include/logosdb/logosdb.h      Public C/C++ API (start here)
src/logosdb.cpp                Core engine: wires storage + index + metadata
src/storage.h / storage.cpp    Fixed-stride binary vector file with mmap
src/metadata.h / metadata.cpp  Append-only JSONL text + timestamp store
src/hnsw_index.h / .cpp        Thin wrapper around hnswlib
tools/logosdb-cli.cpp          Command-line interface
tools/logosdb-bench.cpp        Benchmark tool
tests/test_basic.cpp           C++ unit tests
tests/python/test_smoke.py     Python smoke tests (pytest)
python/src/bindings.cpp        pybind11 Python bindings
python/logosdb/                Python package (logosdb._core + stubs)
examples/python/               Python usage examples
pyproject.toml                 Python build/config (scikit-build-core)
third_party/hnswlib/           Vendored hnswlib (header-only)
mcp/                           MCP server (logosdb-mcp-server npm package)
.claude/mcp.json               Example Claude Code MCP configuration
CHANGELOG                      Release history
LICENSE                        MIT license text
```
We welcome contributions. See CONTRIBUTING.md for:
- Building from source
- Running tests and benchmarks
- Code style and PR workflow
Please review our Code of Conduct and Security Policy.
MIT — see LICENSE for the full text.