BrainLayer is a local-first memory layer that gives any MCP-compatible AI agent persistent memory across conversations. It indexes Claude Code session transcripts into a SQLite database with vector embeddings, enabling semantic search, task-aware retrieval, file history tracking, and session analysis. Everything runs locally — no cloud accounts, no API keys, no Docker. The enrichment pipeline uses local LLMs (Ollama/MLX) to generate 10-field metadata per chunk including summaries, tags, importance scores, and intent classification.
Semantic vectors (bge-large-en-v1.5) + FTS5 keyword search, fused with Reciprocal Rank Fusion.
GLM-4.7-Flash generates summaries, tags, importance scores, and intent classification per chunk.
- Conversation: CC sessions → JSONL
- Indexing: chunk + deduplicate
- Embedding: bge-large, 1024-dim
- Hybrid Search: vec + FTS5 + RRF
- MCP Tools: 7 tools for agents
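The "chunk + deduplicate" indexing step above can be sketched roughly as follows. This is an illustrative sketch, not BrainLayer's actual code: the function names, the character-based chunk size, and the SHA-256 content hash are all assumptions about how a transcript indexer like this might avoid re-indexing identical chunks.

```python
import hashlib

def chunk_messages(messages, max_chars=2000):
    """Greedily pack transcript messages into chunks of at most max_chars."""
    chunks, buf = [], ""
    for msg in messages:
        if buf and len(buf) + len(msg) > max_chars:
            chunks.append(buf)
            buf = ""
        buf += msg + "\n"
    if buf:
        chunks.append(buf)
    return chunks

def dedupe(chunks, seen=None):
    """Drop chunks whose content hash has already been indexed."""
    seen = set() if seen is None else seen
    out = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            out.append(chunk)
    return out
```

Hashing chunk content (rather than comparing text directly) keeps the seen-set small and lets the dedup state persist across indexing runs.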
pip install brainlayer

AI agents forget everything between sessions. Every architecture decision, debugging insight, and user preference is gone. Developers repeat themselves constantly, re-explaining context that should be remembered.
Built on SQLite + sqlite-vec: one .db file stores everything. No Docker, no database servers, no cloud accounts. Hybrid search combines semantic embeddings (bge-large-en-v1.5, 1024 dims) with FTS5 keyword search via Reciprocal Rank Fusion.
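The Reciprocal Rank Fusion step can be sketched in a few lines. This is a generic RRF implementation under the standard formula (score(d) = Σ 1/(k + rank(d)), commonly with k = 60), not BrainLayer's actual code; the inputs are assumed to be ranked lists of document IDs from the vector search and the FTS5 search.

```python
def rrf_fuse(vec_ranked, fts_ranked, k=60):
    """Fuse two ranked lists of doc IDs with Reciprocal Rank Fusion.

    Each document scores 1/(k + rank) per list it appears in;
    documents found by both searches rise to the top.
    """
    scores = {}
    for ranking in (vec_ranked, fts_ranked):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Because RRF only needs ranks, not raw scores, it sidesteps the problem of normalizing cosine similarity against BM25-style FTS5 scores.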
14 MCP tools organized into an Intelligence Layer (think, recall, store, sessions) and a Search Layer (search, context, file_timeline, operations, regression). Any MCP-compatible editor — Claude Code, Cursor, Zed, VS Code — gets instant memory.
Every indexed chunk gets enriched with 10 metadata fields (summary, tags, importance, intent, symbols, epistemic_level, debt_impact, and more) using local LLMs — Ollama or MLX on Apple Silicon. Zero cloud dependency.
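A single enrichment call against a local Ollama server might look like the sketch below. The prompt wording, the field subset, and the model name are hypothetical; the endpoint (`/api/generate` with `"format": "json"` and `"stream": false`) is Ollama's documented REST API. The request assumes an Ollama server is already running on its default port.

```python
import json
import urllib.request

FIELDS = ["summary", "tags", "importance", "intent"]  # subset of the 10 fields

def build_prompt(chunk):
    """Ask the local model for a JSON object with the enrichment fields."""
    return (
        "Return a JSON object with keys "
        + ", ".join(FIELDS)
        + " describing this conversation chunk:\n\n"
        + chunk
    )

def enrich(chunk, model="llama3.2", url="http://localhost:11434/api/generate"):
    """One metadata-enrichment call against a local Ollama server."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(chunk),
        "format": "json",   # Ollama's structured-output mode
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
        return json.loads(body["response"])  # the model's JSON answer
```

Everything stays on the local machine: the chunk text never leaves localhost, which is what "zero cloud dependency" amounts to in practice.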
268,000+ conversation chunks indexed from 9 projects. Session-level analysis extracts decisions, corrections, and learnings. Regression detection tracks what changed since a file last worked.
2 MCP tools (voice_speak + voice_ask), 5 voice modes, whisper.cpp STT (~300ms), edge-tts, macOS Voice Bar widget, session booking. 236 tests. bunx voicelayer-mcp.
Autonomous AI agent ecosystem — 11 packages + 4 external repos, 7 domain agents, 335K+ memory chunks, multi-LLM routing, Telegram integration. 912 tests.
3 core (search, store, recall) + 4 knowledge graph (digest, entity, update, person lookup). Consolidated from 14 to 7. Old names still work via aliases.
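The backwards-compatible alias layer described above can be as simple as a name-resolution table consulted before dispatch. The specific old names below are hypothetical examples, not the project's real mapping:

```python
# Hypothetical old-name → new-name table; the real mapping lives in the server.
TOOL_ALIASES = {
    "memory_search": "search",
    "memory_store": "store",
    "memory_recall": "recall",
}

def resolve_tool(name):
    """Map a legacy tool name to its consolidated name; pass new names through."""
    return TOOL_ALIASES.get(name, name)
```

Resolving aliases at dispatch time means existing agent configurations keep working without re-registration after the 14-to-7 consolidation.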
Entity extraction, relation mapping, person lookup, and sentiment analysis. 119 entities across people, projects, and technologies.