# g1-brain 🧬
The memory and intelligence layer — graph memory, 4-stream knowledge retrieval, and structured reasoning for the Generate One platform.
## ✨ Overview
g1-brain provides the platform's memory and intelligence layer. It combines graph-based persistent memory (Graphiti + Neo4j) for context across sessions, vector search (Qdrant) with 4-stream Reciprocal Rank Fusion for document retrieval, and structured reasoning tools for multi-step analysis. This is what gives the Generate One platform long-term memory, deep knowledge retrieval, and the ability to reason over facts over time.
Two compose stacks are managed from this repo: memory-stack (Neo4j, Graphiti API, Redis, FalkorDB) and knowledge-mcp (knowledge-mcp server, async worker, Valkey queue).
## 🏗️ Architecture

```mermaid
graph TD
    subgraph "Memory Stack (z0ww84wkwwss8sw4kgsw88gc)"
        Neo4j["Neo4j 5.26\n(graph DB, ports 7474/7687)"]
        GA["Graphiti API\n:8000 → memory.generate.one"]
        Redis["Redis 7-alpine\n(Graphiti cache, 256MB LRU)"]
        FalkorDB["FalkorDB\n(graph query engine)"]
        GA --> Neo4j
        GA --> Redis
    end
    subgraph "Knowledge Stack (kp3basi7wsdztrq1fgm7t543)"
        KM["knowledge-mcp\n:8000"]
        KW["knowledge-worker\n(async ingest pipeline)"]
        KV["Valkey 9.0.1\n(task queue)"]
        KW --> KV
        KM --> KV
    end
    subgraph "MCP Tools"
        GMCP["graphiti-mcp"]
        RMCP["reasoning-tools"]
        LMCP["logic-lm"]
        CMCP["cms-tools\n(Directus)"]
    end
    GMCP --> GA
    KM -.-> Qdrant["g1-llm / Qdrant"]
    KW -.-> Qdrant
    KW -.-> LiteLLM["g1-llm / LiteLLM"]
    Client["MCP Clients"] --> GMCP
    Client --> KM
    Client --> RMCP
    Client --> LMCP
    Client --> CMCP
```
## 📦 Services

### Memory Stack (`compose/memory-stack.yml`)

| Service | Image | Port | Description |
|---|---|---|---|
| graphiti-api | Custom build | 8000 | REST API for Graphiti graph operations → memory.generate.one |
| neo4j | `neo4j:5.26-community` | 7474, 7687 | Graph database for entity/relationship storage |
| redis | `redis:7-alpine` | 6379 | Graphiti internal cache (256 MB LRU) |
| falkordb | `falkordb/falkordb:latest` | — | Graph query engine (Cypher-compatible) |
### Knowledge Stack (`docker-compose.yml` — service kp3basi7wsdztrq1fgm7t543)

| Service | Image | Port | Description |
|---|---|---|---|
| knowledge-mcp | `git.generate.one/generate-one/knowledge-mcp:latest` | 8000 | MCP server with 7 search/ingest tools |
| worker | `git.generate.one/generate-one/knowledge-worker:latest` | — | Document ingest pipeline (OCR, chunking, embedding) |
| knowledge-valkey | `valkey/valkey:9.0.1-alpine` | 6379 | Task queue for async ingestion |
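The worker consumes ingest jobs from the Valkey list-based task queue. As a rough sketch of the producer side (the queue key name and job payload shape here are illustrative assumptions, not the actual internals of knowledge-worker — any Redis-protocol client such as redis-py or valkey-py would work):

```python
import json

# Hypothetical queue key; the real key is internal to the ingest pipeline.
QUEUE_KEY = "knowledge:ingest"

def enqueue_document(client, doc_path, tenant_id="shared"):
    """Push an ingest job onto the Valkey task queue via LPUSH.

    `client` is any Redis-protocol client object exposing lpush();
    the worker side would drain the queue with a blocking BRPOP.
    Returns the queue length after the push.
    """
    job = json.dumps({"path": doc_path, "tenant_id": tenant_id})
    return client.lpush(QUEUE_KEY, job)
```

LPUSH/BRPOP gives a simple FIFO when producer pushes left and consumer pops right, which is the conventional pattern for lightweight task queues on Redis-compatible stores.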
## 🔍 Knowledge Search: 4-Stream RRF
Queries are processed through 4 parallel retrieval streams, fused via Reciprocal Rank Fusion:
- Dense vectors — Semantic similarity via Qdrant
- BM25 sparse vectors — Keyword matching with IDF weighting
- Graphiti — Entity/relationship graph traversal
- Reranker — Cross-encoder rescoring (mxbai-rerank-large-v2)
Post-fusion temporal decay (30-day half-life) prioritizes recent information.
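The fusion step above can be sketched as follows. This is a minimal illustration of Reciprocal Rank Fusion plus the 30-day half-life decay, not the service's actual code; the `k = 60` smoothing constant is the conventional RRF default and an assumption here:

```python
import math
from collections import defaultdict
from datetime import datetime, timezone

def rrf_fuse(streams, k=60):
    """Fuse ranked doc-ID lists via Reciprocal Rank Fusion.

    streams: one ranked list per retrieval stream (best first).
    Each document accumulates sum(1 / (k + rank)) across streams,
    so agreement between streams outweighs any single high rank.
    """
    scores = defaultdict(float)
    for ranking in streams:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return dict(scores)

def apply_temporal_decay(scores, timestamps, half_life_days=30.0, now=None):
    """Scale each fused score by 0.5 ** (age_days / half_life).

    A document exactly one half-life old keeps 50% of its score;
    fresh documents are effectively unpenalized.
    """
    now = now or datetime.now(timezone.utc)
    decayed = {}
    for doc_id, score in scores.items():
        age_days = (now - timestamps[doc_id]).total_seconds() / 86400.0
        decayed[doc_id] = score * 0.5 ** (age_days / half_life_days)
    return dict(sorted(decayed.items(), key=lambda kv: kv[1], reverse=True))
```

Because RRF only consumes ranks, the four streams need no score normalization before fusion, which is the main reason RRF is a common choice for heterogeneous retrievers.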
## 🔧 Configuration

| Variable | Description | Default |
|---|---|---|
| `NEO4J_AUTH` | Neo4j authentication (`neo4j/password`) | — |
| `GRAPHITI_API_KEY` | API key for the Graphiti REST API | — |
| `LITELLM_API_KEY` | LiteLLM key for LLM operations | — |
| `VLM_OCR_ENABLED` | Enable VLM-based OCR for scanned PDFs | `true` |
| `VLM_OCR_MODEL` | Model tier for OCR | `g1-vlm` |
| `CLASSIFY_LLM_MODEL` | Model for document classification | `g1-llm-turbo` |
| `CONTEXTUAL_CHUNKING_ENABLED` | LLM-enriched chunk context | `true` |
| `CONTEXTUAL_LLM_MODEL` | Model for chunk enrichment | `g1-llm-micro` |
| `GRAPH_DB_PASSWORD` | Neo4j database password | — |
| `FALKORDB_PASSWORD` | FalkorDB auth password | — |
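Put together, a minimal `.env` might look like the following. All values are placeholders; only the variable names and the defaults listed in the table above come from this repo:

```env
NEO4J_AUTH=neo4j/change-me
GRAPHITI_API_KEY=replace-with-api-key
LITELLM_API_KEY=replace-with-litellm-key
VLM_OCR_ENABLED=true
VLM_OCR_MODEL=g1-vlm
CLASSIFY_LLM_MODEL=g1-llm-turbo
CONTEXTUAL_CHUNKING_ENABLED=true
CONTEXTUAL_LLM_MODEL=g1-llm-micro
GRAPH_DB_PASSWORD=change-me
FALKORDB_PASSWORD=change-me
```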
## 🏠 Multi-Tenant Isolation

| Service | Parameter | Default |
|---|---|---|
| graphiti-mcp | `group_id` | `main` |
| reasoning-tools | `group_id` | `main` |
| knowledge-mcp | `tenant_id` | `shared` |

Group IDs use dash-separated segments only (colons are rejected): `main`, `tenant-{id}`, `org-{org_id}-user-{user_id}`.
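The dash-only rule can be enforced with a small validator. This is a hypothetical sketch of the check (the function name, regex, and error message are assumptions, not the actual implementation in graphiti-mcp):

```python
import re

# Dash-separated lowercase alphanumeric segments only; colons and any
# other punctuation are rejected, matching the format rule above.
GROUP_ID_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def validate_group_id(group_id: str) -> str:
    """Return group_id unchanged if valid, else raise ValueError."""
    if not GROUP_ID_RE.fullmatch(group_id):
        raise ValueError(
            f"invalid group_id {group_id!r}: dashes only, colons rejected"
        )
    return group_id
```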
## 🚀 Quick Start

```shell
# Memory stack (managed by Coolify)
cd /data/coolify/services/z0ww84wkwwss8sw4kgsw88gc
docker compose up -d

# Knowledge stack (locally built images)
cd /data/coolify/services/kp3basi7wsdztrq1fgm7t543
docker build -t git.generate.one/generate-one/knowledge-worker:latest worker/
docker build -t git.generate.one/generate-one/knowledge-mcp:latest .
docker compose up -d

# Health check
curl https://memory.generate.one/health
```

Note: knowledge-mcp and knowledge-worker use locally built images with `pull_policy: never`. Always rebuild before `docker compose up -d`.
## 🔗 Dependencies
Depends on:
- g1-llm — Qdrant (vector storage), LiteLLM (inference for classification, enrichment, query rewriting)
Depended on by:
- g1-mcp — graphiti-mcp and reasoning-tools connect to memory/knowledge backends
- g1-gpt — LibreChat file search routes through knowledge-mcp RAG compatibility layer
## 🔗 Related Repos
| Repo | Relationship |
|---|---|
| g1-llm | Qdrant + LiteLLM for embeddings and inference |
| g1-mcp | graphiti-mcp, reasoning-tools hosted in mcp-stack |
| g1-gpt | LibreChat file search via knowledge-mcp |
| g1-core | Valkey for task queueing |
## 🛡️ Part of Generate One
Generate One — AI infrastructure that answers to you.
Self-hosted, sovereign AI platform. generate.one
Licensed under AGPL-3.0.