Built on the Model Context Protocol.
Every session, your agents start from zero. HeurChain gives them structured memory that survives across sessions, models, and machines — with no prompt engineering required.
HeurChain is memory infrastructure — not a wrapper. Every number below is a design target, not a marketing claim.
Memory stored yesterday behaves differently from memory stored six months ago — enforced structurally, not by policy.
Current session data, active task state, real-time debugging trails. Fades fast by design — noise from yesterday shouldn't pollute today's focus.
Cross-session knowledge, summaries, learned facts. Standard ACT-R decay rate — information persists in proportion to how often it's accessed.
Persona definitions, behavioral constraints, long-term preferences. Near-permanent — decays at one-tenth the baseline rate. Core identity should outlast the session.
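The three tiers above can be sketched with the standard ACT-R base-level activation formula, where a memory's activation is the log of summed, decayed access recencies. Only two decay rates are stated in the text (the standard rate, and one-tenth of it for persona memories); the session-tier exponent below is an illustrative assumption, as is the whole function — the source doesn't publish HeurChain's actual scoring code.

```python
import math

# Per-tier decay exponents. "knowledge" uses the standard ACT-R rate
# (d = 0.5) and "persona" one-tenth of it, as the text states; the
# "session" value is an assumption chosen to illustrate fast fading.
DECAY = {
    "session": 1.0,    # assumed: fades fast by design
    "knowledge": 0.5,  # standard ACT-R decay rate
    "persona": 0.05,   # one-tenth the baseline rate
}

def activation(access_times: list[float], tier: str, now: float) -> float:
    """ACT-R base-level activation: ln(sum over past accesses of age^-d),
    where age is seconds since each access and d is the tier's decay rate."""
    d = DECAY[tier]
    return math.log(sum((now - t) ** -d for t in access_times))
```

Under this model a persona memory last touched fifteen minutes ago retains far more activation than a knowledge memory of the same age, which is the structural enforcement the tiers describe.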
Every write is indexed. Every read is fused. Every session starts with context.
```python
from heurchain import HeurChain

hc = HeurChain(
    url="http://localhost:3010",
    token="your-token",
)

# Store a memory
hc.add(
    "User prefers dark mode and speaks Spanish",
    user_id="user_123",
)

# Search memories
results = hc.search(
    "display preferences",
    user_id="user_123",
)

# Get proactive context at session start
context = hc.context(user_id="user_123")
```
```typescript
import { HeurChain } from "heurchain"

const hc = new HeurChain({
  url: "http://localhost:3010",
  token: "your-token",
})

// Store a memory
await hc.add("User prefers dark mode and speaks Spanish", {
  userId: "user_123",
})

// Search memories
const results = await hc.search("display preferences", {
  userId: "user_123",
})

// Get proactive context at session start
const context = await hc.context({ userId: "user_123" })
```
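The text says every read is "fused" but doesn't name the fusion method. One common way to merge a vector-similarity ranking with a keyword ranking is reciprocal rank fusion (RRF); the sketch below is an assumption for illustration, not HeurChain's actual implementation.

```python
from collections import defaultdict

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: each result earns 1/(k + rank) from every
    ranking it appears in; k = 60 is the conventional smoothing constant.
    Items ranked well by multiple retrievers rise to the top."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A memory ranked first by both the semantic and the lexical retriever beats one ranked first by only a single retriever, which is why fused reads tolerate noise in either index.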
```yaml
# docker-compose.yml
services:
  heurchain:
    image: ghcr.io/peterjohannmedina/heurchain:latest
    ports:
      - "3010:3010"
    environment:
      - REDIS_URL=redis://redis:6379
      - QDRANT_URL=http://qdrant:6333
      - EMBED_URL=http://embedding:8080
      - BEARER_TOKEN=your-token
    depends_on:
      - redis
      - qdrant
      - embedding

  embedding:
    image: ghcr.io/peterjohannmedina/heurchain-embed:latest
    # BAAI/bge-m3 — GPU optional, CPU fallback included

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

  qdrant:
    image: qdrant/qdrant:latest
    volumes:
      - qdrant_data:/qdrant/storage

volumes:
  redis_data:
  qdrant_data:
```
One flat monthly rate per working group. Each group gets 10 million tokens of memory storage included. Most teams never come close.
MIT licensed. Run anywhere Docker runs. No account required.
Per working group. 10M tokens included. Add groups at $9.99/mo each.
Dedicated infrastructure. Negotiated SLA. No shared tenancy.
Overage on Workgroup: $1.50 per million tokens above quota. Token counting uses cl100k_base. Search queries are not counted toward quota.
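The billing rules above reduce to simple arithmetic. The sketch below assumes the $9.99/mo group rate applies as the base charge; only stored tokens (counted with cl100k_base) accrue against the quota, and searches are free.

```python
def monthly_cost(
    stored_tokens: int,
    base_rate: float = 9.99,        # assumed per-group monthly rate
    quota: int = 10_000_000,        # 10M tokens included per group
    overage_per_million: float = 1.50,
) -> float:
    """Workgroup bill: flat rate, plus $1.50 per million tokens stored
    above the 10M quota. Search queries are not counted."""
    over = max(0, stored_tokens - quota)
    return base_rate + (over / 1_000_000) * overage_per_million
```

A group storing 12M tokens would owe the flat rate plus $3.00 of overage; a group under quota pays only the flat rate.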