
brain-cms

Continuum Memory System (CMS) for OpenClaw agents.

2 downloads
harrey401
Updated Feb 23, 2026

Brain CMS 🧠

A neuroscience-inspired memory architecture for OpenClaw agents. Replaces flat file injection with sparse, semantic, frequency-gated memory loading.

What This Installs

memory/
├── INDEX.md          ← Hippocampus: topic router + cross-links
├── ANCHORS.md        ← Permanent high-significance event store
└── schemas/          ← Domain-specific semantic schemas (you create these)

memory_brain/
├── index_memory.py   ← Embeds schemas into LanceDB vector store
├── query_memory.py   ← Semantic similarity search
├── nrem.py           ← NREM sleep cycle (compression + anchor promotion)
├── rem.py            ← REM sleep cycle (LLM consolidation via Ollama)
└── vectorstore/      ← LanceDB database (auto-created)
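For orientation, an INDEX.md entry might look like the following. This is a hypothetical layout sketched from the description above (triggers + priority + cross-links); the format the installer actually writes is authoritative:

```markdown
## home-lab
- file: memory/home-lab.md
- triggers: proxmox, homelab, server
- priority: high
- cross-links: memory/networking.md
```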

Setup (one-time)

# 1. Run the installer
python3 ~/.openclaw/workspace/skills/brain-cms/install.py

# 2. Index your schemas
cd ~/.openclaw/workspace/memory_brain
.venv/bin/python3 index_memory.py

# 3. Test retrieval
.venv/bin/python3 query_memory.py "your topic here" --sources-only

How It Works

Boot sequence: Load MEMORY.md (lean core) + today's daily log. Nothing else.

When a topic appears: Read memory/INDEX.md → load only the relevant schemas (spreading activation). Check memory/ANCHORS.md for high-significance events.
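The trigger-based routing described above can be sketched in a few lines. The routing table below is a toy stand-in for INDEX.md (file names and triggers are invented for illustration; the real file format may differ):

```python
import re

# Hypothetical INDEX.md-style routing table: schema file -> trigger words.
# The actual INDEX.md installed by brain-cms may use a different layout.
INDEX = {
    "memory/home-lab.md": {"proxmox", "homelab", "server"},
    "memory/writing.md": {"blog", "draft", "article"},
}

def schemas_for(message: str) -> list[str]:
    """Return only the schema files whose triggers appear in the message."""
    words = set(re.findall(r"[a-z0-9-]+", message.lower()))
    return sorted(path for path, triggers in INDEX.items() if words & triggers)

print(schemas_for("Can you check the proxmox server logs?"))
```

Only matching schemas get loaded; a message with no triggers loads nothing beyond the lean core.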

For ambiguous topics: Run semantic search:

memory_brain/.venv/bin/python3 memory_brain/query_memory.py "message text" --sources-only
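The core operation behind that command is presumably nearest-neighbor ranking of embedded schema chunks. A minimal sketch with toy vectors (the real script uses Ollama embeddings and LanceDB; these 3-d tuples are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy 3-d "embeddings"; real vectors come from an embedding model.
store = {
    "memory/home-lab.md": (0.9, 0.1, 0.0),
    "memory/writing.md":  (0.0, 0.2, 0.9),
}
query = (0.8, 0.2, 0.1)

# Rank sources by similarity to the query embedding.
ranked = sorted(store, key=lambda name: cosine(store[name], query), reverse=True)
print(ranked[0])  # best-matching source
```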

Auto-schema creation: When a new significant project or domain appears:

  1. Create memory/<topic>.md
  2. Add to INDEX.md with triggers + priority + cross-links
  3. Re-index: memory_brain/.venv/bin/python3 memory_brain/index_memory.py

Sleep cycles:

# NREM — run on shutdown (~30s, no LLM)
cd ~/.openclaw/workspace/memory_brain && .venv/bin/python3 nrem.py

# REM — run weekly (2-5 min, uses local llama3.2:3b, free)
cd ~/.openclaw/workspace/memory_brain && .venv/bin/python3 rem.py
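One way to automate the weekly REM pass is a cron entry. This schedule is illustrative, not part of the installer:

```
# Illustrative crontab entry: REM consolidation every Sunday at 03:00.
# Adjust the path and schedule to your setup.
0 3 * * 0 cd ~/.openclaw/workspace/memory_brain && .venv/bin/python3 rem.py >> rem.log 2>&1
```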

Memory Layers (CMS)

| Layer | Files | When loaded | Purpose |
|---|---|---|---|
| Working | MEMORY.md + today's log | Every session | Core context |
| Episodic | memory/YYYY-MM-DD.md | Session boot | Recent events |
| Semantic | memory/*.md schemas | On trigger | Domain knowledge |
| Anchors | memory/ANCHORS.md | On CRITICAL topics | Permanent ground truth |
| Vector | memory_brain/vectorstore/ | On demand | Semantic search |

Tagging Anchors

In any daily log, tag high-significance events:

[ANCHOR] Major demo success — full pipeline working end-to-end

NREM auto-promotes these to ANCHORS.md on next shutdown.
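The promotion step can be sketched as follows. This is a simplified stand-in for what nrem.py does, not its actual code; the real script also handles compression:

```python
from pathlib import Path

def promote_anchors(daily_log: Path, anchors_file: Path) -> int:
    """Copy [ANCHOR]-tagged lines from a daily log into the permanent store.

    Skips lines already present, so repeated runs are idempotent.
    Returns the number of newly promoted anchors.
    """
    existing = anchors_file.read_text().splitlines() if anchors_file.exists() else []
    promoted = 0
    for line in daily_log.read_text().splitlines():
        if "[ANCHOR]" in line and line not in existing:
            existing.append(line)
            promoted += 1
    anchors_file.write_text("\n".join(existing) + "\n")
    return promoted
```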

Token Savings

Typical MEMORY.md: 150-300 lines injected every session. With Brain CMS: ~50-line core + schemas loaded only when relevant. Estimated savings: 40-60% reduction in context tokens per session.
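The estimate is back-of-envelope arithmetic. Assuming roughly 12 tokens per line of prose-heavy markdown (an assumption, not a measured value), a mid-range 225-line MEMORY.md versus a 50-line core plus one ~40-line schema works out like this:

```python
TOKENS_PER_LINE = 12  # rough assumption, not a measured figure

baseline = 225 * TOKENS_PER_LINE   # flat MEMORY.md injected every session
cms = (50 + 40) * TOKENS_PER_LINE  # lean core + one on-demand schema

savings = 1 - cms / baseline
print(f"{savings:.0%}")  # lands at the top of the claimed 40-60% range
```

Sessions that trigger several schemas save less; sessions that trigger none save more.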

Requirements

  • Python 3.10+
  • Ollama (for embeddings + REM consolidation)
  • 500MB+ storage for vector store and models
  • lancedb, numpy, pyarrow, requests (auto-installed)