plur
Shared memory for AI agents
PLUR: your agents share the same memory
Persistent memory for AI agents. Local-first, zero-cost, works across MCP tools.
plur.ai · Benchmark · Engram Spec · npm
The idea
You correct your agent's coding style on Monday. On Tuesday, it makes the same mistake. You explain your architecture in Cursor. That night, Claude Code has no idea.
PLUR fixes this. Install it once, and corrections, preferences, and conventions persist across sessions, tools, and machines. Your memory is stored as plain YAML on your disk. No cloud, no API calls, no black box.
The interesting part: Haiku with PLUR memory outperforms Opus without it, scoring 2.6x better on tool routing at roughly a tenth of the cost. Turns out the bottleneck isn't model intelligence. It's context.
Install
Tell your agent
Go to plur.ai and tell your agent to install memory for your tool: Claude Code, Cursor, Windsurf, or OpenClaw. The site has the right config for your setup.
Manual setup (Claude Code)
One command sets up everything: storage, MCP config, and Claude Code hooks:
```shell
npx @plur-ai/mcp init
```
This creates ~/.plur/ for storage, adds PLUR to your .mcp.json, and installs Claude Code hooks for automatic engram injection. PLUR is installed globally: one MCP server, one store, available in every project. You only run init once.
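The resulting .mcp.json entry looks roughly like this. This is a sketch; the exact server name, command, and args that init writes may differ:

```json
{
  "mcpServers": {
    "plur": {
      "command": "npx",
      "args": ["@plur-ai/mcp"]
    }
  }
}
```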
For multi-project setups, use domain/scope to separate knowledge:
```shell
cd ~/projects/my-app
npx @plur-ai/cli init --domain myapp --scope project:my-app
```
This creates a .plur.yaml in the project with defaults that hooks apply automatically. Engrams learned in that project are tagged; recall filters by scope but always includes global knowledge.
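A hypothetical sketch of the generated file. The field names here are assumptions; check what init actually writes:

```yaml
# .plur.yaml (hypothetical sketch; the real keys may differ)
domain: myapp
scope: project:my-app
```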
Global install (faster startup)
```shell
npm install -g @plur-ai/mcp
plur-mcp init
```
OpenClaw
```shell
openclaw plugins install @plur-ai/claw
openclaw config set plur.enabled true
```
That's it. PLUR works in the background from here. No workflow changes needed: just use your tools as usual. Corrections accumulate automatically.
Hermes Agent
```shell
pip install plur-hermes
```
The plugin registers automatically via Hermes' plugin system. It injects relevant memories before each LLM call, extracts learnings from agent responses, and exposes all PLUR tools to the agent. Requires the PLUR CLI (npm install -g @plur-ai/cli).
Verify it works
Ask your agent: "What's my PLUR status?" It should call plur_status and return your engram count and storage path.
How it works
PLUR has two storage primitives:
Engrams: learned knowledge that persists across sessions. Each engram is a typed assertion ("always use blue-green deploys", "never force-push to main") with:
- Activation: retrieval strength that decays over time (ACT-R model) and strengthens on access. Stale facts naturally fade from injection without manual cleanup.
- Feedback signals: positive/negative ratings that train injection quality over time
- Scope: a hierarchical namespace (global, project:myapp, cluster:prod, service:api) controlling where the engram applies
- Polarity: automatic classification of "do" vs "don't" rules, so constraints are injected separately from directives
- Associations: links to other engrams, including co-access edges that form automatically when engrams are recalled together
Episodes: timestamped event records for "what happened when." Each episode captures a summary, timestamp, agent attribution, and channel. Use episodes for incident timelines, session logs, and operational history. Query by time range, agent, or channel.
You correct your agent → engram created → YAML on your disk
Agent fixes an incident → episode captured → timeline searchable
Next session starts → relevant engrams injected → agent remembers
You rate the result → engram strengthens or decays → quality improves
Unused engrams → activation decays → they naturally fade from injection
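The decay step above follows the shape of ACT-R's base-level learning equation, B = ln(Σ_j t_j^(-d)): each past access contributes retrieval strength that fades as it ages. A simplified illustration, not the exact code in PLUR's decay.ts:

```typescript
// Simplified ACT-R base-level activation: B = ln(sum over accesses of t_j^-d),
// where t_j is the time since the j-th access and d is the decay rate
// (0.5 is the classic ACT-R default). Recent or frequent access raises
// activation; disuse lowers it, so stale engrams fall out of injection.
function activation(accessTimesMs: number[], nowMs: number, d = 0.5): number {
  const sum = accessTimesMs.reduce((acc, t) => {
    const ageSec = Math.max((nowMs - t) / 1000, 1); // clamp so age^-d can't blow up near 0
    return acc + Math.pow(ageSec, -d);
  }, 0);
  return Math.log(sum);
}

// An engram accessed a minute ago outranks one last touched a day ago.
const now = Date.now();
const recent = activation([now - 60_000], now);    // accessed 1 minute ago
const stale = activation([now - 86_400_000], now); // accessed 1 day ago
console.log(recent > stale); // true
```

Because the formula sums over all accesses, repeated use also lifts activation: a second access always adds a positive term.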
Search is fully local: BM25 (with IDF weighting, TF saturation, length normalization) + BGE embeddings + Reciprocal Rank Fusion. Zero API calls. 86.7% on LongMemEval, on par with cloud-based solutions that charge per query.
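Reciprocal Rank Fusion is how the keyword and embedding rankings are merged: each list contributes 1/(k + rank) per item, so results ranked well by both signals rise to the top. A minimal sketch (k = 60 is the constant from the original RRF paper; PLUR's actual constant is an assumption here):

```typescript
// Reciprocal Rank Fusion: fuse multiple ranked lists of engram ids.
// Each list adds 1 / (k + rank) to an item's score; items present and
// well-ranked in several lists accumulate the highest totals.
function rrf(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([id]) => id);
}

const bm25 = ['e1', 'e2', 'e3'];  // keyword ranking
const embed = ['e2', 'e1', 'e4']; // semantic ranking
console.log(rrf([bm25, embed]));  // e1 and e2 lead; e3 and e4 trail
```

The appeal of RRF is that it needs no score calibration: it fuses on ranks alone, so BM25 scores and cosine similarities never have to share a scale.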
Plugins (OpenClaw, Hermes) automatically capture learnings from agent conversations; no manual saving needed. The agent's corrections become engrams without you doing anything.
See the full engram spec for schema details, activation model, and injection algorithm.
Usage
```typescript
import { Plur } from '@plur-ai/core'

const plur = new Plur()

// Learn from a correction
plur.learn('toEqual() in Vitest is strict; use toMatchObject() for partial matching', {
  type: 'correction',
  scope: 'project:my-app',
  domain: 'dev/testing'
})

// Recall (hybrid: BM25 + embeddings, zero cost)
const results = await plur.recallHybrid('vitest assertion matching')

// Inject relevant engrams into agent context
const { engrams } = plur.inject('Write tests for the user service', {
  scope: 'project:my-app',
  limit: 15
})

// Feedback trains the system
plur.feedback(results[0].id, 'positive')

// Capture an event (episode)
plur.capture('Fixed CrashLoopBackOff on bee-3-4 by increasing memory limits', {
  agent: 'claude-code',
  channel: 'terminal'
})

// Query timeline
const incidents = plur.timeline({ agent: 'claude-code' })

// Sync across machines
plur.sync('git@github.com:you/plur-memory.git')
```
MCP tools
| Tool | What it does |
|---|---|
| plur_learn | Store a correction, preference, or convention |
| plur_recall_hybrid | Retrieve relevant memories (BM25 + embeddings) |
| plur_inject_hybrid | Select engrams for the current task within a token budget |
| plur_feedback | Rate relevance (trains quality over time) |
| plur_forget | Retire a memory (activation decays, eventually pruned) |
| plur_capture | Record an event: incident, resolution, session milestone |
| plur_timeline | Query episode history by time, agent, or channel |
| plur_ingest | Extract engrams from text automatically |
| plur_sync | Sync across devices via git |
| plur_status | Check system health and engram counts |
Benchmark
We ran head-to-head contests across three Claude models (Haiku, Sonnet, Opus): same task, same prompt, one agent with PLUR, one without. Removing ties left 35 decisive contests.
| Knowledge type | Record | What it tests |
|---|---|---|
| House rules | 12–0 | Tag conventions, file routing, project structure |
| Tool routing | 10–2 | Finding the right tool among 100+ options |
| Past experience | 4–0 | API quirks, debugging insights, infrastructure |
| Learned style | 5–2 | Communication tone, design preferences |
31 wins, 4 losses (89% win rate). Without memory, agents got house rules right 10–38% of the time depending on model; with PLUR, 12–0 across every model. Memory isn't a reasoning crutch. It's information the model literally cannot infer.
The cost insight was unexpected: Haiku + PLUR scored 0.80 on discoverability. Opus alone scored 0.31. A $0.25/MTok model with memory beat a $15/MTok model without it.
What PLUR is (and isn't)
PLUR is agent memory β it stores corrections, preferences, conventions, and architectural decisions that an AI agent learns during work sessions, and injects them back when they're relevant.
PLUR is not a general-purpose search engine, a codebase indexer, or a replacement for code intelligence tools. It doesn't parse ASTs, navigate class hierarchies, or search your source files. If you need code-aware search (tree-sitter, language server features, symbol lookup), tools like claude-mem or your IDE's built-in search are the right choice.
The two are complementary:
| | PLUR | Code intelligence tools |
|---|---|---|
| Stores | Learned knowledge (engrams) + event timeline (episodes) | Code structure, symbols, definitions |
| Search | Engram recall (BM25 + embeddings over memory) | AST traversal, symbol lookup, semantic code search |
| Learns | From agent corrections, feedback, usage patterns | From static analysis of source code |
| Captures | Auto-extracts learnings from conversations (via plugins) | N/A |
| Decays | Yes: unused memories fade (ACT-R model) | No: code index reflects current state |
| Timeline | Episodes track what happened when (incidents, fixes, decisions) | Git log only |
| Cross-tool | Any MCP client (Claude Code, Cursor, Windsurf, OpenClaw, Hermes) | Typically tied to one tool |
While search is a core part of PLUR (finding the right engram to inject), the search targets are always engrams β not files, not code, not documents. PLUR's hybrid search (BM25 + embeddings + RRF) is optimized for short natural-language assertions, not source code.
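Why BM25 suits short assertions comes down to its term-scoring formula: IDF rewards rare terms, TF saturation caps the benefit of repetition, and length normalization keeps longer entries from dominating. A minimal sketch (k1 = 1.2 and b = 0.75 are conventional BM25 defaults, not necessarily PLUR's):

```typescript
// BM25 score of one query term against one engram:
//   idf(term) * tf * (k1 + 1) / (tf + k1 * (1 - b + b * docLen / avgDocLen))
// k1 controls TF saturation (repeating a term has diminishing returns);
// b controls how strongly longer texts are penalized.
function bm25Term(
  tf: number,        // term frequency in this engram's text
  df: number,        // number of engrams containing the term
  nDocs: number,     // total engrams in the store
  docLen: number,    // length of this engram (tokens)
  avgDocLen: number, // mean engram length (tokens)
  k1 = 1.2,
  b = 0.75
): number {
  const idf = Math.log(1 + (nDocs - df + 0.5) / (df + 0.5)); // rare terms weigh more
  const norm = 1 - b + b * (docLen / avgDocLen);
  return idf * (tf * (k1 + 1)) / (tf + k1 * norm);
}

// TF saturation in action: four occurrences score less than 4x one occurrence.
const one = bm25Term(1, 10, 1000, 8, 10);
const four = bm25Term(4, 10, 1000, 8, 10);
console.log(four < 4 * one); // true
```

On one-sentence assertions, tf is almost always 1 and docLen barely varies, so ranking is driven mostly by IDF, which is exactly the behavior you want when matching short natural-language rules.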
Packages
| Package | Description |
|---|---|
| @plur-ai/core | Engram engine: learn, recall, inject, search, decay |
| @plur-ai/mcp | MCP server for Claude Code, Cursor, Windsurf |
| @plur-ai/claw | OpenClaw ContextEngine plugin |
| plur-hermes | Hermes Agent plugin (Python, via CLI bridge) |
Architecture
```
@plur-ai/core
├── engrams.ts          Engram CRUD + YAML persistence
├── episodes.ts         Episode capture + timeline queries
├── fts.ts              BM25 with IDF, TF saturation (k1/b), length normalization
├── embeddings.ts       BGE-small-en-v1.5, 384-dim, local ONNX
├── hybrid-search.ts    Reciprocal Rank Fusion
├── inject.ts           Context-aware selection + spreading activation
├── decay.ts            ACT-R activation decay
├── secrets.ts          Secret detection (API keys, passwords, tokens)
├── sync.ts             Git-based sync + file locking (O_EXCL)
├── storage.ts          Path detection + YAML I/O
└── storage-indexed.ts  Optional SQLite read index

@plur-ai/mcp    Wraps core as MCP tools
@plur-ai/claw   OpenClaw ContextEngine hooks (assemble/compact/afterTurn)
plur-hermes     Python plugin for Hermes Agent (CLI subprocess bridge)
```
Storage
Everything is plain YAML. Open it, read it, edit it.
```
~/.plur/
├── engrams.yaml   # learned knowledge (source of truth)
├── episodes.yaml  # session timeline
├── config.yaml    # settings
└── engrams.db     # optional SQLite read index (auto-generated)
```
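For illustration, an entry in engrams.yaml might look like this. The field names below are hypothetical; the real schema is defined by the engram spec:

```yaml
# Hypothetical engram entry: illustrative field names, not the actual schema
- text: "never force-push to main"
  type: correction
  scope: global
  polarity: dont
  activation: 1.8
```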
PLUR_PATH overrides the default location.
For large stores (>1k engrams), enable the SQLite read index for faster filtered queries by adding index: true to config.yaml. The YAML file stays the source of truth; the .db is a cache that rebuilds automatically. Delete it anytime.
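Enabling the index is a one-line config change:

```yaml
# ~/.plur/config.yaml
index: true
```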
Requirements
- Node.js 18+
- 2GB RAM minimum: the embedding model (ONNX runtime) needs ~1GB for installation. On servers with less RAM, embeddings are skipped and search falls back to BM25 keyword matching.
Development
```shell
git clone https://github.com/plur-ai/plur.git
cd plur
pnpm install && pnpm build && pnpm test
```
~340 tests across 27 files. Use pnpm test:watch during development.
Contributing
- Bug reports: open an issue with reproduction steps
- Feature requests: open an issue describing the use case
- Code: fork, branch, PR. Tests required.
- Integrations: build PLUR support for other tools
Before submitting: pnpm test passes, pnpm build succeeds, no new external deps in core without discussion.
Conventions: TypeScript, Zod validation, Vitest, no external APIs in core, YAML storage, zero-cost search by default.
License
Apache-2.0