Gerard Lynn
March 2026

Your AI Has Amnesia About Everything That Matters

I went through every AI memory system I could find. They all store facts. None of them track how you think.

Ask ChatGPT what it remembers about you. You get a list: prefers Python, lives in Brooklyn, works in publishing, likes dark mode. It's a customer profile with better grammar.

Claude keeps transparent notes organized by project. Gemini timestamps its memories and cites when you said things. These are real design choices. But underneath, they're all doing the same thing. Storing facts about you.

Here's the thing that bothered me enough to spend a year on it.

You're renovating your kitchen. You talk to a design agent about style in week one, a budget agent in week two, your partner gets involved in week three, and a shopping agent picks up in week four. By the end of it you've made a dozen decisions, left some things open, and settled some constraints that should bind everything downstream. The budget cap came from the plumbing assessment. The style compromise came from your partner's input. The layout is staying because moving it would blow the budget.

Any person who'd been in all those conversations would know this. They wouldn't suggest $50K countertops when the cap is $40K. They wouldn't show you pure minimalist options when the style shifted to transitional. They wouldn't reopen the layout question. They'd know what was decided, what was still open, and where the constraints came from.

No AI memory system can do this. Not because the conversations were lost. They're sitting right there in the history. The problem is that every system treats memory as a pile of things you said rather than a map of how you got to where you are.


What I found

I went through over 30 commercial products, 15 academic papers, 10 open-source projects, and the patent landscape. It took months. The same pattern kept showing up: every system asks "what does the user know?" Nobody asks "how does the user think?"

Ben Goertzel put it well in his piece on OpenClaw when he pointed out that LLM agents "can't say 'we tried this approach three times and it failed for the same reason each time, so let's try something structurally different.'" He's right. But the problem goes deeper than that. It's not just that agents can't reason across sessions. It's that no memory system gives them the structure to do so even if they could.

The open-source ecosystem has converged on a recognizable stack: vector embeddings plus a knowledge graph plus temporal awareness. Mem0, which has 41K stars and raised $24M, overwrites old facts when your view changes. That's it. The old view is gone. Letta/MemGPT lets agents edit their own memory, but the structure never evolves. Zep tracks when facts changed, but tracking that a budget went from $500 to $750 is a completely different problem from tracking that someone went from being a microservices advocate to appreciating monoliths.

In the academic literature, the pieces exist. Nobody has put them together.

Hindsight (Dec 2025)
Four separate memory networks, including an opinion network. 83.6% on LongMemEval from a 39% baseline. About 40-45% overlap with what I ended up building.
Stores current opinions. Doesn't track how they got there.

SYNAPSE (Jan 2026)
Real spreading activation for LLM memory. +23% on multi-hop reasoning, 95% fewer tokens. The most neuroscience-aligned retrieval in the literature.
Pure retrieval. No user modeling, no belief tracking.

SEEM (2025)
Showed that dual-layer memory captures genuinely different information. Only 0.46 cosine similarity between the two representations. One embedding space can't do both jobs.
No activation, no decision tracking.

A-MEM (NeurIPS 2025)
Zettelkasten-style self-organizing memory. New memories trigger updates to existing ones. Best self-improving organization out there.
Single embedding space. No arc awareness.

DToM-Track (Mar 2026)
Tested whether LLMs can track belief trajectories on their own. 27.7% accuracy on recalling what someone believed before an update. They basically can't.
Not an architecture. But it proves the problem exists.

To get the full picture you'd have to take SYNAPSE's retrieval, Hindsight's opinion tracking, SEEM's dual layers, and A-MEM's self-organization and merge them. Nobody has tried that. So I did.


SynapticCore

SynapticCore is a cognitive memory architecture that runs as an MCP server. It doesn't store facts. It tracks three things.

Decisions with confidence levels that evolve. Tentative becomes held becomes committed. The system keeps the whole chain, not just where you ended up.

Tradeoffs between competing priorities. These carry engagement history. How many times they've come up, when, what context. Here's the thing I kept running into in the literature that nobody seems to get: a tradeoff that keeps recurring is not a problem to fix. It's a structural feature of how someone thinks. You keep going back and forth between modern minimalist and farmhouse? Between budget and quality? That tension showing up in conversations 1, 3, and 5 is information, not inconsistency. The system flags these. It doesn't try to resolve them.

Constraints are decisions that stuck and now bind everything downstream. They carry records of whether they held under pressure.
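To make the three types concrete, here's a minimal sketch of how they might be modeled. Every name in it (Decision, Tradeoff, Constraint, the Confidence levels, the field names) is my own illustration, not SynapticCore's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Confidence(Enum):
    TENTATIVE = 1
    HELD = 2
    COMMITTED = 3

@dataclass
class Decision:
    topic: str
    position: str
    confidence: Confidence
    # The whole chain of earlier (position, confidence) states is kept,
    # not just where the user ended up.
    history: list = field(default_factory=list)

    def update(self, position: str, confidence: Confidence) -> None:
        self.history.append((self.position, self.confidence))
        self.position, self.confidence = position, confidence

@dataclass
class Tradeoff:
    poles: tuple  # the two competing priorities
    # One (session, context) entry per time the tension resurfaces.
    engagements: list = field(default_factory=list)

    @property
    def recurring(self) -> bool:
        # A tension that shows up in 3+ sessions is structural, not noise.
        return len({session for session, _ in self.engagements}) >= 3

@dataclass
class Constraint:
    rule: str
    source: str  # which decision established it
    # Record of whether it held each time it was challenged.
    held_under_pressure: list = field(default_factory=list)
```

The point of the sketch is the asymmetry: a Decision overwrites its current state but appends to its history, while a Tradeoff only ever accumulates engagements. Nothing here resolves the tension; it just counts it.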

These connect through typed edges into a decision landscape. Retrieval works through spreading activation across this structure. Not just "what's semantically close to the query" but "what's structurally connected in how this person actually navigates decisions."

Query
  │
  ▼
Hybrid Search (semantic + categorical + recency + associative)
  │
  ▼  depth="deep"
Spreading Activation
  ├── Seeds from results across all memory types
  ├── Propagates through typed edges (decision↔tradeoff↔constraint)
  ├── Lateral inhibition for focus
  └── Returns activation paths showing how it connected query to result
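A toy version of that retrieval step, assuming a simple adjacency-list graph. The decay rate, step count, and inhibition threshold are placeholders, not SynapticCore's actual parameters:

```python
from collections import defaultdict

def spread_activation(edges, seeds, decay=0.5, steps=2, inhibition=0.1):
    """Toy spreading activation over a typed-edge graph.

    edges: {node: [(neighbor, edge_type), ...]}
    seeds: {node: initial_activation} from the hybrid-search hits
    Returns (activation levels, a path explaining each activated node).
    """
    activation = dict(seeds)
    paths = {n: [n] for n in seeds}
    for _ in range(steps):
        incoming = defaultdict(float)
        for node, level in activation.items():
            for neighbor, edge_type in edges.get(node, []):
                incoming[neighbor] += level * decay
                if neighbor not in paths:
                    # Record how we got here, through which typed edge.
                    paths[neighbor] = paths[node] + [f"-{edge_type}->", neighbor]
        for node, extra in incoming.items():
            activation[node] = activation.get(node, 0.0) + extra
        # Lateral inhibition: drop weakly activated nodes to keep focus.
        activation = {n: a for n, a in activation.items() if a >= inhibition}
    return activation, {n: paths[n] for n in activation}
```

Seeding with a budget constraint would pull in the layout decision that established it, and the returned path shows the typed edge it traveled, which is what distinguishes this from plain nearest-neighbor retrieval.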

There's also decision narrative tracking that classifies how thinking changes over time. Reinforcement (same direction, more conviction), shift (reframed but related), reversal (changed your mind). Trajectories get labeled as converging, oscillating, or deepening.
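A rough heuristic sketch of that classification. The thresholds and field names are my own, not the real implementation:

```python
def classify_transition(prev, new):
    """Classify how a decision changed between two recorded states.

    prev/new: dicts with 'position' and 'confidence' (1=tentative,
    2=held, 3=committed); 'opposes' marks an outright flip.
    """
    if new["position"] == prev["position"]:
        # Same direction: more conviction is reinforcement.
        return "reinforcement" if new["confidence"] > prev["confidence"] else "hold"
    if new.get("opposes", False):
        return "reversal"      # changed your mind
    return "shift"             # reframed but related

def label_trajectory(transitions):
    """Label a whole decision arc from its sequence of transitions."""
    if transitions.count("reversal") >= 2:
        return "oscillating"
    if transitions and transitions[-1] == "reinforcement":
        return "converging"
    return "deepening"
```

The kitchen example's minimalist-to-transitional move would come out as a shift rather than a reversal here, because transitional doesn't oppose minimalist, it reframes it.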


What this looks like

Back to the kitchen. Week 1, the design agent stores a tentative decision (leaning minimalist) and an active tradeoff (minimalist vs. farmhouse). Week 2, the budget agent reads SynapticCore. It knows the style question is still open. It stores a committed constraint: $40K cap, set by the plumbing assessment. Week 3, your partner weighs in. Style resolves to transitional as a compromise. The system classifies this as a shift from the week 1 decision, not a reversal, because the underlying tradeoff was still active when the compromise landed.

Week 4, the shopping agent queries with depth="deep" and gets back the budget constraint, the layout decision that established it, the resolved style tradeoff, and the partner input that shifted it. All connected through activation paths. It doesn't suggest $50K countertops. Doesn't show minimalist options. Doesn't ask about the layout. It inherited cognitive state, not task state.

This is the agent handoff problem nobody has a good answer for right now. Agent A builds up understanding. Agent B takes over. Currently Agent B gets either a transcript dump or a summary. Too much noise or too little structure. Neither preserves what actually matters: what's decided, what's still open, where the constraints came from.
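What a handoff snapshot along these lines could look like. The shape and field names are my own illustration of "cognitive state, not task state," not SynapticCore's wire format:

```python
# Hypothetical handoff payload for the kitchen scenario: what's decided,
# what's still open, and where each constraint came from.
handoff = {
    "decided": [
        {"topic": "style", "position": "transitional",
         "confidence": "held", "arc": "shift"},
        {"topic": "layout", "position": "keep existing",
         "confidence": "committed", "arc": "reinforcement"},
    ],
    "open": [
        {"tradeoff": ("budget", "quality"), "recurrences": 3},
    ],
    "constraints": [
        {"rule": "total <= $40K", "source": "plumbing assessment",
         "binds": ["countertops", "appliances", "layout"]},
    ],
}
```

A transcript dump buries all of this in noise; a summary flattens it into prose an agent can't act on. This shape is small enough to hand over whole and structured enough that the receiving agent knows which questions not to reopen.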


The gap

Across everything I looked at, no system can take 50 conversations with someone and tell you how their position on a topic shifted, what tensions keep coming back, or what's next to their current thinking that they haven't looked at yet.

Every memory system treats contradiction as an error to fix by overwriting. Nobody treats it as information.

I think the reason is not technical. It's a framing problem. The whole field is stuck in the assistant paradigm: remember that the user likes oat milk. The jump to tracking how someone's thinking evolves over time requires treating the user as a person whose commitments are in motion, not a profile to keep updated.

SynapticCore is open-source under Apache 2.0: six MCP tools, dual embeddings, spreading activation, typed memory objects, decision narrative tracking. It's been running in production. The code and the competitive analysis are on GitHub.