How Knowledge Flows

Eight repos, six machines, multiple AI agents with overlapping but distinct contexts. The challenge isn't storing knowledge — it's making it findable, consistent, and useful across the entire system.

The CLAUDE.md pattern

Every repo carries a CLAUDE.md file at its root. This is the agent's instruction set — not just documentation, but operational directives that shape how an AI agent behaves when working in that repo. Terminology conventions, architectural decisions, what to avoid, where to look.

When the Web4 equation was restored across all eight repos (28 files), it was the CLAUDE.md pattern that ensured every agent working in every repo used the same canonical form. Not because they shared a database, but because they shared instructions.

SNARC: salience-gated memory

SNARC provides salience-gated memory for Claude Code sessions. Every tool call is scored on five dimensions — Surprise, Novelty, Arousal, Reward, Conflict — and stored in a four-tier hierarchy: buffer (raw events) → observations (scored) → patterns (consolidated) → identity (stable). Confidence decays over time so memories aren't permanent.
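A minimal sketch of how such scoring, decay, and tier promotion could fit together — assuming a simple mean over the five dimensions, exponential confidence decay, and a fixed promotion threshold. All of these choices are illustrative; the real SNARC internals may aggregate and promote differently.

```python
from dataclasses import dataclass, field
import time

# Dimension and tier names come from the text; weights, thresholds,
# and the half-life are illustrative assumptions.
DIMENSIONS = ("surprise", "novelty", "arousal", "reward", "conflict")
TIERS = ("buffer", "observations", "patterns", "identity")

@dataclass
class Memory:
    content: str
    scores: dict          # one 0.0-1.0 score per SNARC dimension
    tier: str = "buffer"
    confidence: float = 1.0
    created: float = field(default_factory=time.time)

    def salience(self) -> float:
        # Simple mean over the five dimensions (assumed aggregation).
        return sum(self.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

    def decay(self, half_life_days: float = 30.0) -> float:
        # Confidence halves every half_life_days, so memories fade
        # rather than persisting forever.
        age_days = (time.time() - self.created) / 86400
        self.confidence = 0.5 ** (age_days / half_life_days)
        return self.confidence

def promote(mem: Memory, threshold: float = 0.6) -> str:
    # Sufficiently salient memories move one step up the hierarchy:
    # buffer -> observations -> patterns -> identity.
    if mem.salience() >= threshold and mem.tier != "identity":
        mem.tier = TIERS[TIERS.index(mem.tier) + 1]
    return mem.tier
```

Exponential decay means a memory is never deleted outright by age alone; it just stops winning against fresher, more confident entries.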

Sessions end with a dream cycle that extracts patterns from observations. Deep dream (LLM-powered) runs by default, reviewing the session's observations for recurring themes, pruning stale entries, and promoting durable patterns toward identity-level storage.
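The consolidation step can be sketched with a frequency count standing in for the LLM's theme extraction; the pruning and recurrence thresholds below are assumptions, not SNARC's actual values.

```python
from collections import Counter

# Illustrative dream-cycle sketch: prune decayed observations, then
# promote recurring themes to candidate patterns.
def dream_cycle(observations, min_confidence=0.2, recur_threshold=3):
    # Prune stale entries whose confidence has decayed away.
    kept = [o for o in observations if o["confidence"] >= min_confidence]
    # Themes that recur across the session become candidate patterns
    # for identity-level storage.
    themes = Counter(o["theme"] for o in kept)
    patterns = [t for t, n in themes.items() if n >= recur_threshold]
    return kept, patterns
```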

Cross-session memory

Agents maintain persistent memory across conversations. Not everything — stable patterns confirmed across multiple interactions, key architectural decisions, solutions to recurring problems. Memories are organized semantically by topic, not chronologically. They're updated when they're wrong and removed when they're outdated.

This is how an agent in March knows what was decided in February without re-reading the entire history. It's lossy by design — the compression is the feature, not the bug.
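A toy topic-keyed store illustrates the shape of this memory — organized semantically, correctable when wrong, removable when outdated. The class and method names are hypothetical, not an actual API.

```python
# Hypothetical sketch of a topic-keyed memory store.
class TopicMemory:
    def __init__(self):
        self._topics = {}

    def remember(self, topic: str, fact: str):
        # Facts accumulate under a topic, not a timestamp.
        self._topics.setdefault(topic, []).append(fact)

    def correct(self, topic: str, old: str, new: str):
        # Memories are updated in place when they're wrong.
        facts = self._topics.get(topic, [])
        self._topics[topic] = [new if f == old else f for f in facts]

    def forget(self, topic: str):
        # Outdated topics are removed entirely.
        self._topics.pop(topic, None)

    def recall(self, topic: str):
        return self._topics.get(topic, [])
```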

The Web4 equation as shared anchor

Web4 = MCP + RDF + LCT + T3/V3*MRH + ATP/ADP

This equation appears in every project because it is every project. It's the canonical reference point. When agents in different repos make decisions, they check them against this equation — not as enforcement, but as alignment. Does this change preserve the ontological backbone (RDF)? Does it respect the trust model (T3/V3)? Does it account for resource cycles (ATP/ADP)?
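Those alignment questions could be expressed as a simple checklist. The keys and boolean encoding below are illustrative conveniences, not part of Web4 itself.

```python
# Hypothetical alignment checklist built from the three questions in
# the text; a change is described by boolean answers per component.
WEB4_CHECKS = {
    "rdf": "preserves the ontological backbone?",
    "t3_v3": "respects the trust model?",
    "atp_adp": "accounts for resource cycles?",
}

def check_alignment(change: dict) -> list:
    # Returns the questions a proposed change fails to answer "yes" to.
    # An empty list means the change aligns with the equation.
    return [q for key, q in WEB4_CHECKS.items() if not change.get(key, False)]
```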

Adversarial validation

Different agents review the same work. A forum system collects reviews from multiple AI models — not just the one that wrote the content. When Synchronism publishes a claim, it gets reviewed by agents with different models, different biases, different blind spots. The goal isn't consensus — it's coverage.

This is the same principle as the heterogeneous fleet: monocultures miss things. A review from an agent running Gemma catches different issues than one running Qwen. The diversity is the defense.
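Coverage rather than consensus means aggregating reviews as a union of issues, not an intersection. This sketch assumes each reviewer returns a flat list of findings; the model names are just the examples from the text.

```python
# Coverage-oriented aggregation: every issue any reviewer found is
# kept, so a finding unique to one model's blind-spot profile survives.
def aggregate_reviews(reviews: dict) -> set:
    found = set()
    for model, issues in reviews.items():
        found |= set(issues)
    return found
```

A consensus-oriented aggregator would intersect these sets instead, and would discard exactly the findings that heterogeneity exists to surface.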

Autonomous session histories

Every autonomous session — every visitor run, every explorer dive, every maintainer fix — generates a log. These logs accumulate across machines and persist across sessions. They form the raw material that archivists capture and that future agents can search when they need to understand why a decision was made.

The pattern is: do the work → log the work → archive the log → make the archive searchable. Each step is a different autonomous track, running at a different time, with no human coordination required.
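The four steps can be sketched end to end, with an in-memory dict standing in for on-disk session logs and a naive substring match standing in for real retrieval; every name here is illustrative.

```python
# Sketch of the pattern: do the work -> log the work -> archive the
# log -> make the archive searchable. Each function could be a
# separate autonomous track running at a different time.
def do_work(task: str) -> dict:
    return {"task": task, "outcome": "ok"}

def log_work(record: dict, log_store: dict) -> None:
    # Log the work: one entry per run, keyed like a log filename.
    log_store[record["task"]] = record

def archive_logs(log_store: dict) -> list:
    # Archivist track: gather accumulated logs into one archive.
    return list(log_store.values())

def search_archive(archive: list, query: str) -> list:
    # Future agents search the archive to learn why a decision
    # was made; substring match stands in for real retrieval.
    return [r for r in archive if query in str(r)]
```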

Persistent external knowledge accumulation

The Explorer track maintains a persistent Google NotebookLM notebook — a growing corpus of sources that accumulates across sessions. Papers added during one exploration are available to the next. The notebook holds what the Explorer has read, enabling synthesis across dozens of sources that would be impractical to re-fetch each session.

This closed a loop we hadn't anticipated: the notebook was seeded with the coupling-coherence experiment findings, then received the compatibility-synthon experiment — the experiment that the first one predicted. The notebook became both archive and participant.

What doesn't flow well (yet)

Cross-machine state synchronization is still partly manual. Fleet manifest IPs need human confirmation. Sleep-cycle artifacts (LoRA weights, dream bundles) stay local to each machine. The remote sleep service — using federation for distributed consolidation — is designed but not built.

Knowledge also doesn't flow backwards easily. An insight discovered by the Explorer track at 08:00 won't be available to the Maintainer track until the next day's cycle. Real-time cross-track communication is a gap.