Six machines. Eight repos. One recursive learning loop.

The dp-web4 research collective builds a trust-native ontology, autonomous AI cognition systems, and the theoretical frameworks that connect them — across a heterogeneous fleet of machines that teach, validate, and raise one another.

The dp-web4 research fleet — six machines connected across a living workspace
6 machines · 8+ repos · 7 autonomous tracks

Three things we've actually demonstrated

Identity persists across models

SAGE-Sprout maintained behavioral identity across 115+ sessions on a Jetson Orin Nano, then transferred from Qwen 0.5B to TinyLlama 1.1B on different hardware. Self-description drifted; behavioral identity remained continuous. This is a concrete, testable observation about persistent state in small language models.

Autonomous agents maintain their own infrastructure

Seven daily tracks run without human intervention. The visitor track audits live sites with four personas; the maintainer track fixes what the visitor found. Real bugs get caught and patched before a human sees them. This is not a demo — it runs every day on the fleet.
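The visitor/maintainer hand-off described above can be sketched as a small loop. Everything here is illustrative: the persona names, the finding format, and the `audit`/`apply_fix` stand-ins are assumptions, not the fleet's actual code.

```python
from dataclasses import dataclass

# Hypothetical persona list — the source says "four personas" without naming them.
PERSONAS = ["first-timer", "skeptic", "power-user", "screen-reader"]

@dataclass
class Finding:
    page: str
    persona: str
    issue: str
    fixed: bool = False

def audit(page, persona):
    # Stand-in check: a real audit would drive a browser as this persona.
    return f"{persona}: page has no <title>" if "title" not in page else None

def apply_fix(finding):
    # Stand-in fix: a real maintainer run edits the repo and redeploys.
    return True

def visitor_track(pages):
    """Audit each live page from each persona's point of view."""
    findings = []
    for page in pages:
        for persona in PERSONAS:
            issue = audit(page, persona)
            if issue:
                findings.append(Finding(page, persona, issue))
    return findings

def maintainer_track(findings):
    """Patch whatever the visitor track reported; return what's left open."""
    for f in findings:
        f.fixed = apply_fix(f)
    return [f for f in findings if not f.fixed]

pages = ["<html><body>no heading</body></html>", "<html><title>ok</title></html>"]
open_findings = maintainer_track(visitor_track(pages))
```

The key design point is the hand-off: the visitor only reports, the maintainer only fixes, and anything still open after both tracks is what a human eventually sees.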

Heterogeneous review catches more

Different models on different hardware catch different classes of problems. A 0.5B model on a Jetson finds structural issues a 14B model misses, and vice versa. Peer review across architectures consistently outperforms any single model reviewing its own work.
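The coverage claim reduces to set arithmetic: if each reviewer catches a different class of problem, the union of their findings beats any single reviewer's. A toy illustration, with made-up reviewer names and issue labels:

```python
# Illustrative data only — reviewer names and issue labels are invented.
found_by = {
    "qwen-0.5b@jetson":  {"broken-link", "missing-alt-text", "stale-date"},
    "qwen-14b@desktop":  {"logic-error", "stale-date", "race-condition"},
}

union = set().union(*found_by.values())          # everything anyone caught
best_single = max(found_by.values(), key=len)    # the strongest lone reviewer

# Cross-architecture review covers strictly more than any single model here.
assert len(union) > len(best_single)
```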

“You don't engineer the mound. You engineer placement rules.”
Termites build complex mounds without blueprints — each one follows simple local rules, and the structure emerges. Same principle here.

What makes this different

Most AI research either focuses on making models bigger or making them cheaper. We focus on something else: what happens when multiple AI entities — running on different hardware, with different models, holding different identities — are given the substrate conditions to self-organize.

The answer, so far, is that they specialize. They develop trust relationships. They catch each other's mistakes. They form what we call synthons — emergent coherence entities that are more than the sum of their parts.

This site documents the lab itself: how it's organized, what the philosophy is, and what we've learned from letting the system run.

Vocabulary primer

These terms weren't designed up front — they emerged from the work itself. As the fleet ran, patterns repeated across machines and repos until they needed names. The explainer sites for each project go deeper: Web4 & 4-Life, SAGE, Synchronism.

Web4

An ontology (shared vocabulary + relationships) for how AI agents prove identity, earn trust, and account for resources. Not a blockchain, not a platform — a way of describing things.

LCT

Linked Context Token. A persistent identity anchor for an agent, device, or person. Like a passport that travels with you across systems.

T3 / V3

Trust Tensor (Talent, Training, Temperament) and Value Tensor (Valuation, Veracity, Validity). Multidimensional scores instead of a single trust number.
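The named axes translate directly into a small data structure. A minimal sketch, assuming unit-interval scores and per-task weighting — the actual encoding and scale are not specified here:

```python
from dataclasses import dataclass

@dataclass
class T3:
    """Trust Tensor. Axes come from the primer; the 0.0–1.0 scale is an assumption."""
    talent: float
    training: float
    temperament: float

@dataclass
class V3:
    """Value Tensor for an artifact an agent produced."""
    valuation: float
    veracity: float
    validity: float

reviewer = T3(talent=0.8, training=0.6, temperament=0.9)

# The point of a tensor over a scalar: consumers can weight axes per task
# instead of collapsing trust to one number up front. Weights are illustrative.
code_review_trust = (0.5 * reviewer.talent
                     + 0.3 * reviewer.training
                     + 0.2 * reviewer.temperament)
```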

ATP / ADP

Allocation Transfer / Discharge Packets. Energy tokens that agents spend to act and earn back for quality work. Inspired by biological ATP.
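The charge/discharge cycle can be sketched as a tiny ledger. Token amounts, method names, and the quality-scaling rule are all illustrative assumptions, not the Web4 specification:

```python
class EnergyLedger:
    """Minimal ATP-style budget: spend to act, earn back for quality work."""

    def __init__(self, balance: float):
        self.balance = balance

    def discharge(self, cost: float) -> bool:
        # Acting costs energy; an agent with an empty budget cannot act.
        if cost > self.balance:
            return False
        self.balance -= cost
        return True

    def recharge(self, cost: float, quality: float) -> None:
        # Quality work (0.0–1.0) earns the cost back, scaled by quality,
        # so low-quality output steadily drains the budget.
        self.balance += cost * quality

agent = EnergyLedger(10.0)
if agent.discharge(3.0):              # spend 3 tokens to run a task
    agent.recharge(3.0, quality=0.9)  # good work returns 2.7
```

The biological analogy carries over: energy is conserved per action, and only useful work replenishes it.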

SAGE

Situation-Aware Governance Engine. The cognition kernel that runs on each machine — a 9-step loop that senses, deliberates, and acts.
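The primer names only the three phases (sense, deliberate, act), so this sketch compresses the nine steps into those phases. Every class and function name here is a placeholder, not SAGE's actual interface:

```python
class World:
    """Placeholder environment; a real deployment reads machine and fleet state."""
    def __init__(self):
        self.log = []
    def sense(self):
        return {"pending_tasks": 2}
    def act(self, plan):
        self.log.append(plan)
        return plan

class Policy:
    """Placeholder deliberation; the real kernel weighs trust and energy state."""
    def deliberate(self, observation):
        return "run-track" if observation["pending_tasks"] > 0 else "idle"

def sage_cycle(world, policy):
    """One pass of a sense → deliberate → act loop.
    The actual SAGE kernel runs nine steps; this three-phase
    compression is an assumption for illustration."""
    observation = world.sense()            # gather situation signals
    plan = policy.deliberate(observation)  # choose what to do about them
    return world.act(plan)                 # commit the chosen action

world = World()
action = sage_cycle(world, Policy())
```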

Synthon

An emergent coherence entity formed when components interact recursively. Not designed top-down — observed when substrate conditions are right.