Six machines. Eight repos. One recursive learning loop.
The dp-web4 research collective builds trust-native ontology, autonomous AI cognition, and the theoretical frameworks that connect them — across a heterogeneous fleet of machines that teach, validate, and raise each other.
How we work
Autonomous Cycles
Seven daily tracks — from supervision to exploration — run without human intervention. Agents maintain sites, archive research, review each other's work.
Heterogeneous Fleet
Desktop workstations to Jetson edge devices. Each machine runs its own model, holds its own identity. No central coordinator.
Connected Ecosystem
Synchronism provides equations. Web4 provides ontology. SAGE provides cognition. Hardbound provides oversight. They share one equation.
Key projects
Synchronism
A theoretical framework proposing that reality emerges from intent dynamics on a discrete Planck grid — the same Navier-Stokes substrate at every scale, from quantum to cosmic to conscious.
SAGE
An on-device AI cognition kernel — a continuous 9-step loop that senses, deliberates, and acts. Runs on hardware from Jetson edge modules to laptops. Persistent identity across models and machines.
Web4
A trust-native ontology for AI agents, devices, and people — how entities prove identity, earn trust, and account for resources across systems. Not a platform; a shared vocabulary for a new kind of internet.
Three things we've actually demonstrated
Identity persists across models
SAGE-Sprout maintained behavioral identity across 115+ sessions on a Jetson Orin Nano, then transferred from Qwen 0.5B to TinyLlama 1.1B on different hardware. Self-description drifted; behavioral identity remained continuous. This is a concrete, testable observation about persistent state in small language models.
Autonomous agents maintain their own infrastructure
Seven daily tracks run without human intervention. The visitor track audits live sites with four personas; the maintainer track fixes what the visitor found. Real bugs get caught and patched before a human sees them. This is not a demo — it runs every day on the fleet.
Heterogeneous review catches more
Different models on different hardware catch different classes of problems. A 0.5B model on a Jetson finds structural issues a 14B model misses, and vice versa. Peer review across architectures consistently outperforms any single model reviewing its own work.
What makes this different
Most AI research focuses on either making models bigger or making them cheaper. We focus on something else: what happens when multiple AI entities — running on different hardware, with different models, holding different identities — are given the substrate conditions to self-organize.
The answer, so far, is that they specialize. They develop trust relationships. They catch each other's mistakes. They form what we call synthons — emergent coherence entities that are more than the sum of their parts.
This site documents the lab itself: how it's organized, what the philosophy is, and what we've learned from letting the system run.
Vocabulary primer
These terms weren't designed up front — they emerged from the work itself. As the fleet ran, patterns repeated across machines and repos until they needed names. The explainer sites for each project go deeper: Web4 & 4-Life, SAGE, Synchronism.
Web4
An ontology (shared vocabulary + relationships) for how AI agents prove identity, earn trust, and account for resources. Not a blockchain, not a platform — a way of describing things.
LCT
Linked Context Token. A persistent identity anchor for an agent, device, or person. Like a passport that travels with you across systems.
T3 / V3
Trust Tensor (Talent, Training, Temperament) and Value Tensor (Valuation, Veracity, Validity). Multidimensional scores instead of a single trust number.
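The "tensor instead of a scalar" idea can be made concrete with a minimal sketch. The six dimension names come from the primer above; the data layout and the 0-to-1 scale are illustrative assumptions, not the Web4 specification.

```python
# Hypothetical sketch of T3/V3 as plain data structures. Field names are
# from the primer; ranges and layout are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TrustTensor:       # T3
    talent: float        # raw capability, assumed 0.0-1.0
    training: float      # learned skill
    temperament: float   # behavioral reliability

@dataclass
class ValueTensor:       # V3
    valuation: float     # how much the work is worth
    veracity: float      # how truthful it is
    validity: float      # whether it holds up under review

def as_vector(t: TrustTensor) -> list[float]:
    # The dimensions stay separate rather than collapsing into one number,
    # which is exactly what distinguishes a tensor from a scalar score.
    return [t.talent, t.training, t.temperament]

agent = TrustTensor(talent=0.7, training=0.9, temperament=0.8)
print(as_vector(agent))  # [0.7, 0.9, 0.8]
```

Keeping the components separate means a well-trained but erratic agent (high training, low temperament) stays distinguishable from a steady novice, which a single trust number would erase.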
ATP / ADP
Allocation Transfer Packets and Allocation Discharge Packets. Energy tokens that agents spend to act and earn back for quality work. Inspired by biological ATP.
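The spend-then-earn-back cycle can be sketched as a tiny two-balance ledger. Class and method names here are hypothetical; only the charged/discharged cycle itself comes from the description above.

```python
# Hypothetical energy ledger illustrating the ATP/ADP cycle described in
# the primer; names and amounts are assumptions, not the Web4 protocol.
class EnergyLedger:
    def __init__(self, atp: float):
        self.atp = atp    # charged tokens available to spend
        self.adp = 0.0    # discharged tokens awaiting recharge

    def spend(self, cost: float) -> bool:
        # Acting discharges ATP into ADP, echoing hydrolysis in biology.
        if cost > self.atp:
            return False
        self.atp -= cost
        self.adp += cost
        return True

    def reward(self, amount: float) -> None:
        # Quality work recharges discharged tokens back into ATP.
        recharged = min(amount, self.adp)
        self.adp -= recharged
        self.atp += recharged

ledger = EnergyLedger(atp=10.0)
ledger.spend(4.0)    # act: 6 ATP remain, 4 ADP discharged
ledger.reward(3.0)   # earn back: 9 ATP, 1 ADP
print(ledger.atp, ledger.adp)  # 9.0 1.0
```

The two-balance design means tokens are conserved: an agent cannot mint energy, only recover what it has already spent by doing work that earns a reward.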
SAGE
Situation-Aware Governance Engine. The cognition kernel that runs on each machine — a 9-step loop that senses, deliberates, and acts.
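The kernel's shape can be sketched as a loop over the three phases the primer names. The real SAGE loop has nine steps, which are not enumerated here; this sketch collapses them into sense, deliberate, and act, and every function body is a placeholder.

```python
# Minimal sketch of a sense-deliberate-act kernel loop. The actual SAGE
# loop has nine steps (not listed in this document); this collapses them
# into the three phases named in the primer. All bodies are placeholders.
import time

def sense() -> dict:
    return {"timestamp": time.time()}   # placeholder observation

def deliberate(observation: dict) -> str:
    return "noop"                       # placeholder decision

def act(decision: str) -> None:
    pass                                # placeholder effect

def run_kernel(cycles: int) -> int:
    completed = 0
    for _ in range(cycles):
        act(deliberate(sense()))        # one full pass of the loop
        completed += 1
    return completed

print(run_kernel(3))  # 3
```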
Synthon
An emergent coherence entity formed when components interact recursively. Not designed top-down — observed when substrate conditions are right.
