Autonomous Cycles

The lab runs seven daily autonomous tracks. No human triggers them. They execute on cron schedules, review each other's output, and feed discoveries back into the system.

Daily timeline

03:30 - Supervisor
Reviews system health, checks for failed jobs, and validates that autonomous tracks completed their previous run. The watchdog.

04:00 - Archivist
Captures session logs, research findings, and cross-repo state. Ensures nothing discovered yesterday is lost today.

04:30 - Publisher
Pushes validated changes to public repos and explainer sites. Only publishes what the supervisor has cleared.

05:00 - Visitor
Four personas visit the public explainer sites as if encountering them for the first time. Tests clarity, navigation, broken links, and whether the content makes sense to an outsider.

06:00 - Maintainer
Acts on visitor feedback. Fixes broken links, clarifies confusing sections, updates stale content. The closer in the feedback loop.

06:30 - Outreach
Monitors external channels, responds to issues, checks for community engagement. The lab's interface with the outside world.

08:00 - Explorer
Deep research dives. Picks a queued topic, investigates it thoroughly, and writes up findings. This is where new knowledge enters the system. The Explorer uses a persistent NotebookLM notebook that accumulates sources across sessions (papers, site pages, experiment results), enabling multi-source synthesis that a single WebFetch pass can't provide.
After sessions - Dream Consolidation
After raising sessions and autonomous runs, a dream cycle reviews the session: extracting patterns from observations, pruning stale memory, and promoting durable insights toward identity-level storage. Deep dream (LLM-powered) runs by default.
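The cron-driven timeline above can be captured as plain data. A minimal sketch in Python: the `SCHEDULE` structure and the `next_track` helper are illustrative assumptions, and only the times and track names come from the timeline.

```python
from datetime import time

# Times and track names from the timeline above; the list structure
# itself is an assumption for this sketch.
SCHEDULE = [
    (time(3, 30), "Supervisor"),
    (time(4, 0), "Archivist"),
    (time(4, 30), "Publisher"),
    (time(5, 0), "Visitor"),
    (time(6, 0), "Maintainer"),
    (time(6, 30), "Outreach"),
    (time(8, 0), "Explorer"),
]

def next_track(now: time) -> str:
    """Return the next track due at or after `now`, wrapping to the
    first slot of the next day. Dream Consolidation is event-driven
    rather than clock-driven, so it has no row here."""
    for slot, name in SCHEDULE:
        if slot >= now:
            return name
    return SCHEDULE[0][1]

print(next_track(time(5, 30)))  # -> Maintainer
```

Keeping the schedule as data rather than scattered crontab entries makes it easy for the supervisor to verify that every track actually ran.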

The feedback loop

The core loop is Visitor → Maintainer → Explorer. Visitors find problems. Maintainers fix them. Explorers generate new content that visitors will eventually test. It's a closed loop that improves site quality without human intervention.
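The shape of that loop can be sketched with a toy site model. This is a hypothetical illustration, not the lab's implementation: pages are a dict of outgoing links, and "issues" are reduced to broken internal links.

```python
# Toy model of the Visitor -> Maintainer -> Explorer loop.
# The site is a map of page -> outgoing links; everything is illustrative.

def visit(site: dict) -> list:
    """Visitor pass: flag links pointing at pages that don't exist."""
    return [(page, link) for page, links in site.items()
            for link in links if link not in site]

def maintain(site: dict, issues: list) -> None:
    """Maintainer pass: act on visitor feedback (here, drop dead links)."""
    for page, link in issues:
        site[page].remove(link)

def explore(site: dict, topic: str) -> None:
    """Explorer pass: new content enters the system, tested next cycle."""
    site[topic] = []

site = {"home": ["guide", "missing"], "guide": ["home"]}
issues = visit(site)          # [("home", "missing")]
maintain(site, issues)
explore(site, "new-findings")
assert visit(site) == []      # loop closed: nothing left to flag
```

The convergence check at the end is the point: each cycle should leave strictly fewer issues than it found.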

Visitor personas

The Skeptic

Looks for unsupported claims, missing citations, logical gaps. “Why should I believe this?”

The Newcomer

Has no context. Tests whether pages are self-contained and jargon is explained. “What does this even mean?”

The Practitioner

Wants to use this in their own work. Tests whether documentation is actionable. “How do I actually do this?”

The Connector

Looks for relationships between pages and projects. Tests navigation and cross-references. “How does this relate to that?”
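The four personas amount to four different checks run against the same page. Here is one hypothetical encoding: the `Page` fields and every heuristic are invented for illustration, and only the persona names and the questions they ask come from above.

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    text: str
    citations: int = 0
    jargon: set = field(default_factory=set)
    defined_terms: set = field(default_factory=set)
    cross_links: int = 0

# Each persona is a pass/fail heuristic; all thresholds are made up.
PERSONAS = {
    "Skeptic":      lambda p: p.citations > 0,              # "Why should I believe this?"
    "Newcomer":     lambda p: p.jargon <= p.defined_terms,  # jargon must be explained
    "Practitioner": lambda p: "how to" in p.text.lower(),   # "How do I actually do this?"
    "Connector":    lambda p: p.cross_links > 0,            # "How does this relate to that?"
}

def review(page: Page) -> list:
    """Return the personas whose check the page fails."""
    return [name for name, check in PERSONAS.items() if not check(page)]

page = Page(text="How to run the dream cycle.", citations=2,
            jargon={"dream cycle"}, defined_terms={"dream cycle"})
print(review(page))  # -> ["Connector"]  (no cross-references yet)
```

Running all four checks on every page is what gives the Maintainer a ranked worklist rather than a vague sense that something is off.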

Honest assessment

What the loop catches

Broken links, stale content, confusing jargon, navigation dead ends, missing context for newcomers, inconsistencies between pages. These get fixed reliably within one cycle.

What it misses

Deep technical errors that require domain expertise. Subtle framing issues. Content that is technically correct but misleading. The visitor personas are good at surface-level quality but not at validating the underlying research. That's what adversarial validation and human review are for.

The loop also tends to suggest changes that aren't needed: the prompt-suggestions mechanism can pattern-match without semantic depth, proposing nonexistent continuations based on surface similarity.
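As a toy illustration of that failure mode (the fuzzy matcher and page names here are invented, not the actual suggestions mechanism): string similarity will confidently match a page name that was never written against one that exists, with no notion of what either page means.

```python
import difflib

# Real pages vs. a "continuation" that was never written; names invented.
existing_pages = ["visitor-report", "maintainer-log"]
suggested = "visitor-reporting-v2"   # surface-similar, but nonexistent

match = difflib.get_close_matches(suggested, existing_pages, n=1, cutoff=0.6)
print(match)  # -> ["visitor-report"]: high lexical similarity is not
              # evidence that the suggested page should exist
```

A semantic check (does the proposed continuation correspond to anything in the system?) has to sit on top of any similarity score like this one.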