Autonomous Cycles
The lab runs seven daily autonomous tracks. No human triggers them. They execute on cron schedules, review each other's output, and feed discoveries back into the system.
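As a sketch, the scheduling described above could be expressed as ordinary crontab entries, one per track. The times, the `run-track` command, and the track names below are invented placeholders to illustrate the shape, not the lab's actual configuration.

```
# Hypothetical crontab sketch: one entry per autonomous track.
# All times, commands, and names are illustrative assumptions.
0 6 * * *   run-track visitors      # visitor personas review the site
0 9 * * *   run-track maintainers   # maintainers fix flagged problems
0 12 * * *  run-track explorers     # explorers generate new content
# ...plus the remaining daily tracks on their own schedules.
```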
Daily timeline
The feedback loop
The core loop is Visitor → Maintainer → Explorer. Visitors find problems. Maintainers fix them. Explorers generate new content that visitors will eventually test. It's a closed loop that improves site quality without human intervention.
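The closed loop can be sketched in a few lines. Everything here is a toy illustration under stated assumptions: the draft-marker check, the page dictionary, and the function names are invented for the example, not the lab's real implementation.

```python
# Toy sketch of the Visitor -> Maintainer -> Explorer loop.
# The "[DRAFT]" marker check stands in for real quality checks;
# all names and structures here are illustrative assumptions.

def visitor(site):
    """Find problems: here, pages still carrying a draft marker."""
    return [name for name, body in site.items() if "[DRAFT]" in body]

def maintainer(site, problems):
    """Fix flagged problems in place."""
    for name in problems:
        site[name] = site[name].replace("[DRAFT]", "").strip()

def explorer(site, cycle):
    """Generate new content that a future visitor pass will test."""
    site[f"note-{cycle}"] = f"[DRAFT] auto-generated note {cycle}"

def run_cycle(site, cycle):
    problems = visitor(site)    # visitors find problems
    maintainer(site, problems)  # maintainers fix them
    explorer(site, cycle)       # explorers add content for the next pass

site = {"home": "[DRAFT] welcome page"}
run_cycle(site, 1)
run_cycle(site, 2)
```

After two cycles, "home" and the first generated note have been fixed, while the newest note still awaits review on the next pass: each cycle's explorer output becomes the next cycle's visitor input, which is what closes the loop.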
Visitor personas
The Skeptic
Looks for unsupported claims, missing citations, logical gaps. “Why should I believe this?”
The Newcomer
Has no context. Tests whether pages are self-contained and jargon is explained. “What does this even mean?”
The Practitioner
Wants to use this in their own work. Tests whether documentation is actionable. “How do I actually do this?”
The Connector
Looks for relationships between pages and projects. Tests navigation and cross-references. “How does this relate to that?”
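The four personas above can be encoded as data, pairing each persona's focus with its probe question. The quoted questions and focus areas come from the descriptions above; the dict structure and the `probe` helper are illustrative assumptions about how such personas might be represented, not the lab's actual schema.

```python
# The four visitor personas as data. Questions and focus areas are taken
# from the persona descriptions; the encoding itself is a hypothetical sketch.

PERSONAS = {
    "Skeptic": ("unsupported claims, missing citations, logical gaps",
                "Why should I believe this?"),
    "Newcomer": ("whether pages are self-contained and jargon is explained",
                 "What does this even mean?"),
    "Practitioner": ("whether documentation is actionable",
                     "How do I actually do this?"),
    "Connector": ("navigation and cross-references between pages and projects",
                  "How does this relate to that?"),
}

def probe(name: str) -> str:
    """Render the question a given persona brings to a page."""
    focus, question = PERSONAS[name]
    return f'{name} asks: "{question}" (testing {focus})'
```

Keeping personas as data rather than separate scripts would let every review pass iterate over the same set, so adding a fifth persona is a one-line change.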
Honest assessment
What the loop catches
Broken links, stale content, confusing jargon, navigation dead ends, missing context for newcomers, inconsistencies between pages. These get fixed reliably within one cycle.
What it misses
Deep technical errors that require domain expertise. Subtle framing issues. Content that is technically correct but misleading. The visitor personas are good at surface-level quality but not at validating the underlying research. That's what adversarial validation and human review are for.
The loop also tends to suggest changes that aren't needed. The prompt-suggestions mechanism can pattern-match on surface similarity without semantic depth, proposing continuations that don't actually exist.