
Companion Addendum to Paper v21.0

Substrate landings since the v20.0 cut (snapshot 2 April 2026) through the 2026-05-10 operational snapshot.

Why this addendum exists

The FDRP paper is at v21.0 (publication date 16 April 2026). PDF-era evidence is frozen at the v20.0 cut (snapshot 2 April 2026). The substrate has continued to evolve since. This addendum is a companion to the current v21.0 paper, not a plan for a future revision: it surfaces the substantive substrate landings shipped between paper revisions so the public state-of-system stays current.

This addendum surfaces substrate landings in the 2026-04-16 → 2026-05-10 window — between v21.0 publication and the current operational snapshot. Everything below is operational at the snapshot timestamp shown in the operational-substrate section.

Since v20.0 publication: six substantive waves

Between v20.0 and 2026-05-10, the substrate received the following material developments. Full chronological detail is in the timeline.

  • Wave R10 cluster — Kaggle-First Compute Policy, Pipeline Watchdog, Sub-Claude Resurrection daemon, Cross-Comp Routing, Pre-Stage and Cross-Model Strategy crons.
  • Wave R11 — GPT-5.5-leader cross-model verification protocol; PEAR with N≥3 independent models became the default for non-trivial decisions.
  • Wave R20 — the Computational Engineering Backbone: 72 Cargo.toml manifests across 7 parent projects, three confirmed encoded-physics crates totalling 25,273 LOC (plus a separate BIND-059 closed-form math exemplar).
  • Waves R21+R22+R23 — the router/cognition substrate. R23 Phase 1 is live; R21 (Core Router Memory) and R22 (Context Injection Routing) are designed with sub-waves gated.
  • Wave Scope IX MLM Phase 1 — Manager persona with six explicit decision-classes and three new BIND-MLM rules; nine-reviewer cross-model convergence.
  • FDRP run #1033 — the recursive fdrp-on-fdrp refresh that produced this addendum and the accompanying website batches (in progress at snapshot time).

Plus: the tick-work-meter was operationalised — a per-tick volume and relevance classifier that grades agent output as the work happens, rather than after the fact.

Predates v20.0 but surfaced for completeness: the antimatter-building Round 8 closeout (FDRP run #1017, 2026-03-21) converged 200 specialist perspectives across 20+ domains and is the strongest empirical anchor for the expert-persistence pattern. It lands before v20.0 and is therefore in-scope for the v21.0 paper itself, not this addendum — flagged here because it was missing from the timeline summary at v21.0 cut.

R20 — Computational Engineering Backbone

R20 is the most material structural landing since v20.0. The substrate now hosts 72 Cargo.toml manifests across 7 parent projects, with three confirmed encoded-physics crates in production use:

  • interceptor-gen (2,973 LOC) — encoded-physics intercept geometry
  • airguard-dsp (~12,500 LOC) — deterministic acoustic DSP
  • flexibil rubber-rec (~9,800 LOC) — rubber-recipe deterministic kernel

Combined, these three encoded-physics crates account for 25,273 lines of Rust — compiled, tested, and audited along the path that produced them. A fourth crate, lih roi_compute_v2 (459 LOC), is listed as a separate BIND-059 software-audited calculation exemplar — closed-form math rather than parametric physics, so it is reported alongside but not folded into the encoded-physics total.

The framing is deliberate: encoded physics is not generative AI. These are deterministic engineering kernels with reproducible audit trails, used as the grounded substrate that LLM-driven planning sits on top of. The CAR architecture (Rust engine, LLM driver) makes the boundary explicit.
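The engine/driver boundary can be sketched in a few lines of Rust. Everything here is illustrative — the types, the placeholder physics, and the function names are assumptions, not the real interceptor-gen API; only the principle (pure deterministic kernel, planner outside the boundary) comes from the text above.

```rust
/// Hypothetical sketch of the CAR boundary: the Rust engine is a pure,
/// deterministic function; the LLM driver may only choose its inputs.
#[derive(Debug, Clone, Copy, PartialEq)]
struct InterceptInput {
    target_range_m: f64,
    target_speed_mps: f64,
}

#[derive(Debug, Clone, Copy, PartialEq)]
struct InterceptSolution {
    time_to_intercept_s: f64,
}

/// Deterministic engineering kernel: same input, same output, auditable.
fn solve_intercept(input: InterceptInput) -> InterceptSolution {
    // Closed-form placeholder physics (constant closing speed) — not the
    // real encoded-physics model, just enough to show the boundary.
    let closing_speed_mps = 300.0 + input.target_speed_mps;
    InterceptSolution {
        time_to_intercept_s: input.target_range_m / closing_speed_mps,
    }
}

fn main() {
    let input = InterceptInput { target_range_m: 6_000.0, target_speed_mps: 100.0 };
    // The LLM driver sits outside this function: it proposes `input`,
    // the kernel computes, and the input/output pair is logged for audit.
    let a = solve_intercept(input);
    let b = solve_intercept(input);
    assert_eq!(a, b); // reproducibility along the audit path
    println!("time_to_intercept_s = {}", a.time_to_intercept_s);
}
```

The point of the sketch is the type signature: no network, no randomness, no model call inside the kernel, so the audit trail is a pure function of its inputs.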

Deep-dive: R20 Computational Engineering subsystem →
Methodology summary: Computational Engineering Backbone →

R21 + R22 + R23 — Router/cognition substrate

The PIO-as-router doctrine (BIND-052) needs a substrate to land on. Three coordinated waves provide it:

  • R23 Plan-Mode Architecture — Phase 1 LIVE (2026-05-10). Three new tables shipped: plan_registry, relevance_scores, and intersection_proposals (fdrp_evolution_log entries 735, 736, 737). These let the router track plans, score per-tick relevance, and surface cross-plan intersection proposals.
  • R21 Core Router Memory — designed. Sub-waves gated on R23 stabilisation. The brief: working memory the router can write to and read from across ticks without reloading the world.
  • R22 Context Injection Routing — designed. Sub-waves gated. Decides what context gets pushed into which specialist agent on dispatch, sized to the agent's context budget.
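The R22 brief — deciding what context gets pushed to which agent within its budget — admits a minimal sketch. The greedy scoring-and-packing below is an assumption about how such a router might work, not the shipped design; the `relevance` field is only loosely inspired by the relevance_scores table named above.

```rust
/// Illustrative sketch of R22-style context injection routing: select the
/// highest-relevance context items that fit a specialist agent's token
/// budget. Names, scores, and the greedy strategy are all assumptions.
#[derive(Debug, Clone)]
struct ContextItem {
    key: &'static str,
    relevance: f64, // e.g. this tick's score for the item
    tokens: usize,  // cost against the agent's context budget
}

/// Greedy fill by descending relevance until the budget is exhausted.
fn route_context(mut items: Vec<ContextItem>, budget_tokens: usize) -> Vec<&'static str> {
    items.sort_by(|a, b| b.relevance.partial_cmp(&a.relevance).unwrap());
    let mut used = 0;
    let mut selected = Vec::new();
    for item in items {
        if used + item.tokens <= budget_tokens {
            used += item.tokens;
            selected.push(item.key);
        }
    }
    selected
}

fn main() {
    let items = vec![
        ContextItem { key: "plan_summary", relevance: 0.9, tokens: 800 },
        ContextItem { key: "full_history", relevance: 0.4, tokens: 4_000 },
        ContextItem { key: "open_blockers", relevance: 0.8, tokens: 600 },
    ];
    // A 2,000-token agent gets the two high-relevance items; the bulky
    // low-relevance history is dropped rather than squeezing out the plan.
    assert_eq!(route_context(items, 2_000), vec!["plan_summary", "open_blockers"]);
}
```

The design choice the sketch highlights: sizing to the agent's budget is a selection problem, not a truncation problem — whole items are admitted or dropped by relevance.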

The tick-work-meter — volume axis (DEAD / THIN / NORMAL / DENSE) crossed with relevance axis (DEAD / VOLUME_INFLATION / LOW_LIFT / ALIGNED / HIGH_LIFT) — is the feedback loop that closes the router substrate, grading work as it happens so the router can re-plan mid-flight.
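The two axes can be sketched as a pair of Rust enums with a per-tick grading function. Only the axis labels come from the text above; the input signals (`output_tokens`, `alignment`) and the thresholds are invented for illustration.

```rust
/// Minimal sketch of the tick-work-meter's two axes. Thresholds are
/// illustrative assumptions, not the operational classifier's values.
#[derive(Debug, PartialEq)]
enum Volume { Dead, Thin, Normal, Dense }

#[derive(Debug, PartialEq)]
enum Relevance { Dead, VolumeInflation, LowLift, Aligned, HighLift }

/// Grade one tick of agent output from two observed signals:
/// raw output size and an alignment score against the active plan.
fn grade_tick(output_tokens: usize, alignment: f64) -> (Volume, Relevance) {
    let volume = match output_tokens {
        0 => Volume::Dead,
        1..=200 => Volume::Thin,
        201..=2_000 => Volume::Normal,
        _ => Volume::Dense,
    };
    let relevance = if output_tokens == 0 {
        Relevance::Dead
    } else if alignment < 0.2 {
        // Plenty of output, little plan movement: volume inflation.
        Relevance::VolumeInflation
    } else if alignment < 0.5 {
        Relevance::LowLift
    } else if alignment < 0.8 {
        Relevance::Aligned
    } else {
        Relevance::HighLift
    };
    (volume, relevance)
}

fn main() {
    // Graded as the work happens, so the router can re-plan mid-flight.
    assert_eq!(grade_tick(0, 0.0), (Volume::Dead, Relevance::Dead));
    assert_eq!(grade_tick(3_000, 0.1), (Volume::Dense, Relevance::VolumeInflation));
    assert_eq!(grade_tick(900, 0.9), (Volume::Normal, Relevance::HighLift));
}
```

Crossing the axes is what makes the meter useful: DENSE × VOLUME_INFLATION is a distinct failure mode from DEAD × DEAD, and each can trigger a different router response.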

Methodology: Plan-mode architecture →
Methodology: Context routing →
Methodology: Core router memory →
Methodology: Tick-work-meter →

Wave Scope IX — MLM Phase 1

Manager-Like Mode (MLM) Phase 1 codifies a six-class decision hierarchy with explicit pre-emption order:

  1. ANDON (safety / stop-the-line)
  2. CAPACITY (compute and bandwidth bounds)
  3. CLIENT (paying-work obligations)
  4. HARVEST (in-flight runs needing finish)
  5. STRATEGIC (programme moves)
  6. COUNSEL (advisory; disabled by default)
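The pre-emption order above maps naturally onto a Rust enum whose declaration order is its priority order (`derive(Ord)` on a fieldless enum compares by discriminant). The selection function and the `counsel_enabled` flag are assumptions about how "disabled by default" might be modelled, not the shipped MLM code.

```rust
/// Sketch of the six MLM decision classes with pre-emption order encoded
/// in declaration order: derive(Ord) makes earlier variants compare lower,
/// so the minimum pending class is the one that pre-empts the rest.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum DecisionClass {
    Andon,     // safety / stop-the-line: pre-empts everything
    Capacity,  // compute and bandwidth bounds
    Client,    // paying-work obligations
    Harvest,   // in-flight runs needing finish
    Strategic, // programme moves
    Counsel,   // advisory; disabled by default
}

/// Pick the decision class to act on: lowest discriminant wins.
fn next_decision(pending: &[DecisionClass], counsel_enabled: bool) -> Option<DecisionClass> {
    pending
        .iter()
        .copied()
        .filter(|c| counsel_enabled || *c != DecisionClass::Counsel)
        .min()
}

fn main() {
    let pending = [DecisionClass::Strategic, DecisionClass::Andon, DecisionClass::Client];
    // ANDON pre-empts CLIENT and STRATEGIC.
    assert_eq!(next_decision(&pending, false), Some(DecisionClass::Andon));
    // COUNSEL alone yields nothing while disabled by default.
    assert_eq!(next_decision(&[DecisionClass::Counsel], false), None);
}
```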

Schema impact was deliberately minimal: three additive nullable ALTERs and one new view, no net-new tables. That choice corrects schema-duplication recurrence #5 — the recurring pattern where new manager logic spawned parallel tables instead of extending existing ones.

Convergence: nine reviewers across two model families (five PEAR + four GPT-5.5 PEAR) produced 28 cumulative must-fixes, all closed before publication. Three new binding rules came out of the exercise: BIND-MLM-001 (signal flow, faster → slower), BIND-MLM-002 (policy flow, slower → faster), BIND-MLM-003 (manager persona).

R10 cluster — Kaggle pipeline operations

R10 made Kaggle a first-class operational lane rather than a best-effort sidecar. Six coordinated subsystems landed:

  • Kaggle-First Compute Policy — default scheduler routes GPU work to Kaggle before consuming local Node 3 budget.
  • Pipeline Watchdog — auto-dispatches on pending signals; no run sits idle waiting for human dispatch.
  • Sub-Claude Resurrection daemon — restarts collapsed sub-sessions with state replay.
  • Cross-Comp Routing — detects cross-competition portable signal and re-routes.
  • Pre-Stage cron — tomorrow's Kaggle work is staged tonight.
  • Cross-Model Strategy cron — daily Opus + GPT-5.5 + Gemini 3.1 strategy convergence.

The cluster operationalises the autonomy charter directives (R10.1 Kaggle GPU primary, R10.2/R10.5 never-stop pipeline with D-1/D0/D+1 overlap).
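The Kaggle-first routing rule reduces to a small ordered fallback. The sketch below is an assumption about the policy's shape — the quota accounting, field names, and numbers are invented; only "Kaggle before Node 3 budget" comes from the text above.

```rust
/// Hedged sketch of the Kaggle-First Compute Policy: GPU work goes to
/// Kaggle capacity first and only falls back to the local Node 3 budget.
#[derive(Debug, PartialEq)]
enum ComputeLane { Kaggle, Node3Local }

struct SchedulerState {
    kaggle_gpu_hours_left: f64,
    node3_gpu_hours_left: f64,
}

/// Kaggle before local: spend Node 3 budget only when Kaggle can't fit the job.
fn route_gpu_job(state: &SchedulerState, job_gpu_hours: f64) -> Option<ComputeLane> {
    if state.kaggle_gpu_hours_left >= job_gpu_hours {
        Some(ComputeLane::Kaggle)
    } else if state.node3_gpu_hours_left >= job_gpu_hours {
        Some(ComputeLane::Node3Local)
    } else {
        None // left pending; a watchdog would retry on the next quota window
    }
}

fn main() {
    let state = SchedulerState { kaggle_gpu_hours_left: 2.0, node3_gpu_hours_left: 10.0 };
    // A 1.5h job goes to Kaggle even though Node 3 has more headroom.
    assert_eq!(route_gpu_job(&state, 1.5), Some(ComputeLane::Kaggle));
    // A 5h job overflows Kaggle's remaining quota and falls back to local.
    assert_eq!(route_gpu_job(&state, 5.0), Some(ComputeLane::Node3Local));
}
```

The `None` arm is where the Pipeline Watchdog's role would slot in: a job that fits neither lane stays pending rather than silently dropping.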

Operational substrate — live snapshot

Counts below are bound to a live MySQL snapshot taken on 2026-05-10. Source-of-truth JSON: /paper/data/current-state-snapshot.json.

  • FDRP runs: 76
  • Roadmap items (total): 96
  • Roadmap queued: 44
  • Roadmap in-flight: 9
  • Roadmap shipped: 38
  • Roadmap held: 2
  • Roadmap blocked: 2
  • Roadmap deferred: 1
  • Base tables: 730
  • Views: 338
  • Expert registry: 446
  • System rules: 86
  • Knowledge index: 1,110
  • fdrp_evolution_log: 752

Run #1033 alone added 15 evolution-log entries through this fdrp-on-fdrp refresh (max evo_log_id 752 at snapshot time). Live state, with current Andon, capacity, and convergence telemetry, is at /dashboard/.

Beyond v21.0

A future paper revision will codify R20 in production (drone-eval delivery), the R21+R22+R23 substrate-to-flow integration once R21 and R22 sub-waves complete, and the cumulative MLM founder-bandwidth model now that BIND-MLM-001 through 003 are operational. None of that is a plan or a promise yet — v21.0 is the current paper, and this addendum is its current-state companion. The next paper will land when the substrate is ready, on its own merits.