heybeaux · independent studio
A studio building
the cognitive runtime.
One pair of hands. One coherent system.
heybeaux is the studio behind Ginnung — a cognitive runtime for AI agents composed of six faculties bound by a single event bus. We build, we ship, we document, we run the audit logs ourselves.
What heybeaux is
heybeaux is a one-person studio in Powell River, BC building the substrate on which agent cognition takes form. We don't sell a tool; we build a runtime, and we run it ourselves before we ask anyone else to.
We think the rarest resource in this era is attention: where agents spend it, how systems allocate it, what gets remembered, what is permitted, what gets predicted. So we built six faculties — Memory, Reasoning, Governance, Capability, Intent, Prediction — and a single event bus that produces an audit log regulators can actually read.
The stack is open-source, the spec is public, the audit log is the product.
What we build
Six faculties, bound by one event bus.
Each faculty is independently useful and runs in production today. Together, they produce the audit trail regulators actually require.
Memory
Engram
Persistent, queryable memory for AI agents. Vector + structured storage with confidence scoring and dream-cycle consolidation.
Open Engram →
Reasoning
Parliament
Multi-agent deliberation engine with structured dissent. Eight topologies — debate, star chamber, jury, adversarial — with first-class disagreement.
Open Parliament →
Governance
Lattice
Tiered circuit-breaker validation for agent handoffs. L1 structural contract checks, L2 semantic similarity, L3 judge confidence.
Open Lattice →
Capability
ACR
Agent Capability Runtime. The OS between an agent and its tools — typed, sandboxed, replayable.
Open ACR →
Intent
AWM
Attention. Working memory. Intent. Captures step traces, signals regime changes, emits planned-action records before execution.
Open AWM →
Prediction
Le-WM (coming)
JEPA-based world model for outcome prediction with Bayesian confidence. Latent-space planning at a fraction of foundation-model compute.
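The tiered circuit-breaker validation that Lattice describes (L1 structural contract checks, L2 semantic similarity, L3 judge confidence) can be sketched as a simple escalation loop. This is a minimal illustration, not Lattice's actual API: the class names, check functions, and thresholds below are assumptions for exposition, and the L2 and L3 checks are stubbed heuristics standing in for embedding similarity and a judge-model call.

```python
# Illustrative sketch of tiered circuit-breaker validation for an agent
# handoff. Names, thresholds, and stub checks are assumptions for
# exposition only; this is not Lattice's actual API.
from dataclasses import dataclass

@dataclass
class CheckResult:
    tier: str          # "L1", "L2", or "L3"
    passed: bool
    detail: str

def l1_structural(payload: dict) -> CheckResult:
    # L1: cheap structural contract check: are the required fields present?
    ok = {"task", "output"} <= payload.keys()
    return CheckResult("L1", ok, "contract fields present" if ok else "missing fields")

def l2_semantic(payload: dict) -> CheckResult:
    # L2: semantic similarity between task and output (token-overlap stub
    # here; a real system would compare embeddings).
    overlap = set(payload.get("task", "").split()) & set(payload.get("output", "").split())
    return CheckResult("L2", len(overlap) > 0, f"token overlap={len(overlap)}")

def l3_judge(payload: dict) -> CheckResult:
    # L3: the most expensive tier: a judge model scores confidence
    # (stubbed constant standing in for a model call).
    confidence = 0.9
    return CheckResult("L3", confidence >= 0.5, f"judge confidence={confidence}")

def validate_handoff(payload: dict) -> list[CheckResult]:
    """Run tiers in order; trip the breaker (stop) at the first failure."""
    results = []
    for check in (l1_structural, l2_semantic, l3_judge):
        result = check(payload)
        results.append(result)
        if not result.passed:
            break  # circuit trips: don't spend L3 compute on an L1 failure
    return results

results = validate_handoff({"task": "summarize the report",
                            "output": "the report says revenue grew"})
print([(r.tier, r.passed) for r in results])
```

The design point the tiers encode: failures should be caught at the cheapest layer that can see them, so judge-model compute is only spent on handoffs that already pass structural and semantic screens.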
Writing
Recent essays and notes.
How I Used Karpathy's Autoresearch to Grade-A My AI Stack
One optimization loop. Three systems. Every one improved. The method is stupid simple and works on anything you can measure.
What Happens When You Give AI Memory, Then Identity, Then Awareness
A devlog about building Engram — and the two AI agents who helped build it. Three wipes, 70 tickets, and the line between tool and teammate.
Teaching AI to Remember When
I asked my AI what we did today. It had no idea. So we taught it about time — and in the process, discovered why memory is harder than anyone thinks.
The Bug That AI Couldn't See
Four hours. Two humans. One AI. A simple form that wouldn't work. Sometimes the answer isn't in the logic — it's in the gap between what should happen and what actually does.
How we work
Four commitments.
The audit log is the product.
Compliance isn't a feature bolted onto an AI system. It is the substrate. Every faculty emits structured events into a single bus. The record is the deliverable.
Attention is the rarest resource.
Where an agent spends its attention determines what it can do. Working memory, regime changes, planned-action records — visible before execution, not reconstructed after.
Disagreement is signal, not noise.
We build first-class dissent into our reasoning layer. Eight deliberation topologies, structured minority reports, surfaced confidence — not a single hidden answer.
Open spec, open stack.
The SonderEvent spec is public. The runtime is open source. We run the audit logs ourselves before we ship anything to anyone else.
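A minimal sketch of the "one bus, one audit log" idea: each faculty emits a structured event onto a shared bus, and the bus appends every event to a single ordered log that can be exported as a readable record. The field names below are illustrative assumptions, not the published SonderEvent spec.

```python
# Minimal sketch of faculties emitting structured events onto one bus.
# Field names here are illustrative assumptions, not the SonderEvent spec.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class Event:
    faculty: str               # e.g. "memory", "reasoning", "governance"
    kind: str                  # event type within the faculty
    payload: dict
    ts: float = field(default_factory=time.time)

class EventBus:
    """Single bus: every emit is appended to one ordered audit log."""
    def __init__(self) -> None:
        self.audit_log: list[Event] = []

    def emit(self, event: Event) -> None:
        self.audit_log.append(event)

    def export(self) -> str:
        # The audit log is the deliverable: an ordered, line-delimited record.
        return "\n".join(json.dumps(asdict(e)) for e in self.audit_log)

bus = EventBus()
bus.emit(Event("intent", "planned_action", {"action": "fetch_report"}))
bus.emit(Event("governance", "handoff_validated", {"tier": "L1", "passed": True}))
print(bus.export())
```

Because every faculty writes to the same append-only log, the record of what an agent planned, checked, and did is produced as a side effect of running, not reconstructed afterward.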
Work with us
A studio is a small thing. So is a good first conversation.
We take a small number of design partners each quarter — teams that are deploying agents into regulated workflows and need an audit trail that doesn't apologize for itself. If that's you, reach out.