Positioning
Two questions, one instrument
Question one: how do humans calibrate autonomy when coaching AI agents?
The applied literature on human-automation interaction is substantial: Cummings on supervisory control, Sarter and Woods on automation surprises, Hancock on trust calibration, Shneiderman on human-centered AI, Handa and others on reliance in language-model interaction. What this literature does not contain: large-N, sustained, ecologically valid measurement of autonomy calibration in human-AI agent coaching under controlled variation of the autonomy mix itself. Existing work consists of short-task lab studies, single-deployment observational data, or theory. ACCL holds the autonomy parameter α as a controlled experimental variable while the human-agent coaching relationship runs over hundreds of decision ticks per session, across weeks of voluntary engagement, in a cohort large enough to detect medium effect sizes.
Question two: does sustained AI assistance erode the engagement that makes coaching worth doing?
This question has no empirical answer at any scale. The closest relevant literatures point at it without addressing it directly: self-determination theory (Deci & Ryan) on competence as a basic psychological need that requires calibrated challenge; flow theory (Csikszentmihalyi) on engagement requiring challenge calibrated to skill; Calhoun's behavioral-sink work as a dramatic but methodologically limited touchstone for what happens when challenge is removed; deaths-of-despair research as a partial account of purposelessness in post-industrial communities; UBI experiments as limited evidence on engagement when economic challenge is reduced. None of these measure what ACCL is positioned to measure: whether α calibration in human-AI coaching, sustained over time, preserves or erodes the engagement and sense-of-contribution substrate that makes the coaching worth doing.
This is the load-bearing question for every AI deployment thesis that depends on human oversight remaining meaningful. If humans disengage past a certain α threshold — or disengage gradually under sustained high-α conditions even when they do not notice it themselves — then human-in-the-loop as a safety mechanism has a structural ceiling that no model improvement can lift. ACCL is the first instrument designed to measure where that ceiling is.
Why this instrument can measure both
Engagement at scale produces data quality that lab conditions and paid-compliance cohorts cannot deliver. Foldit produced peer-reviewed protein-structure work that algorithmic methods missed. EyeWire mapped neurons that automated tools could not. Galaxy Zoo generated astronomical datasets that professional researchers could not produce on their own. The pattern is consistent: gamified participation with intrinsic stakes produces data density and quality that surveys, lab tasks, and paid cohorts structurally cannot reach. ACCL operationalizes this pattern for HITL measurement specifically. Because the engagement is itself a measurement target, the instrument's own engagement behavior becomes the data that answers question two.
The Instrument
A game that is also a measurement device
ACCL is structured as a competitive game in which human players coach AI agents through timed rounds. The game format is modeled on sabong — a Filipino cultural tradition in which a human invests in and coaches a semi-autonomous competitor toward competitive outcomes. In ACCL, the competitor is an AI agent, the coaching is digital, and nothing physical competes. The cultural structure is preserved because it produces engagement; the underlying practice is not.
The core variable is human-in-the-loop intensity. At each decision tick, the AI agent selects an action based on a weighted combination of its own policy and real-time input from its human coach. The autonomy parameter α sets the mix:
a_t = (1 − α) · π(s_t) + α · c_t + ε

where π(s_t) is the agent's own policy output at the current state, c_t is the coaching input vector provided by the human at tick frequency, α ∈ [0,1] is the autonomy parameter controlling the autonomy/oversight mix (human input carries weight α, the agent's own policy weight 1 − α), and ε is an exploration term whose distribution is held constant across all conditions.
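A minimal sketch of the per-tick mix in Python. Everything named here (select_action, policy_action, coach_input, eps_scale, Gaussian exploration) is an illustrative assumption rather than ACCL's actual interface; the only committed semantics are the α weighting and the condition-invariant ε.

```python
import numpy as np

def select_action(policy_action: np.ndarray,
                  coach_input: np.ndarray,
                  alpha: float,
                  rng: np.random.Generator,
                  eps_scale: float = 0.05) -> np.ndarray:
    """Blend the agent's own policy output with the human coaching vector.

    Human input carries weight alpha; the agent's policy carries weight
    (1 - alpha). The exploration term epsilon is drawn from the same
    distribution in every condition, so leagues differ only in alpha.
    """
    assert 0.0 <= alpha <= 1.0
    epsilon = rng.normal(0.0, eps_scale, size=policy_action.shape)
    return (1.0 - alpha) * policy_action + alpha * coach_input + epsilon
```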
ACCL runs three leagues at three α values. The same human coaches the same AI across all three leagues — with counterbalancing, order randomization, and washout protocols to separate α effects from learning transfer and fatigue. This is the cleanest empirical handle on the human-AI autonomy calibration problem currently constructible outside a lab.
League 1 (α = 0.2): The agent relies primarily on its own policy; human input carries 20% weight. Measures baseline agent behavior and human response to low-oversight conditions.
League 2 (α below 0.5): A near-midpoint split, asymmetrically placed below 0.5. Serves as the primary calibration band for detecting coaching efficiency and mental model formation.
League 3 (α above 0.5): Human coaching drives more than half of action selection. Asymmetric spacing above the midpoint reveals non-linear behavior in high-oversight conditions.
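The counterbalancing requirement above has a standard construction. A sketch, assuming leagues are labeled by α band: with three conditions, no single 3×3 Latin square balances first-order carryover, so the complete set of six orders is used. The function name and labels are placeholders, and washout intervals between leagues are omitted.

```python
from itertools import permutations

# Each league appears equally often in each position and after each
# other league when all six orders are used across the cohort.
LEAGUES = ("low_alpha", "mid_alpha", "high_alpha")
ORDERS = list(permutations(LEAGUES))  # all 6 possible league orders

def assign_order(participant_id: int) -> tuple:
    """Cycle participants through the six league orders in rotation."""
    return ORDERS[participant_id % len(ORDERS)]
```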
Research Output
Five measurements the current literature lacks
Each measurement is designed to produce data that survey research, lab tasks, and paid-compliance cohorts structurally cannot generate. The game format is not ornamental — it is what makes these measurements possible.
01. Performance under adversarial α
How does collaboration quality degrade when α is miscalibrated for a given human-AI pair? Is there an optimal α per pair, and does it shift with coaching experience? This measurement maps the performance surface over the α × experience space, producing the first empirical degradation curves for autonomy miscalibration in human-AI teams.
02. Mental model convergence
As humans gain experience coaching specific AI agents, does their coaching efficiency improve? How fast, and what does the improvement curve look like across α conditions? This is the learning-curve measurement: how quickly humans build accurate internal models of agent behavior, and whether that model generalizes across autonomy regimes.
03. Coaching style emergence
Do distinct human coaching strategies emerge across the population? Are they stable across different AI agents and different α values, or do they adapt? This measurement treats the coaching input vector c as a signal, clustering it across participants and sessions to identify natural strategy classes and measure their stability.
04. Revealed preferences through market behavior
Players buy, sell, and price AI agents in an in-game marketplace. Does the same agent command different prices across the three α leagues? Marketplace pricing is direct economic evidence of how humans value AI capability under different oversight regimes. This is the measurement that survey research cannot produce: actual revealed preferences, not stated ones, under real stakes.
05. Engagement and purpose stability under sustained α variation
As α shifts across leagues and across sessions within leagues, does player engagement remain stable, or does it degrade in patterned ways? Do players report reduced sense of contribution at low α, where the agent is doing most of the work, or at high α, where the agent contributes little without constant coaching? Does engagement degradation lead, lag, or run parallel to performance degradation? This is the measurement that addresses question two directly. It draws on adapted self-determination theory scales (autonomy, competence, relatedness) administered across sessions, behavioral engagement signals (session length, return frequency, voluntary play beyond minimum, marketplace activity around the player's own agents), and within-subject comparison across α leagues. The same human coaching the same agent across three α conditions produces the cleanest available estimate of where the coaching-engagement relationship breaks down, and whether the breakdown is recoverable, persistent, or compounds over time.
Empirical Hypotheses
What ACCL predicts
Each of the five measurements has an empirical hypothesis attached. The instrument is designed so any of the hypotheses can be falsified by the data; this section names them so the falsifiability surface is explicit.
On autonomy calibration
ACCL predicts that performance under controlled α variation produces a non-monotonic curve: performance is poor at α near 0 (agent under-coached, baseline noisy), improves through a calibration band roughly between 0.3 and 0.5, and degrades at high α (human input introduces noise the agent's policy would have avoided). The optimal α is hypothesized to shift downward with coaching experience, as humans learn when their input adds value and when it does not.
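A minimal sketch of how the non-monotonicity prediction could be checked per human-AI pair, assuming mean performance has been computed at each league's α. With three α values a quadratic fit is exact, so the substantive test is whether the fitted curvature is negative and the vertex lands in the predicted 0.3-0.5 band; the function name and fallback behavior are assumptions.

```python
import numpy as np

def optimal_alpha(alphas: np.ndarray, performance: np.ndarray) -> float:
    """Fit performance = a*alpha^2 + b*alpha + c and return the vertex.

    A negative leading coefficient with an interior vertex is consistent
    with the predicted non-monotonic curve; tracking the vertex across
    experience bins tests the predicted downward shift."""
    a, b, _ = np.polyfit(alphas, performance, deg=2)
    if a >= 0:  # convex or flat: no interior maximum to report
        return float(alphas[np.argmax(performance)])
    return float(-b / (2 * a))
```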
On mental model convergence
ACCL predicts that mental model accuracy improves with experience, but the rate of improvement depends on α: high-α conditions accelerate model formation (the human is forced to engage with agent behavior more closely) and low-α conditions slow it (the human has fewer opportunities to test their predictions against agent decisions).
On coaching style emergence
ACCL predicts that distinct coaching styles cluster into a small number of stable strategies (likely three to five) and that stylistic differences are stable across α conditions but produce different performance profiles. Specifically, "interventionist" styles are predicted to underperform at low α and outperform at high α; "delegating" styles are predicted to do the opposite.
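A sketch of how the three-to-five-cluster prediction could be tested, assuming each row of X is a feature summary of one participant's coaching stream c (intervention rate, input magnitude, and timing regularity are plausible but assumed features). The k-means sweep with silhouette selection is a generic technique, not ACCL's committed pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_styles(X: np.ndarray, k_range=range(2, 8), seed: int = 0):
    """Return (best_k, labels) chosen by silhouette score.

    The hypothesis predicts best_k in the 3-5 range, with cluster
    membership stable when the same participants are re-clustered
    within each alpha league separately."""
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        score = silhouette_score(X, km.labels_)
        if score > best_score:
            best_k, best_score, best_labels = k, score, km.labels_
    return best_k, best_labels
```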
On marketplace revealed preferences
ACCL predicts that the same agent commands measurably different prices across the three α leagues, with the highest valuations in the calibration band where coaching efficiency is highest. Marketplace pricing under low-α conditions is predicted to track agent capability directly; pricing under high-α conditions is predicted to track perceived coachability.
On engagement and purpose stability
ACCL predicts that engagement remains stable under α variation in the short term, but degrades over sustained sessions specifically at the high end of the α range — not because high α is intrinsically disengaging but because sustained heavy coaching produces fatigue without proportional sense-of-contribution gain. The prediction that matters most for the broader thesis: at moderate α (the calibration band), engagement is hypothesized to remain stable or improve over time, supporting the proposition that calibrated human-AI coaching is sustainable work.
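One plausible formalization of the stability prediction is a random-intercept growth model: engagement regressed on league, time, and their interaction, with players as the grouping factor. The hypothesis then reads as a negative league × time coefficient for the high-α league and a flat or positive one in the calibration band. A sketch under assumed column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_engagement_model(df: pd.DataFrame):
    """Random-intercept model over a long-format frame with one row per
    player-session: columns player, week, alpha_league (categorical),
    and an engagement composite. All column names are illustrative."""
    model = smf.mixedlm("engagement ~ C(alpha_league) * week",
                        data=df, groups=df["player"])
    return model.fit()
```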
Participant Population
Why the Philippines, why BPO, why now
The macroeconomic stakes are sovereign-level, not sectoral.
The Philippine BPO sector and overseas worker remittances together account for roughly 18% of GDP and underwrite the consumption economy that supports much of the rest. The country hosts the world's largest concentration of the workforce most directly exposed to AI substitution in knowledge work. AI's effect on BPO employment is not a sectoral question for the Philippines — it is a question of macroeconomic continuity. Philippine fiscal policy through 2035 depends materially on whether the BPO transition produces managed adaptation or rapid displacement. No comparable research site exists for empirical work on AI's labor-economic effects: the exposure is concentrated, the workforce is measurable, the institutional research infrastructure is in place, and the policy stakes are immediate rather than abstract.
The Filipino BPO population is the population the second question is about.
Question two, whether sustained AI assistance erodes the engagement that makes human coaching worth doing, is not an abstract question for this cohort. It is the operational question their next decade of work will answer either way. Roughly 1.5 million Filipino BPO workers are currently transitioning from executing tasks to managing agents that execute tasks. Whether that transition produces meaningful work or managed disengagement is the load-bearing question for Philippine economic policy through 2035. ACCL measures this question in the population it is about, using a format that population already understands, before the answer is locked in by deployment defaults nobody has measured.
Sabong is the culturally native format for this population
Any gamified research instrument has to choose a game. Choosing one with existing cultural continuity in the target population lowers the engagement threshold to near zero. Filipino participants do not need to learn what the game is — they only need to learn the platform. This is not a claim about cognitive transfer from the cultural format to LLM coaching. It is a simpler claim about engagement quality.
The Philippines has a specific economic stake in the skill the game measures
BPO work is transitioning from executing tasks to managing agents that execute tasks. The skill that transition requires is exactly what α measures: how much to let the agent decide, how much to intervene, when to override. A research instrument that studies this skill in a Filipino population, using a format that population already understands, is measuring a real future — not an abstract one.
Technical Architecture
A layered substrate that attests its own observations
ACCL is built on a layered framework modeled on the OSI networking reference design. Each layer provides defined services to the layer above through clean interfaces. The architecture exists for a specific reason: empirical research on AI systems currently relies on researcher reputation and journal review for trust. As the deployments studied become higher-stakes, that substrate is insufficient.
Cryptographic attestation at source and deterministic replay shift the verification burden from reputation to mechanism. ACCL's research outputs are verifiable by construction, not by attribution.
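A minimal sketch of what attestation-at-source can look like at the record level, assuming each decision tick is serialized deterministically, keyed-hashed, and chained to the previous record's digest. Key management, the replay harness, and ACCL's actual record schema are out of scope, and every name here is illustrative.

```python
import hashlib
import hmac
import json

def attest_tick(record: dict, prev_digest: bytes, key: bytes) -> bytes:
    """Return a digest binding this record to the entire preceding chain.

    Deterministic serialization (sorted keys) means an independent replay
    of the same tick stream reproduces the same digest chain, so a
    verifier can check the data by mechanism rather than by trusting the
    researcher who collected it. Use an empty prev_digest at genesis."""
    payload = json.dumps(record, sort_keys=True).encode() + prev_digest
    return hmac.new(key, payload, hashlib.sha256).digest()
```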
This framework is one instance of a broader layered architecture for trust infrastructure that operates across physical infrastructure, financial instruments, carbon markets, and regulated services. ACCL is uniquely positioned as the only application that exercises the full framework in a single operational context at sufficient data density to expose inter-layer interaction failures — under conditions where exposing them has no physical cost.
Two US provisional patents filed April 2026 (patent agent: Steve Shattil) document the architectural substrate at a formal level.
Study Design
Methodology and fellowship deliverables
Six-month research timeline
Researcher Background
What I bring and what I'm looking for
Control systems engineering, applied to safety-critical physical environments. Core formation at Edwards Air Force Base (advanced tracking systems), Groom Lake (classified data acquisition), and NASA Ames (Final Approach Spacing Tool — a neural network deployment into live terminal-area air traffic control). The problem across all of those environments was the same: how do you bound the behavior of a system that learns from data, in an operational context where being wrong has physical consequences?
Co-founder of a two-person company that received NASA SBIR Phase 2 and DOD/DARPA SBIR Phase 2 awards in 1999, with full engineering and operational responsibility for reducing inventive mathematics to tested hardware.
The past decade has been self-funded development of the architectural thesis this work instantiates: that AI safety for physical-world systems is better approached as a control engineering problem than as a preference-learning problem, and that the layered-architecture discipline that control engineering developed over seventy years transfers to AI systems when those systems are properly instrumented. Two US provisional patents filed April 2026 capture that substrate. ACCL is one application of it.
What I am looking for: institutional partners willing to support empirical research at the scale this question requires. The Anthropic Economic Futures program's combination of research grants, longitudinal data infrastructure, and policy symposia is the closest existing match for the institutional support this work needs. The exchange is bidirectional. ACCL produces empirical evidence on questions the program's published framing identifies as central — work, meaning, and what the AI-enabled economy requires of human contribution — and the data infrastructure pillar gains a longitudinal research site in the country where those questions are most economically consequential. Parallel institutional anchoring through the Philippine Department of Science and Technology's Balik Scientist Program (under the Council for Industry, Energy, and Emerging Technology Research and Development) provides the domestic counterpart structure that makes sustained operation possible.
Research alignment
ACCL extends and complements an existing body of Anthropic-affiliated empirical work on human-AI interaction and the economic effects of AI deployment.
Anthropic Economic Index. ACCL's longitudinal data on autonomy calibration, engagement stability, and revealed-preference dynamics in a Filipino BPO-adjacent cohort feeds directly into the Index's geographic-and-enterprise reporting. Philippine BPO is the highest-signal site for this measurement currently identifiable; no other comparable economy combines the workforce concentration, English fluency, institutional research infrastructure, and macroeconomic exposure necessary for the Index's longitudinal questions to land sharply.
Skill formation under AI assistance (Tamkin, Shen, and collaborators). The learning-curve findings produced under controlled α variation extend the existing skill-formation literature into the human-AI agent coaching context, where the cognitive task is qualitatively different from coding with AI assistance.
Reliance and trust calibration in language model interaction. ACCL's mental model convergence and coaching style measurements operationalize the reliance question in a sustained-engagement setting, producing data on how trust calibration evolves over hundreds of decision ticks rather than across short laboratory tasks.
Societal impacts of AI deployment in labor markets. The BPO transition is not a generic labor-market question. It is the most empirically tractable instance of the broader question the program's framing names directly: what new capabilities emerge when humans work alongside AI systems, and which human skills remain valuable as AI advances.
Continuity beyond the grant cycle
ACCL continues after the initial six-month grant cycle through institutional anchors at De La Salle University, Ateneo de Manila University, and University of the Philippines Diliman, supplemented by Balik Scientist Program designation under DOST-PCIEERD and parallel funding pursued through NSF Future of Work at the Human-Technology Frontier and other multi-year vehicles. Six months is sufficient to complete the MVP, run the initial cohort, produce a first paper on the α findings, release the instrument under permissive licensing for replication, and establish the longitudinal data infrastructure for sustained operation. The substantive research program — understanding how human-AI collaboration configurations reshape value creation in Philippine BPO and the labor markets that depend on it — operates on a multi-year horizon. The first grant cycle establishes the foundation; the work it enables is what matters.
References available on request: Steve Shattil (NASA SBIR co-executor and patent agent for the filed provisional patents). Additional references from Philippine academic host institutions (DLSU, Ateneo, UP) and DOST-PCIEERD available once host institution arrangements are formalized.