A paper submitted to arXiv on March 29, 2026, by Ari Ercole applies game-theoretic reasoning to a core question about healthcare AI deployment: can these systems actually change how healthcare operates at a systemic level? The paper’s answer is largely no — unless the AI also restructures the incentives governing clinical and administrative behaviour. The full paper, “Incentives, Equilibria, and the Limits of Healthcare AI: A Game-Theoretic Perspective,” is available on arXiv.
- Most healthcare AI targets task efficiency or monitoring but leaves the incentive structures driving system behaviour untouched.
- Ercole proposes three AI categories: effort-reduction AI, observability-increasing AI, and mechanism-level incentive-change AI.
- A stylised inpatient capacity signalling model finds that only the third category can shift a healthcare system out of a stable but inefficient equilibrium.
- The analysis has direct implications for healthcare executives and procurement teams evaluating what outcomes are realistically achievable from AI vendors.
What Happened
Ari Ercole submitted the paper to arXiv on March 29, 2026, using minimal game-theoretic reasoning to evaluate whether AI deployments can produce durable changes in healthcare system behaviour. The paper argues that AI sold on capacity and productivity grounds may not deliver system-level outcomes if the incentive structures producing current behaviour remain in place. Ercole frames the work as a conceptual framework for healthcare decision-makers rather than an empirical study of specific deployments.
Why It Matters
AI is widely positioned as a technological response to chronic capacity and productivity pressures in healthcare. But deployment carries significant costs, including the ongoing expense of monitoring AI systems in production, and whether the optimism around AI as a near-complete solution is well-founded remains, as Ercole puts it, “unclear.” The paper directly addresses the gap between vendor claims of productivity gains and the structural conditions required for those gains to translate into system-level change. Prior research on principal-agent problems in healthcare has documented how misaligned incentives distort clinical behaviour; Ercole’s contribution is to apply that reasoning explicitly to AI intervention design.
Technical Details
The paper proposes three archetypal AI categories. Effort-reduction AI lowers the cost of performing existing tasks — for example, automating documentation or administrative workflows. Observability-increasing AI makes previously hidden actions or outcomes visible to principals such as administrators or payers. Mechanism-level incentive-change AI directly alters the payoff structure of the system, for instance by changing how performance is measured or how risk is distributed between parties.
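One way to make the distinction concrete is to treat each archetype as a different operation on a game’s payoff function. The sketch below is an illustrative formalisation, not notation from the paper; the class names and the `apply` interface are invented for this article.

```python
from dataclasses import dataclass
from typing import Callable

Action = str
# An agent's payoff for playing action `a` while the counterpart plays `b`.
Payoff = Callable[[Action, Action], float]

@dataclass
class EffortReductionAI:
    """Makes existing actions cheaper; the incentive mechanism is untouched."""
    cost_saving: Callable[[Action], float]

    def apply(self, u: Payoff) -> Payoff:
        return lambda a, b: u(a, b) + self.cost_saving(a)

@dataclass
class ObservabilityAI:
    """Reveals hidden actions to a principal; unless some contract
    conditions on the new signal, no action is repriced."""

    def apply(self, u: Payoff) -> Payoff:
        return u  # visibility alone leaves the payoff function unchanged

@dataclass
class MechanismChangeAI:
    """Rewrites how performance is scored or how risk is shared."""
    new_payoff: Payoff

    def apply(self, u: Payoff) -> Payoff:
        return self.new_payoff
```

The design point the typing makes visible: only `MechanismChangeAI` hands agents a genuinely different payoff function to best-respond to.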
Ercole tests these categories against a stylised inpatient capacity signalling example: a model in which hospitals signal capacity to allocate resources under conditions of asymmetric information. In this setting, game-theoretic analysis finds that effort-reduction and observability-increasing AI leave the equilibrium outcome unchanged, because agents adapt their strategic behaviour to preserve existing payoffs. The paper states that “task optimisation alone is unlikely to change system outcomes when incentives are unchanged.” Only mechanism-level interventions that redistribute risk can move the system to a different stable equilibrium.
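The abstract does not spell out the model’s payoffs, so the Python toy below is only a hypothetical reconstruction of that structural logic, not the paper’s actual model. Two hospitals choose whether to report capacity strain truthfully or to inflate it; a best-response check then shows why cheaper signalling leaves the inefficient equilibrium in place while a payoff-altering audit does not. All payoff numbers, function names, and the audit mechanism are invented for illustration.

```python
from itertools import product

ACTIONS = ("truthful", "inflate")

def pure_nash(payoff):
    """Pure-strategy Nash equilibria of a symmetric two-player game,
    where payoff(a, b) is a player's payoff for action a against b."""
    eqs = []
    for a, b in product(ACTIONS, repeat=2):
        a_best = all(payoff(a, b) >= payoff(alt, b) for alt in ACTIONS)
        b_best = all(payoff(b, a) >= payoff(alt, a) for alt in ACTIONS)
        if a_best and b_best:
            eqs.append((a, b))
    return eqs

def baseline(a, b, effort_cost=2.0):
    """Two hospitals compete for scarce beds; inflating a capacity signal
    wins beds from a truthful counterpart but costs reporting effort."""
    gain = {
        ("inflate", "truthful"): 10.0,   # captured the contested beds
        ("truthful", "inflate"): 0.0,    # lost them
        ("truthful", "truthful"): 6.0,   # efficient allocation
        ("inflate", "inflate"): 4.0,     # wasteful signalling arms race
    }
    return gain[(a, b)] - (effort_cost if a == "inflate" else 0.0)

# Baseline: mutual inflation is the unique equilibrium, stable but
# inefficient, since (truthful, truthful) would pay 6 each instead of 2.
print(pure_nash(baseline))  # [('inflate', 'inflate')]

# Effort-reduction AI: signalling gets cheaper for everyone, but inflating
# remains the best response, so the equilibrium does not move. Observability
# alone behaves the same way in this toy: making inflation visible without
# repricing it changes no payoff, hence no equilibrium.
print(pure_nash(lambda a, b: baseline(a, b, effort_cost=0.5)))

# Mechanism-level change: an audit penalty redistributes risk onto
# inflators, and truth-telling becomes the new stable equilibrium.
def audited(a, b, penalty=9.0):
    return baseline(a, b) - (penalty if a == "inflate" else 0.0)

print(pure_nash(audited))  # [('truthful', 'truthful')]
```

In this toy, mutual inflation is stable but inefficient, which mirrors the kind of equilibrium the paper argues task-level AI cannot dislodge: cheaper or more visible signalling never changes which action is a best response here; only repricing the action does.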
The model is explicitly stylised: Ercole describes the approach as using “minimal game-theoretic reasoning” to isolate structural logic rather than to generate precise quantitative predictions. No empirical data fitting is performed; the analysis is purely theoretical. The abstract reports no aggregate statistics, and its claims should be interpreted accordingly.
Who’s Affected
The analysis is directed primarily at healthcare leadership and procurement teams. AI tools that automate clinical documentation, flag deteriorating patients, or optimise bed scheduling fall into the effort-reduction or observability-increasing categories, which the paper finds insufficient for changing system-level outcomes when incentives remain fixed. The model treats incentive structures as exogenous, set by payers, hospital administrators, and policymakers; these are the actors whose decisions determine whether mechanism-level change is achievable in practice.
AI vendors selling into healthcare are implicitly affected: the framework gives buyers a basis for questioning productivity claims that do not address incentive design. The paper’s implications extend to any procurement process that evaluates AI tools solely on task-level performance metrics.
What’s Next
Because the model is theoretical and stylised, its categorical distinctions — effort reduction versus observability versus mechanism-level change — have not been validated against empirical data from real healthcare systems. Further work would be needed to test whether real deployments map cleanly onto these categories and whether the equilibrium predictions hold in systems with more complex incentive structures. Ercole identifies the paper’s primary contribution as providing a conceptual lens for healthcare leaders to interrogate vendor claims — specifically whether a proposed AI intervention reshapes risk allocation or merely reduces the cost of existing behaviour without disturbing the underlying equilibrium.