Every plan for managing AI assumes that governments will be functional enough to do the managing. Alignment research assumes regulators will implement findings. Compute governance assumes agencies with technical capacity. International coordination assumes stable domestic politics. Benefit distribution assumes working tax systems and safety nets.
I've been trying to figure out how much weight that assumption can actually bear, and I think the answer is: less than anyone in the AI safety space is accounting for. The problem isn't just that government is slow. It's that AI may be structurally dismantling the foundations that government runs on.
Start with the most concrete thing. Modern states fund themselves by taxing human labor. Income tax, payroll tax, Social Security contributions, Medicare withholdings. The entire fiscal architecture of government is built on the assumption that humans work and earn wages.
AI replaces human labor. That's the whole point of it. Every task AI automates is a task that used to generate income tax, payroll tax, and the consumption spending behind sales tax. As AI adoption scales, the tax base contracts. Not by a little. A fiscal scenario from Citrini models federal receipts falling 12% below baseline as labor's share of GDP drops from 56% to 46%.
And the spending side goes the opposite direction. Displaced workers need safety nets, retraining, healthcare, unemployment insurance. Government needs more money to manage the transition at the exact moment it has less money coming in. This isn't a policy choice anyone makes. It's a structural consequence of the technology working as intended.
The fiscal problem is worth dwelling on because it doesn't require any assumptions about politics, trust, or institutional failure. If AI displaces labor at scale, government revenue falls mechanically. No one has to make a mistake for this to happen. No one has to lose an election.
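The mechanism is simple enough to write down. Here's a toy calculation; the GDP figure and the 25% blended wage-tax rate are invented assumptions for illustration (only the labor-share endpoints echo the scenario above), and it covers only taxes levied on wages, which is why it overshoots the 12% total-receipts figure. A sketch of the mechanism, not a forecast:

```python
# Toy model: how a falling labor share mechanically cuts receipts that
# are levied on wages. All numbers are stylized assumptions, not
# estimates from the Citrini scenario.

GDP = 28e12                  # stylized US GDP, dollars (assumption)
labor_share_before = 0.56    # labor share of GDP before displacement
labor_share_after = 0.46     # labor share after AI displacement
effective_wage_tax = 0.25    # assumed blended income + payroll rate on wages

wages_before = GDP * labor_share_before
wages_after = GDP * labor_share_after

receipts_before = wages_before * effective_wage_tax
receipts_after = wages_after * effective_wage_tax

drop = 1 - receipts_after / receipts_before
print(f"Wage-based receipts fall {drop:.0%}")  # ~18% with these inputs
```

No behavioral response, no policy error, no assumption about politics. The revenue falls because the thing being taxed shrinks.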
And it gets worse during recessions. Jaimovich and Siu, writing in the Review of Economics and Statistics, found that 92% of automation-driven job losses concentrate in the first 12 months of recessions. Companies that had been gradually adopting AI accelerate under economic pressure, and those jobs don't come back when the economy recovers. Gita Gopinath, the IMF's First Deputy Managing Director, described AI as a "crisis amplifier" that could "convert an otherwise ordinary downturn into a much deeper economic crisis." The next recession won't just cut government revenue through the normal cyclical channel. It'll permanently eliminate a chunk of the tax base.
Even if the money problem were solved, government faces structural barriers that money alone can't fix.
AI is general-purpose. Government is not. The FDA regulates drugs. The SEC regulates securities. The FTC regulates competition. These agencies exist because the things they regulate are specific and bounded. AI is in healthcare, finance, law, defense, education, transportation, and media simultaneously. No single agency owns it. Every agency is affected by it. The regulatory structure is built for a world where technologies fit into categories, and AI doesn't.
The information asymmetry is total. OpenAI, Google, Anthropic, and Meta know more about what their systems can do than any government body will ever know. They have the researchers, the compute, the internal benchmarks, the deployment data. Government regulators are asking companies to self-report on capabilities that the regulators lack the tools to independently verify. And the companies can use AI itself to lobby more effectively, draft regulatory comments, shape public narrative, and optimize their engagement with the political process faster than regulators can read the submissions.
Democratic timescales and AI timescales are incompatible. This isn't just "government is slow." Democratic legitimacy requires deliberation. Deliberation requires time. Legislators need to understand what they're voting on. Public comment periods exist for a reason. Oversight hearings take months to organize. None of this is dysfunction; it's how democratic governance is supposed to work. But AI capabilities change on a cycle of weeks to months. By the time a regulatory framework is debated, drafted, commented on, revised, passed, and implemented, the technology has moved on. The EU AI Act took three years. GPT-3 to GPT-4 took two.
Government can't adopt AI to close the gap. Private companies adopt new AI tools the week they're released. Government has procurement cycles, security requirements, legacy system integration, union considerations, and public accountability constraints that make rapid adoption nearly impossible. The result is that the capability gap between the regulator and the regulated grows over time. Government gets relatively less competent as the private sector gets relatively more capable. This is a gap that widens by default.
Everything above describes what's happening right now, with narrow AI. The structural erosion is already underway. Revenue pressure, speed mismatches, information gaps, adoption asymmetry. These are current problems.
But the AI safety community isn't primarily worried about narrow AI. They're worried about transformative AI. Systems that can do most or all cognitive work that humans do. And this is where the governance capacity problem goes from concerning to something I don't have a good word for.
If TAI arrives while governance is already weakened by everything described above, you have the most consequential technology in human history entering a world where the institutions meant to oversee it have been progressively hollowed out by its precursors. Companies and individuals with access to superintelligent systems. Government agencies that can't update their procurement software.
Every structural disadvantage gets multiplied. The information asymmetry goes from bad to permanent: you cannot regulate a system that is cognitively superior to every person in your agency. The speed mismatch becomes infinite: a superintelligent system operates on timescales that make legislative deliberation irrelevant. The adoption gap becomes a capability gap of a kind we haven't seen before, where private actors can do things that governments can't even understand, let alone oversee.
The revenue problem is still there underneath all of this. If TAI displaces most human cognitive labor, the tax base doesn't just shrink. It functionally disappears in its current form. And restructuring taxation (to tax capital, compute, or AI output instead of labor) requires exactly the kind of governance capacity that's been degrading.
Every major intervention for managing TAI depends on governance capacity that is currently being eroded by the AI that precedes it.
Kulveit, Douglas, and colleagues published a position paper at ICML 2025 arguing that humanity faces existential risk from what they call gradual disempowerment. The idea: you don't need a rogue superintelligence scenario. Incremental AI improvements progressively erode human influence over critical systems through ordinary competitive displacement. Each step is individually rational. Organizations that adopt AI outcompete those that don't. Human oversight roles shrink to rubber-stamping. The process is self-reinforcing, and nobody is in a position to stop it because each individual decision makes local sense.
80,000 Hours treats this as a distinct problem profile alongside misalignment. What makes it hard is that it requires no adversarial AI behavior. No scheming. No deception. Just economics.
Governance capacity degradation is one of the central mechanisms through which gradual disempowerment happens. The sequence is: AI outcompetes human labor, government revenue falls, institutional capacity degrades, the weakened institutions can't coordinate a response to AI deployment, so AI deployment accelerates without meaningful oversight, which displaces more human labor, which further degrades government revenue. At each turn, the humans nominally in charge have less ability to change the trajectory. By the time TAI arrives, the institutions that could theoretically provide oversight may be too degraded to function in that role.
Some existing work is adjacent to this problem. Kulveit's paper exists. 80,000 Hours has the problem profile. There's a PNAS study from January 2026 showing that people who perceive AI as labor-replacing become less satisfied with democracy and less politically engaged, even before actual displacement. Gopinath gave the crisis-amplifier speech. Brookings has done work linking automation exposure to populist support.
But as far as I can tell, nobody is working on the specific question of governance capacity as a binding constraint on AI risk management.
The existing governance monitoring frameworks (Fragile States Index, V-Dem, OECD fragility indicators) measure levels, not velocity. They'd tell you a country is fragile. They wouldn't tell you it's becoming fragile fast, or why. None of them track technology-governance interactions, revenue-base erosion from automation, or the widening capability gap between public and private sectors.
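The level-versus-velocity distinction is easy to make precise. Here's a minimal sketch with invented index values: two hypothetical countries sit at the same fragility level today, but a simple velocity metric (points per year) separates them immediately.

```python
# Why velocity matters: two hypothetical countries with the same
# fragility score today but very different trajectories. All index
# values are invented for illustration (higher = more fragile).

fragility = {
    "country_A": [48, 49, 49, 50, 50],  # stable at a moderate level
    "country_B": [38, 41, 44, 47, 50],  # same level, deteriorating fast
}

for country, series in fragility.items():
    level = series[-1]
    velocity = (series[-1] - series[0]) / (len(series) - 1)  # pts/year
    print(f"{country}: level={level}, velocity={velocity:+.1f}/yr")
```

A level-based index flags both countries identically at 50. The velocity metric flags country_B as the one to worry about, and that's the one AI-driven erosion would produce.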
I think the research question that matters most is: under what conditions does AI-driven disruption degrade governance capacity enough to prevent coordinated AI oversight, and what interventions could preserve that capacity before the window closes?
That question connects public finance, political science, AI governance, and labor economics in ways I haven't seen combined. The historical evidence suggests it matters. Post-Soviet Russia's transition failed not because the reforms were wrong, but because the institutions needed to implement any reforms had disintegrated. State capacity was the binding constraint. The countries that managed technological transitions well (the Nordics during deindustrialization, for instance) were the ones with institutional capacity and fiscal resilience already in place before the shock arrived.
There's a narrow version of good news in the research. Brookings found that strong social safety nets eliminate the link between automation exposure and populist support. Countries with high institutional trust, universal welfare, and competitive public sector compensation are more resilient. Recovery from institutional decay can also be nonlinear: transparency above a critical threshold can trigger accelerating reform.
But all of those interventions require governance capacity to implement. If the capacity degrades past some point, the institutions can't implement the things that would save them. That's the trap, and I don't think anyone is modeling where that point is.
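You can see the shape of the trap in a minimal model. The threshold and rates below are assumed values, not estimates; the point is the qualitative behavior. Above the threshold, reform outpaces erosion and capacity recovers. Below it, reforms can't be executed and erosion compounds unopposed.

```python
# A minimal sketch of the trap: capacity recovers only if it is still
# above a critical threshold, because implementing reforms itself
# requires capacity. Threshold and rates are assumed values.

THRESHOLD = 0.5   # below this, institutions can't execute reforms
EROSION = 0.04    # annual capacity loss from AI-driven pressure
REFORM = 0.07     # annual capacity gain from reform, if executable

def trajectory(capacity, years=15):
    path = [capacity]
    for _ in range(years):
        gain = REFORM if capacity > THRESHOLD else 0.0
        capacity = max(0.0, min(1.0, capacity - EROSION + gain))
        path.append(round(capacity, 2))
    return path

print(trajectory(0.60))  # recovers: reform outpaces erosion
print(trajectory(0.48))  # trapped: starts 0.02 lower, decays to zero
```

Two starting points separated by two hundredths of a point end up in opposite worlds. Finding where that threshold actually sits, for real institutions, is the unmodeled question.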
The AI safety community is doing important work on alignment, interpretability, and capability monitoring. But all of that work implicitly assumes that when results are ready, there will be a functioning government capable of acting on them. That assumption deserves its own research program, because right now nobody is checking whether it holds.