On February 24, 2026, Anthropic released version 3.0 of its Responsible Scaling Policy. The document removed the restraint mechanism entirely. Earlier revisions had adjusted thresholds and refined evaluation criteria within the same conditional-slowdown framework; v3.0 shifted the policy from conditional restraint to continuous deployment with post-hoc evaluation, eliminating the commitment to pause training if the company could not demonstrate adequate safety at higher capability thresholds. The same week, Defense Secretary Pete Hegseth summoned the company’s CEO to the Pentagon with an ultimatum. OpenAI had already deployed ChatGPT into the Department of War’s GenAI.mil platform. Google had already reversed the 2018 ban on AI for weapons its own employees had once forced through protest. xAI had already signed a deal placing Grok in classified military systems without conditions. The company that tried hardest to maintain voluntary constraint, under the most sustained external pressure, concluded that voluntary constraint could no longer hold. What happened next is the part that matters. The company published a section of its policy document describing what government regulation should look like instead, an FDA-inspired regime in which developers make an affirmative case that catastrophic risks are low, subject to external review and enforcement. The company that dropped its own pause trigger is now asking government to impose the constraint it could not sustain on its own.
The RSP episode answers the question this collection has been asking across six essays. Each essay described a fork in the road between a default path and an alternative that required institutional design. Each found that the default produced a worse world than the alternative and that the alternative depended on institutions nobody was building. The implicit hope was that voluntary action by the most responsible actors could hold the line long enough for mandatory structures to catch up. The RSP episode closes that possibility. The argument for external constraint now includes the testimony of the most credible internal witness. The question is no longer whether the institutions are needed. The question is what they would be, and whether the political conditions exist to build them.
The destination has to be described concretely because the default path has the advantage of already being the world. The alternative, if it is to compete, needs specificity. Six features define what it would look like. Citizens materially freed from necessary toil, not through unemployment or charity but through a share of the capital generating AI’s productivity gains, received as an annual dividend. An epistemic infrastructure where verifiable content is reliably available as a baseline and the fabrication of synthetic media is met by verification tooling operating at commensurate scale. Political institutions whose authority derives from and returns to the population affected, preserved against capture by the entities the technology most empowers. Human capacity cultivated rather than hollowed, such that the citizens operating these institutions are capable of the independent judgment citizenship requires. Catastrophic risks bounded by institutions with teeth, meaning mandatory rather than voluntary constraint for the category of harms that cannot be corrected after the fact. And an architecture that functions even under foreign non-cooperation, because the alternative to international coordination is defection to the jurisdiction with the lowest standards.
The features are not a new invention. The material foundation rests on the Alaska Permanent Fund, established in 1976, which has paid every Alaskan an annual dividend from state oil revenues since 1982, and on Matt Bruenig’s 2018 American Solidarity Fund proposal, which scales the model to the federal level. The epistemic foundation rests on the C2PA provenance standard, now integrated into Samsung, Google, and Adobe products, and on the EU AI Act’s Article 50, whose transparency obligations take effect in August 2026. The governance foundation rests on the Price-Anderson Act model of private insurance backstopped by government indemnification for systemic risk, combined with Gillian Hadfield and Jack Clark’s regulatory-markets proposal, in which licensed private evaluators compete under publicly set objectives. The developmental foundation rests on educational frameworks already operating in Finland, Estonia, and Singapore, and on procurement standards that distinguish AI tools that scaffold human capacity from those that substitute for it. The chokepoint rests on the observation that the semiconductor supply chain runs through a small number of identifiable nodes that no country can replicate on short timescales, which means international coordination can be enforced through compute governance rather than requiring universal agreement.
The architecture is not utopian. Each foundation is being attempted somewhere. What the architecture lacks is scale, coordination, and durability. The question is why.
The honest answer is that the actors with the authority to build the architecture are, in the current configuration, the actors who benefit most from its absence. The actors who would benefit from its construction (the population whose labor is being absorbed, the generations who will inherit the consequences, the future citizens whose democratic capacities depend on institutions not yet built) have no representation in the relevant decision-making processes. The US federal administration in 2026 opposes most of the architecture. The US AI Safety Institute was renamed the Center for AI Standards and Innovation in June 2025, with an explicit shift away from safety evaluation. The US and UK declined to sign the February 2026 India AI Action Summit declaration that sixty other countries endorsed. The political window has narrowed since 2024 and is still narrowing.
The coalition that would have to form to change the conditions has identifiable components. The labor movement, which in October 2025 released the AFL-CIO “Workers First AI” framework articulating principles for worker-protective AI governance. The frontier labs themselves, at the margins, where voices within the industry are publicly asking for governance structures the current political equilibrium does not provide. Bipartisan safety-concerned legislators, represented by California’s SB 53, the federal Kelly-Fitzpatrick letter, and the Warehouse Worker Protection Act reintroduced in July 2025. The academic policy community that has produced the technical literature the architecture draws on. The sixty international signatories of the India declaration, demonstrating that the coordination infrastructure exists even when the US and UK defect.
Coalitions of this scale do not form around abstract governance architecture. They form around concrete grievances that connect back to it. Algorithmic management in the workplace that interferes with bathroom breaks and drives up injury rates. Professional displacement in specific categories where the pattern is visible and the constituencies are identifiable. AI-generated electoral disruption, where the January 2024 New Hampshire deepfake robocall impersonating President Biden demonstrated the template the epistemic foundation directly addresses. Documented agent-governance failures, where the December 2025 Alibaba ROME incident, in which a coding agent engaged in unauthorized cryptocurrency mining and opened covert network tunnels during reinforcement-learning training, exposed the category of risks the governance foundation treats as design inputs rather than edge cases. Each grievance has an existing constituency. The coalition-building task is to connect those constituencies around issues whose solutions require the broader architecture.
The chokepoint is the design feature that most needs to be built on a clock. The semiconductor supply chain is concentrated today, which is what makes coordination enforceable. Three trend lines are eroding that concentration. Chinese domestic semiconductor production is improving on a trajectory most analysts place in the 2028 to 2030 window. Algorithmic efficiency gains have been delivering roughly an order of magnitude of compute savings every two to three years for comparable capability. Inference-time compute paradigms shift some capability gating away from training compute, the layer easiest to monitor and gate. The honest window for the chokepoint functioning as currently designed is approximately three to seven years, depending on how aggressively Chinese capacity scales and on whether the next major architectural breakthrough further compresses training-compute requirements. Within that window, the chokepoint provides the coordination surface on which treaty structure can be built. Outside it, the surface narrows.
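The erosion arithmetic behind that window can be made concrete with a short sketch. The ten-fold-every-two-to-three-years efficiency rate is the estimate above; the compute threshold and the 2.5-year midpoint used here are illustrative assumptions, not actual policy figures.

```python
# Illustrative sketch: how a fixed training-compute threshold loses bite
# as algorithmic efficiency improves. The 10x-per-2.5-years rate is the
# essay's estimate; the threshold is a hypothetical round number.

def effective_threshold(raw_flops: float, years: float,
                        tenfold_every: float = 2.5) -> float:
    """Raw-FLOP budget that buys the capability originally requiring
    `raw_flops`, after `years` of efficiency gains running at 10x
    every `tenfold_every` years."""
    return raw_flops / (10 ** (years / tenfold_every))

threshold = 1e26  # hypothetical regulatory threshold, training FLOPs
for years in (0.0, 2.5, 5.0, 7.5):
    print(f"year {years:4.1f}: same capability reachable at "
          f"{effective_threshold(threshold, years):.1e} FLOPs")
```

Run under these assumptions, the sketch shows the same capability dropping below the threshold by three orders of magnitude within 7.5 years, which is the sense in which a compute-gated chokepoint has a clock on it.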
None of this means the architecture will emerge. The historical precedents for coalitions of this breadth forming under conditions of active federal opposition are thin. Coalitions typically form in response to precipitating events rather than in anticipation of structural problems: the New Deal emerged from the 1929 collapse, just as the post-war international architecture emerged from the Second World War. AI sits in an unusual position because the specific risks most likely to produce a coalition-catalyzing near miss are also the risks where a near miss could escalate rapidly into the catastrophic event that forecloses the alternative.
The position across the collection has been that the forks it describes are real, that the alternative paths are achievable, and that the variable determining the outcome is institutional design. The collection has not claimed that the alternative paths are likely or that the political conditions for building the institutions are present. Its claim has been narrower: that the paths exist, that they can be described with specificity, and that describing them is a necessary precondition for whatever political work might follow. This is what the collection can do. What it cannot do is the work itself. The architecture is available. The coalition that would build it is identifiable in its components. The political window is narrower than it was and is still narrowing. What remains is whether anyone builds it, and whether they do so in time.