A 16,000-word investigation just dropped in The New Yorker, and buried inside is a detail so unhinged it reads like the backstory for a Call of Duty villain: OpenAI reportedly discussed positioning itself as a kind of nuclear weapon that world powers, including China and Russia, would need to compete to invest in, or risk being left behind.
The plan that supposedly should not have existed
Here's the lowdown. According to The New Yorker, after former OpenAI policy adviser Page Hedley presented strategies to prevent a global AI arms race, OpenAI president and major Trump donor Greg Brockman reportedly floated the opposite idea. The concept, which insiders apparently called the "countries plan," involved OpenAI enriching itself by starting a bidding war among world powers for access to its technology.
Jack Clark, who served as OpenAI's policy director at the time and now leads policy at competitor Anthropic, described the mechanics plainly: it was "a prisoner's dilemma, where all of the nations need to give us funding," which "implicitly makes not giving us funding kind of dangerous."
That framing should sound familiar to anyone who has ever played through a Call of Duty campaign. The shadowy private organization with leverage over every superpower simultaneously is practically a genre staple at this point.
OpenAI disputes this characterization entirely, calling it "ridiculous" and saying that at most "ideas were batted around at a high level about what potential frameworks might look like to encourage cooperation between nations."
What former employees actually said
The New Yorker says it reviewed documents from the period when the plan was discussed. According to the reporting, the "countries plan" was not just an idle thought experiment. It was popular with OpenAI executives and was only abandoned after employees began discussing whether they would quit over it.
One junior researcher who was present during a meeting where the plan came up reportedly summarized their reaction in five words: "This is completely fucking insane."
That quote lands differently when you consider the broader context the article builds. The New Yorker's piece is framed around Sam Altman's trustworthiness, citing multiple people who accuse the OpenAI CEO of habitual dishonesty. The countries plan sits alongside other episodes including Altman's 2023 removal and reinstatement at OpenAI, his ongoing feud with Elon Musk, and his recent signing of a deal with the US Department of War, which put significant strain on whatever remained of his public image as a safety-first AI advocate.
The gap between OpenAI's version and everyone else's
The key here is the direct contradiction between OpenAI's official position and what former employees and reviewed documents apparently show. OpenAI's framing, that executives were merely exploring "frameworks for cooperation," sits awkwardly next to Clark's prisoner's dilemma description and the junior researcher's reaction.
Clark left OpenAI and now works at Anthropic, which gives him some distance from the company but also a professional incentive to be critical. The New Yorker's claim that it reviewed contemporaneous documents is harder to dismiss.
What's easy to miss in stories like this is how a company's internal culture shapes these moments. The countries plan apparently wasn't some rogue idea from one person. According to the reporting, it had genuine traction among leadership before employees pushed back hard enough to kill it.
Why this matters beyond the obvious
Call of Duty has spent two decades building campaigns around exactly this archetype: the private power broker who plays governments against each other for profit and leverage. The franchise recently shifted away from back-to-back Modern Warfare and Black Ops releases to keep its storytelling fresh, but the villain template remains consistent. The fact that a real company's internal discussions apparently mapped so closely onto that template is the kind of detail that would get cut from a game script for being too on the nose.
The full New Yorker piece runs to 16,000 words and covers far more ground than the countries plan alone. For anyone tracking where the AI industry is heading and what it means for the games that increasingly depend on it, this is a story worth following as it develops.