Eric Schmidt Warns Against a ‘Manhattan Project’ for AGI

On Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks published a policy paper urging the U.S. to rethink its approach to artificial general intelligence (AGI). They argue that aggressively pursuing superhuman AI—similar to the Manhattan Project that developed the atomic bomb—could have dangerous consequences.

The paper, titled Superintelligence Strategy, warns that if the U.S. tries to dominate AGI development, it could trigger a fierce response from China. The authors suggest that China might see such a move as a threat and retaliate with cyberattacks, further escalating global tensions.

“A Manhattan Project for AGI assumes that other nations will simply accept being left behind or even risk catastrophic conflict rather than acting to stop it,” the co-authors write. “What starts as a race for control could quickly turn into an arms race, increasing the risk of war rather than preventing it.”

This argument comes at a time when U.S. policymakers are pushing for massive investment in AGI. A few months ago, a congressional advisory body, the U.S.-China Economic and Security Review Commission, proposed a Manhattan Project-style initiative to fund AGI research. U.S. Secretary of Energy Chris Wright recently echoed this sentiment, calling America’s AI efforts the start of a new Manhattan Project while standing beside OpenAI co-founder Greg Brockman.

However, Schmidt, Wang, and Hendrycks challenge this idea. They believe that a rush to control AGI could recreate the dynamics of the Cold War’s nuclear standoff: just as no country could claim a monopoly on nuclear weapons without provoking a preemptive strike, they argue, the U.S. cannot expect rivals to stand by while it races for sole control of AI.

It might sound dramatic to compare AI to nuclear weapons, but world leaders already treat AI as a major military asset. The Pentagon itself has acknowledged that AI is speeding up military decision-making, including on the battlefield.

To prevent an AI arms race, the authors introduce a concept they call Mutual Assured AI Malfunction (MAIM). Modeled loosely on the nuclear doctrine of mutual assured destruction, the strategy holds that governments should retain the ability to proactively disable any rival AI project that threatens to become destabilizing, deterring nations from weaponizing AGI in the first place.

Rather than focusing on "winning" the race to AGI, Schmidt and his co-authors argue that the U.S. should prioritize defensive strategies. They propose expanding America’s cyber capabilities to disable hostile AI projects, restricting adversaries' access to advanced AI chips, and limiting open-source AI models that could be exploited.

The debate over AI policy has largely been divided into two camps:

  • The “doomers,” who believe AI will inevitably lead to disaster and push for slowing down development.
  • The “ostriches,” who believe AI progress should accelerate, trusting that things will work out.

Schmidt and his team propose a middle ground: a carefully measured approach that focuses on security rather than dominance.

This position is particularly striking coming from Schmidt, who has previously advocated for aggressive competition with China in AI development. Not long ago, he wrote an op-ed declaring that China’s progress in AI, particularly with projects like DeepSeek, marked a turning point in the AI race.

Despite the Trump administration's clear push for AI supremacy, Schmidt and his co-authors remind policymakers that AGI isn’t just America’s concern—it’s a global issue.

As the world watches the U.S. push the boundaries of AI, the authors suggest it might be wiser to step back and focus on defense rather than escalation.