“What Will Be Left For Humans To Do?”: OpenAI Engineer Admits Feeling an ‘Existential Threat’ From AI
In a candid and emotional post on X (formerly Twitter), OpenAI engineer Hieu Pham shared something unexpected — fear. Not fear of failure, but fear of the very technology he helps build.
Pham, a Member of Technical Staff at OpenAI who previously worked at xAI, Augment Code, and Google Brain, admitted that the rapid rise of artificial intelligence has begun to feel deeply unsettling to him.
"Today, I finally feel the existential threat that AI is posing,” he wrote honestly. His words weren’t dramatic or exaggerated — they felt personal. “When AI becomes overly good and disrupts everything, what will be left for humans to do? And it’s when, not if.”
His message reflects a growing discomfort, even among top AI researchers. These are the people building the world’s most powerful systems — tools that can now write, code, research, reason, and solve complex problems faster than ever before. But with each breakthrough comes a quiet question that’s becoming harder to ignore: If machines can do almost everything, where does that leave us?
Across companies like OpenAI, Google, xAI, and Anthropic, the race to create smarter and more capable AI models is accelerating. At the same time, concerns about job losses, social disruption, and losing control over these systems are becoming more intense. For many in the tech world, excitement and anxiety now exist side by side.
Anthropic Safety Lead’s Exit Sparks Debate
Pham’s comments came shortly after another powerful voice raised similar concerns.
Mrinank Sharma, who led Anthropic’s Safeguards Research Team, resigned on February 9. He had joined the company in 2023 to focus on AI safety, ensuring that advanced systems remain aligned with human values.
In a public letter explaining his decision, Sharma didn’t hold back. “The world is in peril,” he wrote. “And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
His words carried urgency and frustration. He warned that humanity may be approaching a critical turning point, a moment when our wisdom needs to grow just as quickly as our technological power. If it doesn’t, he suggested, the consequences could be severe.
Sharma also hinted at a difficult reality inside AI companies. While many publicly commit to safety and responsible development, internal pressures — competition, funding, and corporate goals — can make it challenging to always put ethics first. It’s a tension that few openly talk about.
Warnings From the “Godfather of AI”
These concerns are not new. Geoffrey Hinton, often called the “godfather of AI” for his groundbreaking work in deep learning, has repeatedly expressed worry about the speed of AI development.
Hinton has warned that if AI systems surpass human intelligence without sharing human goals, they could become impossible to control. Once that happens, he has said, “the idea that you could just turn it off won’t work.”
He has even expressed regret about how quickly AI capabilities are advancing — a striking admission from someone who helped make this technology possible.
