IBM, AWS veteran says 90% of your employees are stuck in first gear with AI, just asking it to ‘write their mean email in a slightly more polite way’

Companies are spending millions on artificial intelligence tools with big hopes of boosting productivity. Leaders imagine faster workdays, smarter decisions, and teams freed from boring tasks. But in reality, most employees are barely scratching the surface of what AI can actually do. Instead of transforming the way they work, many are using AI for the simplest things—almost like a fancy spellcheck or search engine.

Allie K. Miller is the CEO of Open Machine, which advises Fortune 500 companies on strategy and tech adoption.

That was the blunt message shared by Allie K. Miller, CEO of Open Machine, during the Fortune Brainstorm AI conference last week in San Francisco. Drawing from years of hands-on experience at IBM and Amazon Web Services (AWS), Miller spoke with both excitement and frustration about what she’s seeing inside major companies.

She explained that AI can be used in four different ways, each more powerful than the last. According to Miller, AI can act as a microtasker, a companion, a delegate, or eventually a true teammate. The problem? Most people never make it past the first step.

At the microtasker level, AI is treated like a smarter Google: something you turn to for a quick question or a one-off task, then move on from. Miller believes this mindset wastes enormous potential.

“Most people stop there,” she said, almost sighing. “They don’t realize AI can reason, adapt, and collaborate.”

Miller’s biggest concern is how employees interact with large language models (LLMs). Traditional software needed exact instructions to deliver exact results. AI is different—it can think through problems, adjust, and even suggest better approaches. When users treat AI like old-school software, they miss out on what makes it powerful.

“Ninety percent of your employees are stuck in this mode,” Miller said. “And so many of them believe they’re AI power users, when really all they’re doing is asking it to rewrite a mean email in a slightly nicer tone.”

That habit, she warned, is quietly holding companies back. It’s not just disappointing—it’s expensive.

“When people stay stuck there, your annual AI subscriptions become almost worthless,” she added, urging leaders to rethink how they train teams and measure AI success.

Her concerns are supported by real data. A November study from Cornerstone OnDemand revealed a growing “shadow AI economy” inside companies. While 80% of employees admit to using AI at work, fewer than half have received any formal training. Many are experimenting on their own, unsure, curious, and often afraid of doing the wrong thing.

To unlock real value, Miller encouraged organizations to move beyond microtasks and embrace the next three modes: companion, delegate, and, most importantly, AI as a teammate.

In teammate mode, AI doesn’t just wait for instructions. It becomes part of the workflow—sitting in meetings, answering questions, and even taking action. Miller pointed to how engineers at OpenAI already treat Codex, their software engineering agent, as a coworker inside Slack.

A delegate might handle something like cleaning up an inbox or completing a 40-minute task. But a teammate is different. It changes how work happens altogether.

“We’re heading toward a future where we’re not constantly prompting AI,” Miller predicted. “AI will be prompting us—because it will already live inside our systems and support the entire team.”

Even for companies that don’t build AI products, this shift matters. When AI becomes embedded into daily workflows, it stops feeling like a novelty and starts becoming a true productivity engine.

“The real power,” Miller emphasized, “is when AI lifts up the whole system, not just one person.”

To help companies move forward without fear, Miller introduced the idea of Minimum Viable Autonomy (MVA). Inspired by the concept of a minimum viable product, MVA encourages leaders to stop obsessing over perfect prompts and start focusing on goals instead.

“We’re not giving AI 18-page instructions anymore,” she explained. “We give it goals, boundaries, and rules—and let it work backward from there.”

To keep things safe and controlled, she suggested using clear “agent protocols.” These rules define what AI should always do, what it should ask permission for, and what it should never do. She also proposed a balanced risk strategy: 70% of AI work on low-risk tasks, 20% on complex cross-team work, and 10% on bold, strategic tasks that can reshape how an organization operates.
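Miller did not spell out how such a protocol would be implemented, but her description maps naturally onto a simple policy structure. The sketch below is purely illustrative, not something from her talk: the class name, action labels, and risk categories are all hypothetical, and the 70/20/10 figures simply mirror the split she described.

```python
# Hypothetical sketch of an "agent protocol": explicit lists of what an agent
# may always do, must ask permission for, and must never do, plus a target
# 70/20/10 allocation of work across risk tiers. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class AgentProtocol:
    always_allowed: set[str] = field(default_factory=set)
    ask_first: set[str] = field(default_factory=set)
    never_allowed: set[str] = field(default_factory=set)
    # Share of agent work by risk tier: low-risk, cross-team, strategic.
    risk_budget: dict[str, float] = field(
        default_factory=lambda: {"low_risk": 0.70, "cross_team": 0.20, "strategic": 0.10}
    )

    def decide(self, action: str) -> str:
        """Return 'allow', 'ask', or 'block' for a proposed action."""
        if action in self.never_allowed:
            return "block"
        if action in self.ask_first:
            return "ask"
        if action in self.always_allowed:
            return "allow"
        # Unknown actions default to asking a human, the cautious choice.
        return "ask"


protocol = AgentProtocol(
    always_allowed={"summarize_meeting", "draft_reply"},
    ask_first={"send_email", "update_ticket"},
    never_allowed={"delete_records", "sign_contract"},
)

print(protocol.decide("draft_reply"))     # allow
print(protocol.decide("send_email"))      # ask
print(protocol.decide("delete_records"))  # block
```

The point of a structure like this is less the code than the discipline: the boundaries are written down before the agent acts, rather than negotiated prompt by prompt.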


The Warning for the Next Decade

Miller ended her talk with a mix of excitement and warning. She predicted that within months, AI systems will be able to work autonomously for more than eight hours at a time. As costs fall, companies will stop running single AI queries and start running hundreds of thousands of simulations before launching products.

But for leaders who cling to old habits, the future may feel uncomfortable.

“The most important question won’t be whether AI is impressive,” Miller concluded. “It will be whether it’s good—ethical, safe, and aligned with human goals.”

She paused before her final thought.

“AI isn’t just another tool,” she said. “And the companies that keep treating it like one are going to spend the next decade wondering what they missed.”