Human Typing Speed, Not AI Power, Is Slowing the Path to AGI: OpenAI Codex Lead

OpenAI Codex lead says slow human prompting and validation—not AI capability—is the hidden bottleneck delaying Artificial General Intelligence.

Artificial intelligence is advancing at a pace few could have imagined a decade ago. From powerful language models to autonomous agents that can write code, analyse data, and generate creative content, AI is reshaping how work gets done. Yet, despite massive investments running into hundreds of billions of dollars, Artificial General Intelligence (AGI)—AI that can reason and think at a human level—remains out of reach. According to OpenAI Codex lead Alexander Embiricos, the biggest obstacle may not be technological at all, but human.

Speaking on an episode of Lenny’s Podcast, Embiricos argued that the current ceiling on AGI progress is set by the speed at which humans can interact with AI systems. He described “human typing speed” and “human multi-tasking speed on writing prompts” as the “current underappreciated limiting factor” holding back faster breakthroughs in AI development.

Today’s AI workflows still rely heavily on people to write prompts, guide agents, and review outputs. While AI systems can generate results at incredible speed, humans remain responsible for validating and correcting those results—a process that simply does not scale. Embiricos explained, “You can have an agent watch all the work you're doing, but if you don't have the agent also validating its work, then you're still bottlenecked on, like, can you go review all that code?”

In other words, AI may be fast, but humans are not fast enough to keep up. This mismatch creates friction, slowing the feedback loops that are essential for rapid iteration and improvement. As a result, the pace of innovation becomes constrained not by compute power or model intelligence, but by human attention and manual effort.

To overcome this bottleneck, Embiricos believes AI systems must be fundamentally rethought. He argues for a future where AI agents are capable of validating their own work, reducing the need for constant human oversight. As he puts it, “We need to unburden humans from having to write prompts and validate AI's work, since we aren't fast enough.”
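
To make the idea concrete, here is a minimal Python sketch of what such a self-validating agent loop might look like. Every name in it is a hypothetical placeholder rather than an actual OpenAI or Codex API; it simply illustrates the generate-validate-revise pattern Embiricos describes, in which human attention is spent only on work the agent could not validate itself.

```python
# Hypothetical sketch: an agent that checks its own output before a human
# ever sees it. All functions here are illustrative stand-ins, not a real API.

from dataclasses import dataclass

@dataclass
class Result:
    output: str
    validated: bool
    attempts: int

def generate(task: str, feedback: str | None = None) -> str:
    """Stand-in for a model call that drafts a solution to the task."""
    suffix = f" (revised after: {feedback})" if feedback else ""
    return f"draft solution for {task!r}" + suffix

def validate(output: str) -> tuple[bool, str]:
    """Stand-in for automated checks: tests, linters, or a critic model."""
    ok = "solution" in output  # placeholder check
    return ok, "" if ok else "output missing required content"

def self_validating_agent(task: str, max_attempts: int = 3) -> Result:
    """Generate-validate-revise loop: the human only sees work that passed,
    or a flagged escalation, instead of reviewing every intermediate draft."""
    output, feedback = "", None
    for attempt in range(1, max_attempts + 1):
        output = generate(task, feedback)
        ok, feedback = validate(output)
        if ok:
            return Result(output, validated=True, attempts=attempt)
    # Escalate only the failures: this is where human review effort goes,
    # rather than into checking every output the agent produces.
    return Result(output, validated=False, attempts=max_attempts)

if __name__ == "__main__":
    result = self_validating_agent("add pagination to the API")
    status = "passed self-validation" if result.validated else "needs human review"
    print(f"{status} after {result.attempts} attempt(s): {result.output}")
```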

His vision involves building systems where AI agents are “default useful” from the start—capable of acting autonomously while maintaining reliability. If achieved, this shift could lead to what Embiricos describes as “hockey stick” growth, a sharp upward curve in productivity driven by automation. “If we can rebuild systems to let the agent be default useful, we'll start unlocking hockey sticks,” he said.

Still, Embiricos cautioned that there is no single solution that will work across all industries or applications. Fully automated workflows will require careful, case-by-case design. Even so, he is optimistic about what lies ahead.

Looking to the near future, Embiricos predicts that early adopters will be the first to experience dramatic gains. “Starting next year, we're going to see early adopters starting to hockey stick their productivity, and then over the years that follow, we're going to see larger and larger companies hockey stick that productivity,” he said.

Ultimately, Embiricos believes that once these productivity gains flow back into AI research itself, the conditions for AGI will finally emerge. “That hockey-sticking will be flowing back into the AI labs, and that's when we'll basically be at the AGI.”
