TECHNOLOGY | 15 May 2026
Mira Murati’s Vision: Keeping Humans in the AI Loop
Mira Murati, founder of The Thinking Machines Lab and former OpenAI chief, tells WIRED she is building AI that collaborates with people rather than replaces them. The approach signals a strategic pivot toward augmentative intelligence in a sector increasingly wary of automation excess.
The Editorial Team
The Vertex

Source: www.wired.com
When WIRED sat down with Mira Murati, the former chief technology officer of OpenAI and now founder of The Thinking Machines Lab, she made clear that her ambition is not to automate people out of work but to weave artificial intelligence into the fabric of human collaboration.
Murati’s stance reflects a broader shift in the AI community from pure automation toward augmentative systems that keep humans “in the loop.” By designing models that can solicit feedback, verify facts, and co‑author content, she aims to preserve agency while harnessing the productivity gains of large language models. This could mitigate the feared displacement of creative and analytical labor, yet it also raises questions about accountability, bias propagation, and the economic value of human judgment.
The Thinking Machines Lab emerges at a moment when generative AI has already begun reshaping journalism, software development, and scientific research. OpenAI’s original charter emphasized safe, shared progress, but recent product launches have prioritized speed over deliberation. Murati’s emphasis on human‑in‑the‑loop aligns with emerging safety frameworks and the growing regulatory scrutiny of opaque model behavior, positioning her venture as a bridge between experimental research and responsible deployment.
If adopted widely, human-centred AI could redefine workflows across sectors, turning AI from a replacement tool into a collaborative partner. The challenge will lie in aligning incentives: ensuring that companies reward transparent, verifiable contributions rather than mere efficiency gains. As regulation matures and user trust deepens, Murati’s vision may set a precedent for a more balanced, human-aligned AI ecosystem and encourage more inclusive standards for AI deployment.