Sam Altman sits with his legs pretzeled in an office chair, staring deeply into the ceiling. To be fair, the new OpenAI headquarters—a temple of glass and blond wood in San Francisco’s Mission Bay—seems to invite this kind of contemplation. A kiosk behind reception holds booklets that describe the “Eras of AI” as if they were steps on the path to enlightenment. Posters on the stairs mark AI’s milestone victories, like the time thousands of humans watched on livestream as a machine beat a top-ranked esports team at Dota 2. In the hallways, researchers pass by in company merch. One shirt reads “Good research takes time.” Ideally, not too much.
Altman and I are in an enormous conference room. The question I put to him is about the AI coding revolution—and why OpenAI doesn’t seem to be leading it. Millions of software engineers have started delegating their programming tasks to AI, forcing many in Silicon Valley to reckon with the automation of their jobs for the first time. Coding agents have emerged as one of the few areas where enterprises are willing to pay a lot for AI. This moment could, and arguably should, be the next triumphant poster on the stairs for OpenAI. But the name in big letters right now belongs to someone else.
Anthropic, a smaller rival started by OpenAI defectors, has found runaway success with its programming agent, Claude Code. The product accounts for about a fifth of its business—more than $2.5 billion in annualized revenue, the company said in February. By the end of January, OpenAI’s version, Codex, was bringing in just over $1 billion in annualized revenue, according to a person with direct knowledge of the matter. What gives?
“First to market is worth a lot,” Altman says finally. “We had that with ChatGPT.” But the time is right for OpenAI to lean into coding, he says. He thinks the company’s AI models are now good enough to power very capable coding agents. (Of course, the company spent billions training them to be that way.) “It's going to be a huge business—just the economic value of it, and then also the general-purpose work that coding can unlock,” Altman says. “I don't throw this around lightly, but I think it's one of these rare multitrillion-dollar markets.” What’s more, he says, Codex is “probably the most likely path” to building artificial general intelligence. By OpenAI’s definition, that’s an AI system that can outperform humans at most economically valuable work.

Sam Altman, OpenAI's chief executive officer.
But while Altman makes confident pronouncements from the serenity of pretzel pose, the reality inside the company over the past few years has been messier. To get the inside story, I spoke with more than 30 people, including current OpenAI leaders and employees who participated with the company’s approval and others who spoke on the condition of anonymity to discuss the inner workings of private companies. Their accounts paint a picture of OpenAI in a position it has rarely ever been in: racing to catch up.
Back in 2021, Altman and other OpenAI leaders invited WIRED writer Steven Levy to their original office in San Francisco’s Mission District to see something new. It was an offshoot of OpenAI’s GPT-3 model, trained on billions of lines of open source code from GitHub. In a demo, executives showed how the tool, Codex, could take in English commands and output simple snippets of code.
"It can actually act in the computer world on your behalf," Greg Brockman, OpenAI’s president and cofounder, said at the time. "You actually have a system that can carry out commands.” Even then, OpenAI researchers thought it was obvious that Codex would be central to developing a “super assistant.”
At this time, Altman’s and Brockman’s lives revolved around meetings with Microsoft, OpenAI’s biggest investor. The software giant was tapping Codex to power one of its first commercial AI products, a code completion tool called GitHub Copilot that worked within a programmer’s everyday environment. Codex “couldn’t do much more than autocomplete” at this stage, an early OpenAI employee told me, but Microsoft executives heralded it as a sign of the AI future. When GitHub Copilot launched publicly in June 2022, it attracted hundreds of thousands of users within months.

Greg Brockman, OpenAI's president.
OpenAI’s original Codex team moved on to other projects. The company planned for coding abilities to be baked into future models, the employee said, and didn’t see the need for a separate effort. Some engineers were reassigned to DALL-E 2, the company’s image generator. Others moved to train GPT-4, which was seen as the best way to get OpenAI closer to AGI.
Then ChatGPT launched in November 2022 and gained more than 100 million users in two months. Every other project ground to a halt. For years afterward, OpenAI didn’t have a dedicated team working on an AI coding product. It seemed to fall outside the company’s newfound consumer focus, one former member of the Codex team says. It also “felt like the space was ‘covered’ by GitHub Copilot,” they continued. OpenAI would supply new models to power the tool, but this was Microsoft’s turf.
OpenAI spent much of 2023 and 2024 investing instead in its multimodal AI models and agents—designed to understand text, images, video, and audio and control a cursor and keyboard much like a human would. This effort seemed more in line with where the AI industry was headed. The startup Midjourney was going viral for its AI image models, and there was a prevailing notion that LLMs needed to see and hear the world in order to gain real intelligence.
Anthropic took a different path. It too dabbled in chatbots and multimodal models, but the company seemed to recognize the promise of coding sooner than OpenAI. On a recent podcast, Brockman commended Anthropic for being “focused very hard on coding” from an early stage. He noted that Anthropic trained its AI models not only on hard coding problems from programming competitions but also on real-world problems from messy code repositories. “That was a lesson that we were delayed on,” Brockman said.
In early 2024, Anthropic was training Claude Sonnet 3.5 on some of those messy code repositories. When the model launched that June, many users were impressed with its coding abilities. This was particularly true at a startup called Cursor, founded by a group of twentysomethings, which let developers code with AI by asking for changes in plain English. When the company incorporated Anthropic’s new model, Cursor’s usage began rocketing upward, according to a person close to the startup. Within months, Anthropic would begin internal testing of its own version: Claude Code.
As Cursor took off in popularity, OpenAI approached the startup about an acquisition. The founders declined the offer before talks ever reached an advanced stage, people close to the startup told me. They saw the potential of the coding industry and wanted to stay independent.

Andrey Mishchenko, OpenAI's research lead for Codex.
At the time, OpenAI was training its first so-called reasoning model, o1, which could work through a problem step by step before delivering an answer. At launch, OpenAI said the model “excels at accurately generating and debugging complex code.” Andrey Mishchenko, OpenAI's research lead for Codex, says a key reason AI models have gotten better at coding is that it's a verifiable task. Code either runs or it doesn't—which gives the model a clear signal when it gets something wrong. OpenAI used this feedback loop to train o1 on increasingly hard coding problems. “Without the ability to crawl around a code base, make changes, and test their own work—these are all under the umbrella of reasoning—coding agents would not be anywhere near as capable as they are today,” he says.
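The "code is verifiable" idea can be made concrete with a small sketch. This is not OpenAI's actual training pipeline—just a minimal illustration, with made-up candidate solutions standing in for model outputs: each candidate is run against unit tests in a subprocess, and the pass/fail result becomes a binary reward signal.

```python
import subprocess
import sys
import tempfile

def verify(candidate_src: str, test_src: str) -> float:
    """Run a candidate solution against unit tests in a fresh subprocess.
    Returns 1.0 if every assertion passes, else 0.0 -- the binary
    'it runs or it doesn't' signal a trainer can optimize against."""
    program = candidate_src + "\n" + test_src
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, timeout=10
    )
    # A failed assertion (or crash) makes the interpreter exit nonzero.
    return 1.0 if result.returncode == 0 else 0.0

# Two hypothetical model samples for the same task: one buggy, one correct.
candidates = [
    "def add(a, b):\n    return a - b",   # wrong operator
    "def add(a, b):\n    return a + b",   # correct
]
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"

rewards = [verify(c, tests) for c in candidates]
print(rewards)  # [0.0, 1.0]
```

The appeal of this setup, relative to grading prose, is that no human judgment is needed in the loop: the test suite is the judge, so the signal scales to millions of problems.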
By December 2024, several small groups inside of OpenAI were starting to focus on AI coding agents. One of them was led by Mishchenko and Thibault Sottiaux, a former Google DeepMind researcher who’s now OpenAI’s head of Codex. Initially, they were most interested in coding agents as a way to speed up AI research—automating the grunt work of managing training runs and monitoring GPU clusters. Another effort was led by Alexander Embiricos, who previously worked on OpenAI’s multimodal agents and is now the product lead for Codex. Embiricos created a demo called Jam that spread widely throughout the company.

Thibault Sottiaux, OpenAI's head of Codex.
Rather than controlling a computer through cursor and keyboard, Jam had direct access to its command line. Where the 2021 Codex demo showed an AI that could output code for a human to run, Embiricos' version could run the code itself. He found himself awestruck watching a webpage that tracked Jam’s actions updating itself over and over on his laptop.
“For a while, I had been thinking that multimodal work might be how we achieve our mission—like we would just be screen-sharing with AI all day,” Embiricos says. “Then it became super clear: Maybe giving models programmatic access to a computer is how we're going to get there.”
It took months for these projects to merge into a unified effort. When OpenAI finished training o3 in early 2025—a model optimized for coding even more than o1—it finally had the foundation to build a real AI coding product. But Claude Code was already poised to launch publicly.
Before Claude Code came out—first as a “limited research preview” in February 2025, then as a general release that May—the state of the art was vibe coding. People were paying hundreds of millions of dollars for tools that let a human programmer steer through a coding task while AI filled in specifics along the way. But Anthropic’s new product, like the Jam demo, worked directly from a computer’s command line, meaning it had access to all of a developer’s files and applications. This was no longer vibe coding; developers could fully offload their work to an AI agent.
OpenAI was scrambling to stand up a competing product. Sottiaux tells me he formed a “sprint team” in March 2025, with a mandate to combine OpenAI’s internal groups and ship an AI coding product in just a few weeks. While that was happening, Altman explored another acquisition that would help OpenAI leapfrog ahead—buying the AI coding startup Windsurf for $3 billion. OpenAI leadership assumed that Windsurf would provide an established AI coding product, a team that knew how to build on it, and an immediate baseline of enterprise customers.
But the Windsurf acquisition sat on ice for months. According to The Wall Street Journal, the holdup was due to Microsoft, OpenAI’s mega-partner in everything, wanting access to Windsurf's intellectual property. The cloud giant had been using OpenAI’s models to power GitHub Copilot since 2021, and the product had become a highlight of Microsoft’s earnings calls. But as Cursor, Windsurf, and Claude Code offered new agentic coding experiences, GitHub Copilot was starting to feel stuck in an earlier era of AI. OpenAI coming out with yet another coding product wouldn’t help.
The Windsurf deal came up during a particularly fraught time in OpenAI and Microsoft’s relationship. The companies were renegotiating their partnership, and OpenAI was trying to loosen Microsoft’s grip over its AI products and computing resources. The Windsurf deal was a casualty of this process, and OpenAI’s deal to acquire the startup fell apart by July. At that point, Google ended up hiring Windsurf's founders; the rest of the team was acquired by Cognition, another coding startup.
“I would have loved to get that done,” Altman says. “You can’t control every deal.” While he’d been hoping that the Windsurf acquisition “would have accelerated us somewhat,” Altman says he was impressed with the trajectory of the Codex team. Sottiaux and Embiricos had kept building and shipping updates during the negotiations. By August, Altman says, OpenAI hit the accelerator.

Alexander Embiricos, OpenAI’s product lead for Codex.
Greg Brockman’s favorite way to measure AI performance is with a computer game he invented called the Reverse Turing Test. He hand-coded it years ago and now challenges AI agents to build their own versions from scratch. He gives them the basics: Two humans on separate computers each see a pair of chat windows on their screens. One window connects to the other human, and one to an AI. The game is to guess which chat window is an AI while fooling your opponent into thinking you are the AI.
For most of last year, Brockman says, it took the company’s best model hours to build such a game, requiring explicit human instructions and help along the way. But by December, Codex was able to generate a fully functional game from a single well-constructed prompt, using the new GPT-5.2 model as its engine.
It wasn’t just Brockman noticing the shift. Developers around the world were noting that AI coding agents had suddenly become markedly better. The discourse—which mostly centered around Claude Code—broke out of Silicon Valley and became a mainstream news story. Everyday people, with no coding experience, started spinning up bespoke software projects.
This spike in usage was no accident. Anthropic and OpenAI spent heavily during this period to acquire new customers for their AI coding agents. Several developers tell WIRED their $200 per month plans for Codex and Claude Code were able to give them well over $1,000 of usage. These generous rate limits are a means to get developers using AI coding products in their workplace, where OpenAI and Anthropic can then charge on a usage basis.
Back in September 2025, Codex had been getting just 5 percent as much usage as Claude Code, according to people with direct knowledge of the matter. By January 2026, Codex’s user base shot up to closer to 40 percent of Claude Code’s, the sources said.
George Pickett, a developer who has worked at tech startups for the past 10 years, recently started organizing meetups around Codex. “I think it's clear we're going to replace white-collar work with agents,” Pickett says. “Societally, who fucking knows what this means. It’s going to be disruptive, but I’m pretty optimistic about what’s happening.”
Simon Last, cofounder of the $11 billion productivity startup Notion, says he and his top engineers switched over to Codex around the launch of GPT-5.2, in large part due to reliability. “I found that Claude Code just lies to me,” Last says. “It says it's working, but it actually isn't.”

Katy Shi, a researcher at OpenAI who works on model behavior.
Katy Shi, a researcher who works on Codex's behavior at OpenAI, says that while some folks describe its default personality as “dry bread,” many have come to appreciate its less sycophantic style. “A lot of engineering work is about being able to take critical feedback without interpreting it as mean,” Shi says.
Several major enterprises have signed on to use Codex as well. “The fact that ChatGPT is synonymous with AI gives us a massive advantage in the B2B market,” says Fidji Simo, OpenAI’s CEO of applications. “Companies want to use technologies their workers are already familiar with.” OpenAI’s strategy to sell Codex is largely based on packaging it in with ChatGPT and other OpenAI products, Simo said.
Cisco’s president and chief product officer, Jeetu Patel, says he has told employees not to worry about the cost of using Codex, because they’ll need to be comfortable with the tool. When employees ask if “they’re going to lose their job because they’re using these tools,” Patel says, “what we have to tell our people is no, but I guarantee you'll lose your job if you don't use them, because you won't be relevant. So you're going to be out.”
Today, the panic about AI coding agents has spread far beyond Silicon Valley. The Wall Street Journal credited Claude Code with causing a $1 trillion tech stock sell-off last month, as investors feared that software would soon become entirely obsolete. Weeks later, IBM’s stock had its worst day in 25 years after Anthropic announced that Claude Code could be used to modernize legacy systems that run COBOL, common on IBM machines. OpenAI has worked tirelessly to make its AI coding agent part of the social conversation, spending millions of dollars on a Super Bowl commercial about Codex, rather than ChatGPT.
At the Mission Bay temple, no one needs to be pitched on Codex. Many OpenAI engineers I spoke with said they rarely type out code at all anymore. They just spend their days speaking to Codex. And sometimes they get together and do it in congregation.
At headquarters, I sat in on a Codex hackathon—about 100 engineers crowded into a large room. Everyone had four hours to build the best demo with Codex. A senior OpenAI leader stood at the front of the room, glancing away from the laptop in his hands and speaking team names into a microphone. Team representatives nervously walked to a podium and gave short speeches about their AI projects through shaky voices. Winners received Patagonia backpacks.
Many of the projects were both created with Codex and designed to help engineers use Codex better. One group built a tool that summarizes Slack messages into weekly reports. Another group built an AI-generated Wikipedia-style guide to internal OpenAI services. Many of these demonstrations would have taken days or weeks to spin up previously, but now they can be done in an afternoon.
On my way out the door, I ran into Kevin Weil, the former Instagram executive who is now heading OpenAI for Science, the company’s new unit building AI products for researchers. He told me Codex was working on some projects for him overnight, and he would check on them in the morning. That’s become standard practice for Weil, and hundreds of other employees. One of OpenAI’s goals for 2026 is to create an automated intern that does research on (what else?) AI.
Simo tells me the company wants Codex to eventually power features in ChatGPT and all of its products—not for programming, but to complete tasks for people. Altman says he’d love to release a general-purpose version of Codex, but he’s worried about the safety implications. In late January, he says, one of his nontechnical friends asked him to set up OpenClaw, a viral AI coding agent. Altman told me he declined, as it was “clearly not a good idea yet,” since OpenClaw could delete important files. A few weeks after Altman told me this, OpenAI announced that it was hiring the creator of OpenClaw.
Many developers I spoke with told me the competition between Codex and Claude Code has never been tighter. But as these tools become more capable—and more widely imposed by corporate leaders seeking efficiency—there are bigger questions for society to contend with than which coding agent to use.

Amelia Glaese, OpenAI’s VP of research and head of alignment.
Some watchdogs are worried that OpenAI’s race to catch up with Claude Code will put safety on the back burner. A nonprofit called the Midas Project accused OpenAI of falling short on its safety commitments with GPT-5.3-Codex, failing to properly outline the model’s cybersecurity risks. Amelia Glaese, OpenAI's head of alignment, rejects the idea that safety is being sacrificed for Codex, and OpenAI says Midas misinterpreted the company’s commitments.
Even for Brockman—who last year donated $25 million each to a pro-AI super PAC and a pro-Trump one to advance OpenAI’s mission, and who says brightly that “we’re right on schedule” to reach AGI—the new reality evokes mixed feelings. Among engineers in Silicon Valley, he has always been known as an obsessive, the kind of boss who dives into code bases the night before a product launch. In many ways, this new hands-off era is “very freeing, because you realize that your mind has been burdened by a bunch of unnecessary details,” he says. However, when you become “the CEO of this fleet of hundreds of thousands of agents that are completing your objectives, your goals, your vision,” he says, “you're not as in the weeds on exactly how different things are solved.” In some ways, Brockman says, this new mode of work can make you “feel like you're losing your pulse on the problem.”
For dispatches from the heart of the AI scene in Silicon Valley, sign up for Maxwell Zeff’s weekly Model Behavior newsletter.
Let us know what you think about this article. Submit a letter to the editor at [email protected].










