OpenAI released GPT-5.5 on April 23, positioning it as "a new class of intelligence for real work" and rolling it out across ChatGPT and the Codex app, with API access held back pending additional safeguards. In the bake-off against Anthropic's Opus 4.7 from the prior week, Artificial Analysis crowned GPT-5.5 the top independently validated model in the world on its Intelligence Index: GPT-5.5 at medium reasoning scores the same as Opus 4.7 at max reasoning at roughly one quarter of the cost, about $1,200 versus $4,800 to run the index end to end. Gemini 3.1 Pro Preview matches that score at around $900. Coverage across OpenAI's own system card, Simon Willison's preview notes, Ethan Mollick's early-access write-up, the NVIDIA blog, Latent Space's AI News, and TechCrunch converged on a consistent profile: stronger long-horizon execution, noticeably better agentic coding, broader computer use, and improved token efficiency. Ethan Mollick's illustrative test, a procedurally generated 3D simulation of a harbor town from 3000 BCE to 3000 CE, was rendered as a genuinely evolving town only by GPT-5.5 Pro, and the new model finished in 20 minutes a task that took GPT-5.4 Pro 33 minutes.
The release is also a super-app moment for Codex. OpenAI bundled GPT-5.5 with a major Codex refresh that folds in the capabilities of its now-defunct Prism acquisition, adds built-in browser control, and ships with schedules, triggers, plugins, skills, and a unified project/thread/file workspace. More than 10,000 NVIDIA employees across engineering, product, legal, marketing, finance, sales, HR, and operations are already using GPT-5.5-powered Codex in a Jensen-led company-wide rollout. NVIDIA's disclosure notes that Codex is served on GB200 NVL72 rack-scale systems, which the post frames as delivering roughly 35 times lower cost per million tokens and 50 times higher token output per second per megawatt than prior-generation hardware, the economics that make enterprise-scale frontier inference viable. The practical signal is that ongoing engineering work, with its async gaps (a pull request opened, a CI run awaited, a review round pending), now fits inside one tool.
Pricing lands at about $5 per million input tokens and $30 per million output tokens for GPT-5.5 Pro, higher than Opus 4.7, but Artificial Analysis's intelligence-per-dollar curves make the combined offering competitive with the Claude stack on most real workloads. Simon Willison notes one remaining friction: general API access is still missing at launch. The semi-official "/backend-api/codex/responses" path that some agent harnesses such as Pi and Opencode use is, however, now tacitly supported by OpenAI, which has hired the OpenClaw creator and publicly welcomes third parties integrating with ChatGPT subscriptions. That is a direct contrast with Anthropic, which blocked OpenClaw from doing the same with Claude subscriptions earlier in the year. The pattern coming out of the week is that raw one-dimensional intelligence scores are giving way to intelligence-per-dollar as the real comparison space, with OpenAI, Anthropic, and Google all jockeying for position on that two-dimensional Pareto frontier.
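The two-dimensional framing has a concrete meaning: a model sits on the cost/intelligence Pareto frontier if no rival is both cheaper and at least as smart. A minimal sketch of that check, using the benchmark-cost figures quoted above as placeholder inputs (the shared score of 71 is an assumed value, not an Artificial Analysis number):

```python
# Illustrative Pareto-frontier check over (cost, intelligence) points.
# Costs are the end-to-end benchmark figures quoted in the text; the
# shared intelligence score of 71 is a made-up placeholder.

def pareto_frontier(models):
    """Return the names of models not dominated by any other model.

    A model is dominated if some other model is no more expensive AND
    no less intelligent, with at least one of those comparisons strict.
    """
    frontier = []
    for name, cost, score in models:
        dominated = any(
            o_cost <= cost and o_score >= score
            and (o_cost < cost or o_score > score)
            for o_name, o_cost, o_score in models
            if o_name != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

models = [
    # (name, benchmark cost in USD, intelligence score)
    ("GPT-5.5 (medium)", 1200, 71),
    ("Opus 4.7 (max)", 4800, 71),
    ("Gemini 3.1 Pro Preview", 900, 71),
]

print(pareto_frontier(models))  # at equal scores, only the cheapest survives
```

With all three tied on score, only the cheapest model is non-dominated, which is exactly why a frontier plot rewards cost cuts as much as capability gains.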
- OpenAI's own announcement frames GPT-5.5 as optimized for coding, research, and data analysis across tools.
- NVIDIA emphasizes that 10,000+ NVIDIA employees are already using GPT-5.5-powered Codex on GB200 NVL72, citing 35× lower cost per million tokens vs prior hardware.
- Simon Willison highlights the missing API access at launch and the OpenClaw/Codex subscription-backdoor dynamic contrasting with Anthropic's recent block.
- Ethan Mollick's procedural-town test found GPT-5.5 Pro was the only model that actually modeled an evolving town rather than replacing buildings in place.
- Latent Space/swyx notes Artificial Analysis's finding: GPT-5.5 (medium) matches Opus 4.7 (max) at ~¼ the cost, but Gemini 3.1 Pro Preview still undercuts both.
- TechCrunch frames it as OpenAI's "super app" moment, with Codex folding in the defunct Prism's browser-control stack.
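The $5 input / $30 output list prices only turn into a single comparable number once you assume a traffic mix. A back-of-envelope sketch (the 3:1 input-to-output token ratio is an assumption for illustration, not a figure from any of the sources):

```python
# Blended per-million-token cost for GPT-5.5 Pro at the quoted list prices.
# The 3:1 input:output workload mix below is an assumed example, not sourced.

INPUT_PER_M = 5.00    # USD per million input tokens (quoted above)
OUTPUT_PER_M = 30.00  # USD per million output tokens (quoted above)

def blended_cost_per_million(input_tokens, output_tokens):
    """Weighted-average cost per million tokens for a given traffic mix."""
    total = input_tokens + output_tokens
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / total

# Example: a workload that reads 3 tokens for every 1 it generates.
print(f"${blended_cost_per_million(3, 1):.2f} per million tokens")  # $11.25
```

The point of the exercise is that output-heavy agentic workloads drift toward the $30 rate while retrieval-heavy ones stay near $5, which is why intelligence-per-dollar comparisons depend on the workload as much as the price sheet.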