OpenAI has effectively canceled the release of o3, which was slated to be the company’s next major AI model, in favor of what CEO Sam Altman is calling a “simplified” product offering.
In a post on X on Wednesday, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in its AI-powered chatbot platform ChatGPT and API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.
The company originally said in December that it planned to launch o3 sometime early this year.
“We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings,” Altman said. “We want AI to ‘just work’ for you; we realize how complicated our model and product offerings have gotten. We hate the model picker [in ChatGPT] as much as you do and want to return to magic unified intelligence.”
Altman also announced that OpenAI plans to offer unlimited chat access to GPT-5 at the “standard intelligence setting,” subject to “abuse thresholds,” once the model is made available. (Altman declined to provide more detail on what this setting and these abuse thresholds entail.) Subscribers to ChatGPT Plus will be able to run GPT-5 at a “higher level of intelligence,” Altman said, while ChatGPT Pro subscribers will be able to run GPT-5 at an “even higher level of intelligence.”
“These models will incorporate voice, canvas, search, deep research, and more,” Altman said, referring to a range of features OpenAI has launched in the past few months. “[A] top goal for us is to unify [our] models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.”
Before GPT-5 launches, OpenAI will release GPT-4.5, code-named “Orion,” in the next several weeks; Altman says it will be the company’s last “non-chain-of-thought model.” Unlike o3 and OpenAI’s other so-called reasoning models, non-chain-of-thought models tend to be less reliable in domains like math and physics.