OpenAI's new o3 and o4-mini models are all about 'thinking with images'

A mere two days after announcing GPT-4.1, OpenAI is releasing not one but two new models. The company today announced the public availability of o3 and o4-mini. OpenAI says o3 is its most advanced reasoning model yet, showing "strong performance" in coding, math and science tasks, while it bills o4-mini as a lower-cost alternative that still delivers "impressive results" across those same fields.

More notably, both models offer novel capabilities not found in OpenAI's past systems. For the first time, the company's reasoning models can independently use all of the tools available in ChatGPT, including web browsing and image generation. The company says this capability allows o3 and o4-mini to solve challenging, multi-step problems more effectively and "take real steps toward acting independently."
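ChatGPT's built-in tools aren't something developers wire up themselves, but the closest analogue in OpenAI's API is function calling, where the model decides on its own whether to invoke a tool you describe. Here is a minimal sketch using the OpenAI Python SDK, assuming o3 is exposed through the Chat Completions endpoint under that name; get_weather is a hypothetical tool invented purely for illustration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe a tool the model may choose to call.
# get_weather is a made-up example, not a real OpenAI tool.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="o3",  # assumes o3 is available under this name via the API
    messages=[{"role": "user", "content": "Should I pack an umbrella for Tokyo?"}],
    tools=tools,
)

# If the model decided a tool call was needed, it shows up here.
print(response.choices[0].message.tool_calls)
```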

At the same time, o3 and o4-mini don't just see images; they can also interpret and "think" about them in a way that significantly extends their visual processing capabilities. For instance, you can upload images of whiteboards, diagrams or sketches — even poor-quality ones — and the new models will understand them. They can also adjust the images as part of how they reason.
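In API terms, that kind of image understanding maps onto multimodal message content. A rough sketch, again assuming o3 accepts image inputs through the standard Chat Completions endpoint (whiteboard.png is a placeholder path):

```python
import base64
from openai import OpenAI

client = OpenAI()

# Encode a local whiteboard photo as a data URL.
with open("whiteboard.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="o3",  # assumes o3 accepts image inputs via the API
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the architecture sketched on this whiteboard."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)
```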

Separately, OpenAI is releasing a new coding agent (à la Claude Code) named Codex CLI. It's designed to give developers a minimal interface they can use to link OpenAI's models with their local code. Out of the box, it works with o3 and o4-mini, with support for GPT-4.1 on the way. 
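For a sense of the workflow, here is a sketch of getting started with Codex CLI, assuming the npm package name from OpenAI's repository; the --model flag is an assumption drawn from the tool's documentation rather than something confirmed in the announcement:

```bash
# Install the agent globally (package name per OpenAI's codex repository)
npm install -g @openai/codex

# The CLI authenticates with an API key from the environment
export OPENAI_API_KEY="your-key-here"

# Run it from a project directory and give it a task; the --model
# flag for selecting o4-mini is an assumption based on the README
codex --model o4-mini "explain what this codebase does"
```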

Today's announcement comes after OpenAI CEO Sam Altman said the company was changing course on the roadmap he detailed in February. At the time, Altman indicated OpenAI would not release o3, which the company first previewed late last year, as a standalone product. However, at the start of April, he announced a "change of plans," noting OpenAI was moving forward with the release of o3 and o4-mini.  

"There are a bunch of reasons for this, but the most exciting one is that we are going to be able to make GPT-5 much better than we originally though," he wrote on X. "We also found it harder than we thought it was going to be to smoothly integrate everything. and we want to make sure we have enough capacity to support what we expect to be unprecedented demand."

That means the streamlining Altman promised in February will likely need to wait until at least the release of GPT-5, which he said would arrive sometime in the next "few months." 

In the meantime, ChatGPT Plus, Pro and Team users can begin using o3 and o4-mini starting today. Sometime in the next few weeks, OpenAI will launch o3-pro, an even more powerful version of its flagship reasoning model, and make it available to Pro subscribers. For the time being, those users can continue to use o1-pro.