o3-mini is OpenAI's previous-generation small reasoning model, now superseded by o4-mini but still available and widely used. Released in early 2025, it brought chain-of-thought reasoning to a cost-efficient form factor and was notable for its strong performance on math, science, and coding tasks relative to its price. At $1.10 per million input tokens and $4.40 per million output tokens, it remains one of the most affordable ways to access genuine reasoning capability in the OpenAI lineup.
For workloads already integrated with o3-mini, there's no urgent reason to migrate — it continues to perform well on the tasks it was designed for. New workloads should consider o4-mini first, which offers improved coding and visual capabilities at a comparable price point.
Reasoning: Chain-of-thought reasoning for math, logic, science, and coding problems. At launch in early 2025, it demonstrated strong performance on AIME and GPQA benchmarks, placing it among the top reasoning models at its price tier.
Math & Science: Particularly strong on mathematical reasoning and scientific problem solving — one of its headline use cases at launch.
Coding: Solid coding reasoning capabilities, though o4-mini has since moved ahead on this dimension.
Cost-Efficient Reasoning: At $1.10/$4.40 per million tokens, one of the most affordable reasoning models available from a major lab.
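The per-token rates above translate directly into per-request costs. A minimal sketch of that arithmetic (the rates are from this page; the note that hidden reasoning tokens bill as output tokens reflects how OpenAI's usage accounting generally works for reasoning models and should be verified against current API docs):

```python
# Estimate the cost of a single o3-mini request from token counts,
# using the published $1.10 / $4.40 per-million-token rates.

O3_MINI_INPUT_PER_M = 1.10   # USD per 1M input tokens
O3_MINI_OUTPUT_PER_M = 4.40  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request.

    Reasoning models bill their hidden chain-of-thought tokens as output
    tokens, so output_tokens should include reasoning tokens (they are
    folded into the completion token count the API reports).
    """
    return (input_tokens * O3_MINI_INPUT_PER_M
            + output_tokens * O3_MINI_OUTPUT_PER_M) / 1_000_000

# Example: a 2,000-token prompt producing 10,000 output tokens
# (visible answer plus reasoning) costs about $0.0462.
print(f"${estimate_cost(2_000, 10_000):.4f}")
```

Because reasoning output dominates the bill at these rates, budgeting by expected output tokens rather than prompt size gives a more realistic estimate.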
o3-mini has been superseded by o4-mini, which offers better coding performance, visual reasoning, and a larger maximum output window. o3-mini also lacks vision support: unlike o4-mini, it processes only text input. For new projects, o4-mini is generally the better choice unless o3-mini's specific $1.10/$4.40 pricing is a requirement.
February 26, 2026