OpenAI Launches o3-mini AI Reasoning Model, Free Users Can Also Try It

Official Launch of the o3-mini AI Reasoning Model
On February 1st, OpenAI CEO Sam Altman officially launched the o3-mini AI reasoning model in ChatGPT and the API, two weeks after first announcing it. For the first time, a rate-limited version is also available to free ChatGPT users.
Performance Improvements and Visible Reasoning Process
The o3-mini model delivers notable performance gains, responding 24% faster than the previous o1-mini model while producing more accurate answers. Like o1-mini, o3-mini displays its reasoning process rather than just the final answer, so users can better follow how the model reaches its conclusions.
How Developers Can Use It
Developers can access o3-mini through OpenAI's API services, including the Chat Completions API, the Assistants API, and the Batch API, for a wide range of application development. This gives developers more options and more capable tools for building intelligent applications.
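As a rough illustration, the sketch below shows one way to request o3-mini through the Chat Completions API. It assumes the official openai Python SDK (v1.x) with an OPENAI_API_KEY set in the environment; the prompt text is purely a placeholder.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt; reasoning_effort controls how much "thinking" the model does
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # "low", "medium", or "high"
    messages=[
        {"role": "user", "content": "Explain the idea behind binary search in two sentences."}
    ],
)

print(response.choices[0].message.content)
```

Raising reasoning_effort to "high" roughly corresponds to the o3-mini-high option described below: slower responses in exchange for more thorough reasoning.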
User Levels and Usage Limits
In ChatGPT, o3-mini defaults to medium reasoning effort. Paid users can also select o3-mini-high, a higher-intelligence variant, at the cost of slightly longer response times. Pro users can use both o3-mini and o3-mini-high without restrictions.
ChatGPT Plus, Team, and Pro users can access o3-mini. OpenAI has also tripled the daily message limit for Plus and Team users, to 150 messages per day. Only Pro users, who pay $200 per month (approximately 1,453 RMB), can use o3-mini without limits.
Free Users Can Try a Reasoning Model for the First Time
For the first time, free ChatGPT users can try one of OpenAI's reasoning models by simply selecting the "Reason" option in the chat input bar, with rate limits similar to the existing GPT-4o limits. This move gives more users access to advanced AI technology and its reasoning capabilities.
Conclusion
OpenAI's o3-mini AI reasoning model, with significant improvements in performance and accuracy, is now available even to free users. Developers and users at every tier can access the new model in different ways, bringing more convenience and possibilities for intelligent application development and everyday use.