OpenAI has launched its o1 model, the first release from its internal Strawberry project, which is designed to improve reasoning in AI. According to the company, o1 outperforms its predecessors, exceeds the performance of human PhD holders on a benchmark of scientific problems, and delivers strong results on mathematics and programming tasks.
OpenAI Unveils o1 and o1-mini Models
On Thursday, OpenAI, which has backing from Microsoft, announced the release of its "Strawberry" line of artificial intelligence models. These models are built to spend more time processing a query before replying in order to tackle difficult problems.
The models, which were first reported by Reuters, can work through harder problems in science, coding, and mathematics than earlier models, according to the AI firm's blog post.
o1 and o1-mini Unveiled as Part of Strawberry Project
Inside the company, OpenAI referred to the project as Strawberry; the models unveiled on Thursday are called o1 and o1-mini. The company said o1 will be accessible through ChatGPT and its API.
One of OpenAI's researchers, Noam Brown, who worked on ways to improve the models' reasoning, confirmed in an X post that the newly released models are the Strawberry project.
The o1 model scored 83% on a qualifying exam for the International Mathematics Olympiad, far ahead of its predecessor, GPT-4o, according to OpenAI's blog post.
AI Model Surpasses PhDs in Science and Coding Competitions
The company also said the model outperformed humans with PhD degrees on a benchmark of scientific problems and improved its performance on competitive programming questions.
According to Brown, the models achieved these results by using a method called "chain-of-thought" reasoning. This method entails dividing complicated problems into smaller, more manageable parts.
AI Now Automatically Deconstructs Complex Problems
Researchers have shown that using the method as a prompting tactic helps artificial intelligence models perform better on complex problems. OpenAI has now automated this behavior, so the models break problems down on their own without extra prompting from the user.
"We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes," as stated by OpenAI.

