
OpenAI introduces two open-weight AI reasoning models
06 Aug 2025, 10:36 AM
As part of the launch, Microsoft is also bringing GPU-optimized versions of the gpt-oss-20b model to Windows devices.
Team Head&Tale
OpenAI has launched two open-weight AI reasoning models with capabilities similar to those of its o-series models.
The two models -- gpt-oss-120b and gpt-oss-20b -- are freely available for download on Hugging Face and come natively quantized in MXFP4, the company said in a statement.
The gpt-oss-120b model runs within 80GB of memory, while gpt-oss-20b only requires 16GB.
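Those memory figures are consistent with MXFP4's storage cost. Assuming MXFP4 stores 4-bit values in blocks of 32 elements with a shared 8-bit scale (roughly 4.25 bits per weight, per the OCP Microscaling format), a back-of-envelope Python sketch shows the weight-only footprint fitting comfortably under the stated limits, leaving headroom for activations and the KV cache:

```python
def approx_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough weight-only memory estimate in GB (ignores activations and KV cache)."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumption: MXFP4 ~ 4-bit values + one 8-bit scale shared per 32-element block,
# i.e. about 4 + 8/32 = 4.25 bits per weight.
MXFP4_BITS = 4 + 8 / 32

print(round(approx_weight_gb(120e9, MXFP4_BITS), 2))  # gpt-oss-120b weights, ~63.75 GB
print(round(approx_weight_gb(20e9, MXFP4_BITS), 2))   # gpt-oss-20b weights, ~10.62 GB
print(round(approx_weight_gb(120e9, 16), 2))          # same 120B model in FP16, ~240 GB
```

The FP16 comparison illustrates the point of native quantization: without it, the 120b model's weights alone would far exceed a single 80GB accelerator.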
The gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks while the gpt-oss-20b model delivers similar results to OpenAI o3‑mini on common benchmarks, it claimed.
It further explained that gpt-oss-120b outperforms OpenAI o3‑mini and matches or exceeds OpenAI o4-mini on competition coding, general problem solving, and tool calling, while doing even better than o4-mini on health-related queries and competition mathematics. Similarly, gpt-oss-20b matches or exceeds OpenAI o3‑mini on these same evaluations despite its small size, it added.
As part of the launch, Microsoft is also bringing GPU-optimized versions of the gpt-oss-20b model to Windows devices, it said.
In its early days, OpenAI favoured open-sourcing its AI models, but it later shifted to a closed-source development approach.
But with increasing competition from Chinese AI labs such as DeepSeek, Alibaba's Qwen and Moonshot AI, which have released popular open models, OpenAI may be changing gears.
In February, OpenAI CEO Sam Altman said he personally thought the company had been on the wrong side of history in terms of its open-source strategy, though not everyone at OpenAI shared his view.