OpenAI Unveils GPT-OSS 120B and 20B Ahead of GPT-5 Launch

OpenAI releases two open-weight models under the Apache 2.0 licence, aiming to reclaim leadership in open AI before the GPT-5 debut.
In a move that has stirred excitement across the tech world, OpenAI has launched two open-weight models, GPT-OSS 120B and GPT-OSS 20B, just ahead of the anticipated release of GPT-5. This marks OpenAI’s first open-weight language model release since GPT-2 more than five years ago and represents a strategic pivot back into the open-source ecosystem.
Available for download on Hugging Face under the permissive Apache 2.0 licence, the models are positioned as powerful tools for developers, startups, and enterprises looking to build agent-style AI systems with strong reasoning capabilities, all without hefty licensing costs.
Optimized for Versatility and Accessibility
The two new models serve distinct use cases. The larger GPT-OSS 120B is optimized to run on a single 80GB Nvidia GPU, while the more compact GPT-OSS 20B can operate on standard laptops with just 16GB of memory. Both models are text-only, lacking multimodal capabilities such as image or audio generation.
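For developers who want to try the smaller model locally, the standard Hugging Face workflow applies. The snippet below is a minimal sketch, assuming the weights are published under the openai/gpt-oss-20b repository id; the prompt and generation settings are illustrative.

```python
# Minimal sketch: running GPT-OSS 20B locally with Hugging Face transformers.
# Assumes the weights live under the "openai/gpt-oss-20b" repo id; adjust the
# id, dtype, and device settings to match your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # GPU if available, otherwise CPU
)

messages = [{"role": "user", "content": "Summarise what a Mixture-of-Experts model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```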
Despite their comparatively modest footprints, both models are tailored for sophisticated reasoning workflows. They can also act as intelligent agents, routing user queries to more capable closed-source OpenAI models via API and effectively serving as smart intermediaries.
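That routing pattern is simple to sketch in code. The example below is hypothetical: it assumes the open model is served locally behind an OpenAI-compatible endpoint (as servers such as vLLM expose) and escalates queries a toy heuristic flags as hard to a hosted model. The heuristic, endpoint, and model names are illustrative assumptions, not OpenAI’s design.

```python
# Hypothetical routing sketch: answer locally with an open-weight model and
# escalate to a hosted OpenAI model when the query looks too hard.
# The endpoint, heuristic, and model names are illustrative assumptions.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")  # e.g. a local vLLM server
hosted = OpenAI()  # reads OPENAI_API_KEY from the environment

def needs_escalation(query: str) -> bool:
    # Toy heuristic; a real router might use the local model's own
    # confidence signals or a trained classifier instead.
    return len(query) > 500 or "prove" in query.lower()

def answer(query: str) -> str:
    if needs_escalation(query):
        client, model = hosted, "o4-mini"
    else:
        client, model = local, "gpt-oss-20b"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content
```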
At the core of both models is a Mixture-of-Experts (MoE) architecture that activates only a small subset of parameters for any given token (around 5.1 billion of the 120B version’s roughly 117 billion total). This design boosts throughput and efficiency, while post-training with high-compute reinforcement learning further aligns the models with OpenAI’s higher-tier "o-series" reasoning models.
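To make the sparse-activation idea concrete, here is a toy top-k MoE layer in PyTorch. It illustrates only the general mechanism, in which a router scores the experts for each token and just the top-scoring few actually run; the dimensions, expert count, and routing details are invented for illustration and do not reflect GPT-OSS’s actual configuration.

```python
# Toy mixture-of-experts layer illustrating sparse activation: a router picks
# the top-k experts per token, so only a fraction of the parameters run for
# any given token. Sizes and expert counts are illustrative, not GPT-OSS's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        scores, idx = self.router(x).topk(self.k, dim=-1)  # (n_tokens, k)
        weights = F.softmax(scores, dim=-1)  # normalise over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):  # only the selected experts ever run
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[int(e)](x[mask])
        return out

moe = TopKMoE()
tokens = torch.randn(4, 512)
print(moe(tokens).shape)  # torch.Size([4, 512])
```

Because only k of the n experts run per token, compute per token scales with k rather than with the full parameter count, which is how a 120B-parameter model can activate just 5.1 billion parameters at a time.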
Benchmarks and Limitations
OpenAI claims these models set a new standard for open-weight systems. On Codeforces, a competitive programming benchmark, GPT-OSS 120B scored 2622 and the smaller 20B scored 2516, both outperforming open rivals like DeepSeek R1 while still trailing OpenAI’s proprietary o3 and o4-mini models.
However, accuracy remains a concern. According to TechCrunch, hallucination rates on OpenAI’s PersonQA benchmark were 49% for GPT-OSS 120B and 53% for the 20B variant, far higher than the 16% rate of the older o1 model and above even the 36% recorded for o4-mini. OpenAI attributes this to reduced parameter activation and the narrower world knowledge of smaller open models, a known trade-off relative to frontier systems.
Safety, Licensing, and Strategic Positioning
OpenAI addressed safety concerns in an accompanying white paper, noting that both internal and third-party evaluations were conducted to assess risks like cyber misuse or biosecurity threats. While GPT-OSS might marginally boost a bad actor's knowledge, OpenAI concluded it doesn’t reach the “high capability” danger threshold even after fine-tuning.
Unlike some open-source labs, OpenAI has opted not to release its training datasets, likely because of ongoing legal challenges around copyright in AI model training. Still, the Apache 2.0 licence allows unrestricted commercial and personal use, giving developers considerable freedom in how they deploy the models.
This release also reflects OpenAI’s changing stance on openness. After years of maintaining a proprietary grip, CEO Sam Altman acknowledged that the company may have been “on the wrong side of history” regarding transparency. With growing pressure from the U.S. administration and intensifying competition from Chinese tech firms like DeepSeek, Moonshot AI, and Alibaba’s Qwen, this open release is a clear bid to reclaim leadership in the global AI race.