Alibaba launches Qwen3 ‘hybrid’ AI model to challenge OpenAI, Google
Qwen3 models support 119 languages and were trained on 36 trillion tokens sourced from textbooks, code, question-answer datasets, and AI-generated material
Alibaba Group unveiled Qwen3, a new family of large language models designed to compete with leading AI systems from OpenAI and Google.
The release includes eight models, ranging from 0.6 billion to 235 billion parameters, spanning both dense and mixture-of-experts (MoE) architectures.
According to the Chinese tech giant, Qwen3 matches or outperforms Google’s Gemini 2.5 Pro and OpenAI’s o3-mini on several key benchmarks, including coding, mathematical reasoning, and complex problem-solving.
The Qwen3 models, which support 119 languages, were trained on 36 trillion tokens drawn from question-answer datasets, code, textbooks, and AI-generated content.
In contrast to some of its rivals, Alibaba has released a number of Qwen3 models as open-weight, free to download on GitHub and Hugging Face.
For now, the flagship Qwen3-235B-A22B model, which posted the highest benchmark scores, remains limited in availability.
Qwen3’s “hybrid reasoning” approach lets users switch between a slower, deeper reasoning mode that maximizes accuracy and faster, non-reasoning outputs. Alibaba says this flexibility improves efficiency and gives users greater control over how the AI operates.
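For developers, the switch is exposed when the prompt is constructed. The snippet below is a minimal sketch, assuming the standard Hugging Face transformers interface and the enable_thinking flag described in Qwen’s model cards; the model ID shown is one of the smaller open-weight releases and may differ across versions.

```python
# Minimal sketch: loading an open-weight Qwen3 model and toggling
# its reasoning mode. Assumes the enable_thinking flag from Qwen's
# published model cards; model ID is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-0.6B"  # one of the smaller open-weight checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 24?"}]

# enable_thinking=True  -> slower, step-by-step reasoning before answering
# enable_thinking=False -> faster, direct response with no reasoning trace
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```

In this design the mode is a per-request choice rather than a separate model, which is what lets a single deployment serve both quick responses and deliberate reasoning.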
The release intensifies competition in China’s AI market, following recent launches by DeepSeek and Baidu.
Tightened US export restrictions on advanced chips to China, however, may affect future model training.
Available through cloud providers such as Fireworks AI and Hyperbolic, Qwen3 gives companies new alternatives to proprietary US AI systems.
Industry watchers say Alibaba’s move signals a rapid global convergence of open and closed AI models.
“Qwen3’s performance demonstrates that open models are keeping pace,” stated Baseten CEO Tuhin Srivastava.
Alibaba claims that Qwen3 marks a substantial improvement in reasoning ability, coding proficiency, and multilingual support over its predecessor Qwen2.5-Max, first released in January.