Alibaba’s Qwen team released another artificial intelligence (AI) model in the Qwen 2.5 family on Monday. Dubbed Qwen 2.5-VL-32B-Instruct, the AI model arrives with improved performance and optimisations. It is a vision language model with 32 billion parameters, joining the three billion, seven billion, and 72 billion parameter models in the Qwen 2.5 family. Like many of the team’s previous models, it is an open AI model available under a permissive licence.
Alibaba Releases Qwen 2.5-VL-32B AI Model
In a blog post, the Qwen team detailed the company’s latest vision language model (VLM). It is more capable than the Qwen 2.5 3B and 7B models, and smaller than the flagship 72B model. Older models in the line were claimed to outperform DeepSeek-V3, and the new 32B model is said to outperform similarly sized systems from Google and Mistral.
Coming to its features, Qwen 2.5-VL-32B-Instruct has an adjusted output style that produces more detailed and better-formatted responses, which the researchers claim align closely with human preferences. Mathematical reasoning has also been improved, allowing the model to solve more complex problems.
Accuracy in image understanding and reasoning-focused analysis, including image parsing, content recognition, and visual logic deduction, has also been improved.
(Image: Qwen 2.5-VL-32B-Instruct. Photo Credit: Qwen)
Based on internal testing, Qwen 2.5-VL-32B is claimed to surpass comparable models, such as Mistral-Small-3.1-24B and Google’s Gemma-3-27B, on the MMMU, MMMU-Pro, and MathVista benchmarks. Interestingly, the model was also claimed to outperform the much larger Qwen 2-VL-72B model on MM-MT-Bench.
The Qwen team highlights that the latest model can act directly as a visual agent that can reason and direct tools, making it inherently capable of computer use and phone use. It accepts text, images, and videos longer than an hour as input, and it supports structured outputs such as JSON.
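For readers who want to try the model, the snippet below is a minimal sketch of how a multimodal request with a structured-output prompt might look through Hugging Face Transformers, following the usage pattern published on the model’s Hugging Face card; the image path and the JSON-extraction prompt are placeholder assumptions, not part of Qwen’s announcement.

```python
# Minimal sketch, assuming the transformers and qwen-vl-utils packages.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the checkpoint from Hugging Face; device_map="auto" spreads the
# 32B weights across whatever accelerators are available.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-32B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-32B-Instruct")

# One multimodal chat turn: an image plus an instruction asking for JSON.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/receipt.jpg"},  # placeholder path
            {"type": "text", "text": "Extract the merchant, date, and total as JSON."},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding the reply.
output_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```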
The baseline architecture and training remain the same as in the older Qwen 2.5 models; however, the researchers implemented dynamic frame rate (fps) sampling, which enables the model to comprehend videos at varying sampling rates. Another enhancement lets it pinpoint specific moments in a video by developing an understanding of temporal sequence and speed.
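The frame rate is exposed at inference time through the video input itself. The sketch below, reusing the model and processor objects from the earlier snippet, shows one way a per-video fps value could be passed; the clip path, frame rate, and question are illustrative assumptions.

```python
# Continues from the previous snippet (model and processor already loaded).
messages = [
    {
        "role": "user",
        "content": [
            # "fps" controls how densely frames are sampled from the clip;
            # the path and value here are placeholders.
            {"type": "video", "video": "file:///path/to/clip.mp4", "fps": 2.0},
            {"type": "text", "text": "At what moment does the presenter switch slides?"},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# return_video_kwargs=True also returns the sampling metadata the processor
# needs to align the sampled frames with the model's temporal encodings.
image_inputs, video_inputs, video_kwargs = process_vision_info(
    messages, return_video_kwargs=True
)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt", **video_kwargs,
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```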
Qwen 2.5-VL-32B-Instruct is available to download from GitHub and its Hugging Face listing. The model is released under the Apache 2.0 licence, which allows both academic and commercial use.
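For those who only want to fetch the weights without loading the model, one possible approach (an assumption on our part, not a step from Qwen’s post) is the huggingface_hub snapshot API:

```python
from huggingface_hub import snapshot_download

# Downloads every file in the repository to the local cache; expect
# tens of gigabytes for a 32-billion-parameter checkpoint.
local_dir = snapshot_download("Qwen/Qwen2.5-VL-32B-Instruct")
print("Model files cached at:", local_dir)
```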