- March 22, 2025: We release our models R1-VL-7B and R1-VL-2B.
- March 17, 2025: We release our paper on arXiv.
Recent studies generally enhance MLLMs' reasoning capabilities via supervised fine-tuning on high-quality chain-of-thought reasoning data, which often leads models to merely imitate successful reasoning paths without understanding what the wrong reasoning paths are. In this work, we aim to enhance MLLMs' reasoning ability beyond passively imitating positive reasoning paths. To this end, we design Step-wise Group Relative Policy Optimization (StepGRPO), a new online reinforcement learning framework that enables MLLMs to self-improve their reasoning ability via simple, effective, and dense step-wise rewarding. Specifically, StepGRPO introduces two novel rule-based reasoning rewards: the Step-wise Reasoning Accuracy Reward (StepRAR) and the Step-wise Reasoning Validity Reward (StepRVR). StepRAR rewards reasoning paths that contain the necessary intermediate reasoning steps via a soft key-step matching technique, while StepRVR rewards reasoning paths that follow a well-structured and logically consistent reasoning process through a reasoning completeness and logic evaluation strategy. With the proposed StepGRPO, we introduce R1-VL, a series of MLLMs with outstanding capabilities in step-by-step reasoning.
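The step-wise rewarding described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the similarity threshold, and the regex-based completeness heuristics are our own assumptions standing in for the soft key-step matching and the completeness/logic evaluation, and the final normalization shows the GRPO-style group-relative advantage.

```python
import re
from difflib import SequenceMatcher


def step_rar(path: str, key_steps: list[str], tau: float = 0.6) -> float:
    """Soft key-step matching (illustrative): fraction of reference key
    steps that approximately appear in some line of the sampled path."""
    if not key_steps:
        return 0.0
    lines = [seg.strip() for seg in path.split("\n") if seg.strip()]
    hits = 0
    for key in key_steps:
        best = max(
            SequenceMatcher(None, key.lower(), seg.lower()).ratio()
            for seg in lines
        )
        if best >= tau:  # tau is an assumed threshold, not from the paper
            hits += 1
    return hits / len(key_steps)


def step_rvr(path: str) -> float:
    """Completeness/logic check (illustrative heuristic): reward 1 if the
    path contains explicit reasoning steps followed by a final answer."""
    has_steps = bool(re.search(r"(?i)step\s*\d|first|then|therefore", path))
    lowered = path.lower()
    answer_last = "answer" in lowered and lowered.rfind("answer") > len(path) // 2
    return 1.0 if (has_steps and answer_last) else 0.0


def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """GRPO-style advantage: normalize each sampled path's reward
    against the mean and std of its sampling group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

In use, one would sample a group of reasoning paths per question, score each with a weighted sum of the two rewards, and normalize within the group, so every path receives a dense step-aware training signal rather than a single outcome reward.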
We use VLMEvalKit to evaluate our models on different benchmarks. Here, we provide the evaluation instructions.
First, install VLMEvalKit according to the official instructions.
Then, replace the necessary files as described in Mulberry.
Finally, perform evaluation with the following command:

```shell
python run.py --data MathVista_MINI --model R1-VL-7B --verbose
```
For more evaluation options, please refer to VLMEvalKit.
We conduct experiments with two strong baseline models, Qwen2-VL-2B and Qwen2-VL-7B. The main results comparing the R1-VL models with other state-of-the-art models across several widely adopted benchmarks are shown in the figure below. All experiments are conducted on 4 H100-80GB GPUs.
If you find our paper useful for your research, please consider citing it!
```bibtex
@article{zhang2025r1,
  title={R1-VL: Learning to Reason with Multimodal Large Language Models via Step-wise Group Relative Policy Optimization},
  author={Zhang, Jingyi and Huang, Jiaxing and Yao, Huanjin and Liu, Shunyu and Zhang, Xikun and Lu, Shijian and Tao, Dacheng},
  journal={arXiv preprint arXiv:2503.12937},
  year={2025}
}
```