Shuffle-R1: Efficient RL framework for Multimodal Large Language Models via Data-centric Dynamic Shuffle

Linghao Zhu1, Yiran Guan1, Dingkang Liang1, Jianzhong Ju2, Zhenbo Luo2, Bin Qin2, Jian Luan2, Yuliang Liu1, Xiang Bai1

1Huazhong University of Science and Technology
2MiLM Plus, Xiaomi Inc.

🔗 Links

Release: Our official code is undergoing internal review and is expected to be open-sourced within 1–2 weeks. We will release model checkpoints, training data, and training/inference/evaluation scripts. Please stay tuned!

📄 Abstract

Reinforcement learning (RL) has emerged as an effective post-training paradigm for enhancing the reasoning capabilities of multimodal large language models (MLLMs). However, current RL pipelines often suffer from training inefficiencies caused by two underexplored issues: Advantage Collapsing, where most advantages in a batch concentrate near zero, and Rollout Silencing, where the proportion of rollouts contributing non-zero gradients diminishes over time. These issues lead to suboptimal gradient updates and hinder long-term learning efficiency. To address them, we propose Shuffle-R1, a simple yet principled framework that improves RL fine-tuning efficiency by dynamically restructuring trajectory sampling and batch composition. It introduces (1) Pairwise Trajectory Sampling, which selects high-contrast trajectories with large advantages to improve gradient signal quality, and (2) Advantage-based Trajectory Shuffle, which increases exposure of valuable rollouts through informed batch reshuffling. Experiments across multiple reasoning benchmarks show that our framework consistently outperforms strong RL baselines with minimal overhead. These results highlight the importance of data-centric adaptations for more efficient RL training in MLLMs.

TL;DR: We propose Shuffle-R1, a simple and effective RL post-training framework for MLLM that significantly improves RL training efficiency and model performance.
⚙ Method
Framework Overview
Fig. 1: Framework Overview of Shuffle-R1

Pairwise Trajectory Sampling (PTS)

The goal of PTS is to mitigate the Advantage Collapsing issue, where most trajectory advantages are close to zero, by organizing candidate trajectories into contrastive pairs to enhance the signal quality of high-advantage trajectories.
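For concreteness, since Shuffle-R1 builds on GRPO-style group rollouts, the trajectory advantage $\hat{A}_i$ can be assumed to take the standard group-normalized form; this is a reference formula under that assumption, not a quote from the paper:
$$\hat{A}_i = \frac{r_i - \operatorname{mean}\big(\{ r_j \}_{j=1}^{2N}\big)}{\operatorname{std}\big(\{ r_j \}_{j=1}^{2N}\big)},$$
where $r_i$ denotes the scalar reward of trajectory $o_i$.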

Given a query $q$ and a rollout size of $2N$, denote the group of rollout trajectories as $O = \{ o_i \}_{i=1}^{2N}$ and the sorted advantage set as:
$$A_s = \{ \hat{A}_{(i)} \}_{i=1}^{2N}, \quad \text{where } \hat{A}_{(1)} \geq \hat{A}_{(2)} \geq \cdots \geq \hat{A}_{(2N)}.$$
Based on this ordering, we can construct the pairing set as:
$$P = \{ (o_{(i)}, o_{(2N - i + 1)}) \}_{i=1}^{N}.$$
We apply a simple top-$k$ selection strategy to obtain a subset of valid pairs $P_v \subseteq P$:
$$P_v = \{ (o_{(i)}, o_{(2N - i + 1)}) \}_{i=1}^{M}, \quad M = \alpha N,\ \alpha \in (0,1).$$

Without significantly increasing computational cost, PTS selects strongly contrasting trajectory pairs with rich gradient information from a larger exploration space, improving the effectiveness of the policy gradient and the utilization of data.
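A minimal sketch of the PTS selection step, assuming advantages are already attached to each rollout; the `Rollout` dataclass and `pts_select` names are illustrative and not taken from the released code:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Rollout:
    tokens: list          # generated trajectory
    advantage: float      # e.g., group-normalized reward as sketched above

def pts_select(rollouts: List[Rollout], alpha: float = 0.5) -> List[Tuple[Rollout, Rollout]]:
    """Pair the i-th largest advantage with the i-th smallest, then keep the
    top M = alpha * N most contrastive pairs."""
    assert len(rollouts) % 2 == 0, "rollout size must be 2N"
    n = len(rollouts) // 2
    ordered = sorted(rollouts, key=lambda r: r.advantage, reverse=True)
    # Pair o_(i) with o_(2N - i + 1): highest with lowest, and so on inward.
    pairs = [(ordered[i], ordered[len(ordered) - 1 - i]) for i in range(n)]
    # Keep the first M pairs, i.e., those anchored on the largest advantages.
    m = max(1, int(alpha * n))
    return pairs[:m]
```

For example, with $2N = 8$ rollouts and $\alpha = 0.5$, only the $M = 2$ most contrastive pairs (4 trajectories) enter the training batch.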

Advantage-based Batch Shuffle (ABS)

To mitigate the Rollout Silencing issue, we introduce ABS, which reorders the training batch dynamically based on the advantages of the paired trajectories.

Denote the original batch as:
$$B = \{ p_i^g=(o_{i,1}^g, \hat{A}_{i,1}^g, o_{i,2}^g, \hat{A}_{i,2}^g, q^g) \}_{i=1\ldots M,\ g=1\ldots G},$$
The batch size is $M \times G$ pairs, which would be divided into $K$ mini-batches for standard updates.
For each trajectory pair $p_j$, we assign an importance weight:
$$W(p_j) = |\hat{A}_{j,1}| + |\hat{A}_{j,2}|.$$
Then, the sampling probability of $p_j$ is:
$$\Phi(p_j) = \frac{W(p_j)}{\sum_{p_k \in B} W(p_k)}.$$
Based on the sampling probability, we draw $S$ sub-samples from the original batch $B$, each containing $T$ pairs (i.e., $2T$ trajectories):
$$B_s = \{ p_{s,t} \}_{t=1}^{T}, \quad \text{s.t. } p_{s,t} \neq p_{s,t'} , \forall\, t \neq t'.$$
We merge all sub-sampled batches to obtain the shuffled batch:
$$B' = \bigcup_{s=1}^S B_s, \quad \text{and } |B'| = |B|,\ \text{i.e., } S \times T = MG.$$

By introducing ABS, we reshape the batch distribution into a "soft-prioritized" structure: diversity is preserved while high-value samples can be exposed multiple times, which improves data utilization and mitigates Rollout Silencing.
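A minimal sketch of the ABS re-composition step, reusing the `Rollout` pairs produced by the PTS sketch above; helper names are illustrative and not from the released code:

```python
import random
from typing import List, Tuple

Pair = Tuple[Rollout, Rollout]  # assumes the Rollout dataclass defined earlier

def weighted_sample_without_replacement(weights: List[float], k: int) -> List[int]:
    """Draw k distinct indices with probability proportional to `weights`."""
    remaining = list(range(len(weights)))
    picked = []
    for _ in range(k):
        w = [weights[i] for i in remaining]
        j = random.choices(range(len(remaining)), weights=w, k=1)[0]
        picked.append(remaining.pop(j))
    return picked

def abs_shuffle(batch: List[Pair], num_subsamples: int, pairs_per_subsample: int) -> List[Pair]:
    """Re-compose the batch from S sub-samples of T distinct pairs each,
    drawn with probability proportional to |A_1| + |A_2| (Phi in the text)."""
    assert num_subsamples * pairs_per_subsample == len(batch)  # S * T = MG
    weights = [abs(a.advantage) + abs(b.advantage) for a, b in batch]
    if sum(weights) == 0:          # degenerate case: fall back to uniform sampling
        weights = [1.0] * len(batch)

    shuffled: List[Pair] = []
    for _ in range(num_subsamples):
        # Pairs are distinct within a sub-batch, but high-advantage pairs can
        # reappear across sub-batches ("soft prioritization").
        idx = weighted_sample_without_replacement(weights, pairs_per_subsample)
        shuffled.extend(batch[i] for i in idx)
    return shuffled
```

Sampling with raw weights is equivalent to using the normalized probabilities $\Phi(p_j)$, since `random.choices` normalizes internally.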

📊 Performance

Performance Overview

Table 1: Performance of Shuffle-R1.

| Model | MathVerse | MathVision | MathVista (mini) | WeMath (loose) | HallusionBench | ChartQA | Average |
|---|---|---|---|---|---|---|---|
| Qwen2.5-VL-3B | 34.8 | 21.9 | 58.4 | 51.7 | 59.8 | 73.1 | 49.9 |
| Qwen2.5-VL-7B | 42.6 | 25.8 | 67.4 | 63.5 | 65.2 | 79.8 | 57.4 |
| Shuffle-R1-3B | 44.2 | 26.8 | 70.4 | 66.5 | 69.2 | 79.9 | 59.5 |
| Shuffle-R1-7B | 53.9 | 30.0 | 77.0 | 72.3 | 71.0 | 84.1 | 64.7 |

All models are evaluated with CoT prompting.

Comparison with GRPO

Fig. 2: Training dynamics of Shuffle-R1.

Left: Training accuracy of Shuffle-R1 compared with GRPO.

Middle: Validation accuracy of Shuffle-R1 compared with GRPO. Our framework surpasses GRPO with only half the training steps.

Right: Ratio of rollouts with non-zero gradients. Our framework maintains a very high ratio throughout training, showing better data efficiency than GRPO.

🙏 Acknowledgment

Our work benefits from the following open-source projects:

📚 Citation
@misc{zhu2025shuffler1,
    title={Shuffle-R1: Efficient RL framework for Multimodal Large Language Models via Data-centric Dynamic Shuffle},
    author={Linghao Zhu and Yiran Guan and Dingkang Liang and Jianzhong Ju and Zhenbo Luo and Bin Qin and Jian Luan and Yuliang Liu and Xiang Bai},
    year={2025},
    eprint={2508.05612},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2508.05612},
}
    
