From Exploration Collapse to Predictable Limits: Shanghai AI Lab Proposes Entropy-Based Scaling Laws for Reinforcement Learning in LLMs

Recent advances in reasoning-centric large language models (LLMs) have expanded the scope of reinforcement learning (RL) beyond narrow, task-specific applications, enabling broader generalization and reasoning capabilities. However, this shift introduces significant challenges, particularly in scaling the training compute required for learning from experience. Unlike imitation learning through pre-training and fine-tuning, RL demands a more computationally intensive approach. A central issue is the decline in policy entropy, which affects the balance between exploiting known strategies and exploring new ones. This exploitation-exploration trade-off is fundamental in RL, and controlling policy entropy has become critical to maintaining effective exploration during training.
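To make the quantity at the center of this discussion concrete, the short PyTorch sketch below shows how token-level policy entropy is typically computed from a model's logits and averaged into the scalar tracked during RL training; the function name and tensor shapes are illustrative assumptions, not the paper's code. Entropy collapse corresponds to this value dropping toward zero early in training.

```python
# Minimal sketch (illustrative, not the paper's code): mean token-level policy entropy,
# H = -sum_v pi(v | context) * log pi(v | context), averaged over batch and positions.
import torch
import torch.nn.functional as F

def mean_policy_entropy(logits: torch.Tensor) -> torch.Tensor:
    """logits: [batch, seq_len, vocab_size] raw outputs of the policy model."""
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    token_entropy = -(probs * log_probs).sum(dim=-1)  # entropy at each position
    return token_entropy.mean()                       # scalar tracked across RL steps

# A sharply peaked (near-deterministic) policy has much lower entropy than a diffuse one.
print(mean_policy_entropy(torch.randn(2, 8, 32000)))       # diffuse: high entropy
print(mean_policy_entropy(50 * torch.randn(2, 8, 32000)))  # peaked: entropy near zero
```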

Existing efforts address the exploration-exploitation trade-off in RL by leveraging policy entropy. Maximum entropy RL adds an entropy regularization term to the reward, promoting uncertainty in action selection and encouraging broader exploration. While this technique is widely used in conventional RL algorithms, its value for LLMs remains debated. Moreover, predictability in RL for LLMs is largely unexplored: neural scaling laws guide LLM pre-training, but comparable predictive frameworks for RL training remain limited. Existing RL methods for LLMs with verifiable rewards show promising reasoning improvements, yet their core mechanisms are still poorly understood.
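As a point of reference for the maximum-entropy approach mentioned above, here is a hedged sketch of how an entropy bonus can be folded into per-token rewards; the helper name and the coefficient alpha are our own illustrative choices, not something prescribed by the paper.

```python
# Illustrative sketch of maximum-entropy-style regularization: augment each per-token reward
# with an entropy bonus, r' = r + alpha * H(pi(.|state)), to reward uncertainty in action
# selection and encourage exploration. Names and the alpha value are assumptions.
import torch
import torch.nn.functional as F

def entropy_regularized_rewards(rewards: torch.Tensor,  # [batch, seq_len] per-token rewards
                                logits: torch.Tensor,   # [batch, seq_len, vocab] policy logits
                                alpha: float = 0.01     # entropy coefficient (illustrative)
                                ) -> torch.Tensor:
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # H(pi(.|state)) at every position
    return rewards + alpha * entropy                      # entropy-augmented rewards
```

Whether such a uniform bonus is the right tool for LLMs is exactly what remains debated; the paper's own interventions, discussed below, are more targeted.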

Researchers from Shanghai AI Laboratory, Tsinghua University, UIUC, Peking University, Nanjing University, and CUHK propose an approach to address the collapse of policy entropy in RL for reasoning-centric LLMs. They establish a transformation equation, R = −a·exp(H) + b, where H is the policy entropy, R is downstream performance, and a and b are fitting coefficients. This empirical law strongly suggests that policy performance is traded against policy entropy and is therefore bottlenecked by its exhaustion. Investigating entropy dynamics, the researchers show that the change in policy entropy is driven by the covariance between an action's probability and the change in its logit. They also propose two techniques, Clip-Cov and KL-Cov, which respectively clip and apply a KL penalty to tokens with high covariance.
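To illustrate how a law of the form R = −a·exp(H) + b can be fitted and used for prediction, the sketch below runs scipy's curve_fit on made-up (entropy, performance) checkpoints; the numbers are invented purely for illustration and are not the paper's measurements.

```python
# Hedged sketch: fitting the empirical law R = -a * exp(H) + b to (entropy, performance)
# pairs logged at RL checkpoints. The data points below are fabricated for illustration.
import numpy as np
from scipy.optimize import curve_fit

def entropy_performance_law(H, a, b):
    return -a * np.exp(H) + b

H = np.array([1.2, 0.9, 0.6, 0.4, 0.25, 0.15, 0.08])      # policy entropy at checkpoints
R = np.array([0.22, 0.31, 0.38, 0.43, 0.46, 0.48, 0.49])  # downstream accuracy at checkpoints

(a, b), _ = curve_fit(entropy_performance_law, H, R)
print(f"fitted a = {a:.3f}, b = {b:.3f}")

# Once entropy is exhausted (H -> 0), the law predicts a performance ceiling of b - a,
# which is what makes the attainable performance predictable before training finishes.
print(f"predicted ceiling at H = 0: {entropy_performance_law(0.0, a, b):.3f}")
```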

To investigate and validate the entropy-collapse phenomenon in RL-tuned LLMs, the researchers applied RL to LLMs on verifiable tasks such as math and coding, using an autoregressive generation setup in which models produce token sequences from input prompts. The study involves 11 widely adopted open-source models spanning four families: Qwen2.5, Mistral, LLaMA, and DeepSeek, with sizes ranging from 0.5B to 32B parameters. Evaluations are performed on eight public benchmarks, including MATH500, AIME 2024, AMC, and Eurus-2-RL-Code. RL training follows the veRL framework in a zero-shot setting, utilizing algorithms such as GRPO, REINFORCE++, and PRIME to optimize policy performance while observing entropy dynamics.

The proposed Clip-Cov and KL-Cov techniques were evaluated on Qwen2.5 models using the DAPO-MATH dataset for math tasks, and they achieve non-trivial performance gains across all benchmarks. Compared to the GRPO baseline, they improve performance by 2.0% on average for the 7B model and by 6.4% for the 32B model. Both methods also maintain a higher level of entropy throughout training; for example, when the baseline's entropy reaches a plateau, KL-Cov still sustains an entropy level more than 10 times higher. Moreover, the gains are more substantial on the larger Qwen2.5-32B model, with improvements of 15.0% and 14.6% over GRPO on the most challenging benchmarks, AIME24 and AIME25, respectively.
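For readers who want a feel for what "clipping" or "KL-penalizing" high-covariance tokens might look like in practice, here is a deliberately simplified sketch based on our reading of the description above; the covariance surrogate, the selected fraction, and the KL term are assumptions, and the authors' actual implementation lives in their GitHub repository.

```python
# Simplified sketch (our reading, not the authors' code) of Clip-Cov and KL-Cov:
# score each token by the product of its centred log-probability and centred advantage,
# then either exclude the highest-scoring tokens from the policy-gradient update (Clip-Cov)
# or add a KL-style penalty toward the old policy on those tokens (KL-Cov).
import torch

def cov_scores(logp: torch.Tensor, adv: torch.Tensor) -> torch.Tensor:
    """Per-token covariance surrogate; logp and adv are flat [num_tokens] tensors."""
    return (logp - logp.mean()) * (adv - adv.mean())

def clip_cov_loss(logp, old_logp, adv, frac=0.002):
    ratio = (logp - old_logp).exp()
    pg = -(ratio * adv)                                   # plain policy-gradient surrogate
    k = max(1, int(frac * logp.numel()))
    idx = cov_scores(logp.detach(), adv).topk(k).indices  # highest-covariance tokens
    keep = torch.ones_like(pg)
    keep[idx] = 0.0                                       # clip: drop them from the update
    return (pg * keep).mean()

def kl_cov_loss(logp, old_logp, adv, frac=0.002, beta=1.0):
    ratio = (logp - old_logp).exp()
    pg = -(ratio * adv)
    k = max(1, int(frac * logp.numel()))
    idx = cov_scores(logp.detach(), adv).topk(k).indices
    penalty_mask = torch.zeros_like(pg)
    penalty_mask[idx] = 1.0                               # penalize only high-covariance tokens
    kl = logp - old_logp                                  # crude per-token KL surrogate
    return (pg + beta * penalty_mask * kl).mean()
```

The key design point, consistent with the paragraphs above, is that the intervention is targeted: only a small fraction of tokens with the largest covariance is touched, which is enough to keep entropy from collapsing without suppressing learning on the rest of the sequence.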

In conclusion, the researchers address the challenge of policy entropy collapse in RL for reasoning-centric LLMs. Their findings highlight a trade-off between performance improvement and diminished exploration that ultimately limits further gains. Through theoretical analysis and empirical validation, they identify entropy dynamics as a key bottleneck and propose two effective regularization strategies, Clip-Cov and KL-Cov, to manage high-covariance tokens and sustain exploration. As RL emerges as a crucial axis for scaling beyond pre-training, addressing entropy collapse becomes essential. This work provides foundational insights into the role of entropy, guiding future efforts to scale RL toward more intelligent and capable language models.


Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
