In November 2025, the TRANS Research Group published its latest study, “Evolutionary Reinforcement Learning with Late-start Evolution and Clustering Archive,” in IEEE Transactions on Evolutionary Computation (IF: 12), a top-tier journal in evolutionary computation. The work introduces a new evolutionary reinforcement learning (ERL) framework, LCERL, that improves training stability, search efficiency, and generalization, marking a step toward deployable and scalable ERL systems for real-world intelligent decision-making. The study presents three core technical contributions:
- Enhanced Training Stability: A late-start strategy avoids contaminating training with low-quality early-stage exploration data, yielding more stable reinforcement learning.
- More Efficient Search: A double opposite proximal mutation operator generates high-quality candidate policies and adaptively adjusts mutation strength, enabling efficient search in high-dimensional spaces.
- Stronger Generalization: A phenotype-based clustering archive preserves behavioral diversity and provides diverse high-quality experiences, substantially improving generalization capability.
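To make the first two ideas concrete, the toy sketch below combines a late-start gate with an opposition-style mutation that tests a perturbation and its opposite, keeping the better candidate, with adaptively shrinking mutation strength. Everything here (the objective, the operator, the decay schedule, and all names) is an illustrative assumption for intuition only, not the paper's actual LCERL algorithm.

```python
import random

def fitness(w):
    # Toy objective standing in for an RL return: peak at w = 3.0.
    return -(w - 3.0) ** 2

def opposite_mutation(w, sigma):
    """Sample a perturbation, also evaluate its opposite direction,
    and keep whichever candidate scores higher. This is a loose,
    illustrative reading of "opposite" mutation; the paper's double
    opposite proximal mutation operator differs."""
    delta = random.gauss(0.0, sigma)
    return max(w + delta, w - delta, key=fitness)

def evolve(pop_size=20, generations=60, late_start=15, seed=0):
    random.seed(seed)
    pop = [random.uniform(-5.0, 5.0) for _ in range(pop_size)]
    sigma = 1.0  # mutation strength, adaptively decayed below
    for gen in range(generations):
        # Late-start gate: skip mutation-driven refinement during the
        # noisy early phase, mirroring the idea of delaying learning
        # until exploration data improves.
        if gen >= late_start:
            pop = [opposite_mutation(w, sigma) for w in pop]
            sigma *= 0.95  # shrink mutation strength over time
        # Truncation selection: keep the better half, refill by copying.
        pop.sort(key=fitness, reverse=True)
        pop = pop[:pop_size // 2] * 2
    return pop[0]

best = evolve()  # converges near the optimum at w = 3.0
```

Because each mutated individual keeps the better of the perturbation and its opposite, the search rarely moves away from the optimum once the mutation strength has decayed, which is the intuition behind opposition-based operators.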
The first author of this paper is Qiuting Cai, a Master’s student at the School of Future Technology, South China University of Technology. The corresponding author is Associate Professor Yahui Jia. Collaborating authors include Professor Shiqi Ou from the TRANS Research Group and Professor Weineng Chen from the School of Computer Science and Engineering, South China University of Technology.
