
Real-time strategy (RTS) games are among the most challenging tasks for AI because of their large action spaces, long-term strategic planning, and multi-agent cooperation requirements. Conventional deep reinforcement learning (DRL) methods are effective but face limitations in scalability, computational cost, generalization, and interpretability. We present a deep reinforcement learning framework that addresses these hurdles by improving multi-agent coordination and sample efficiency and by employing explainable AI (XAI) techniques to make the model's decision-making more interpretable. In contrast to existing methods, which demand large amounts of computation and are severely limited in long-term strategic adaptation, our design combines hierarchical learning, curriculum-based reward shaping across staged proxy games, and Bayesian uncertainty estimation that focuses exploration on action regions consistent with evolving game dynamics, thereby enabling rapid adaptation to new situations in RTS games. We also propose dynamic action-pruning methods that reduce redundant action-space representations and enhance real-time decision-making. We validate the proposed model across diverse RTS environments, where it generalizes better, trains faster, and exhibits richer strategic depth than existing state-of-the-art DRL models. This study closes the gap between theoretical advances and practical RTS applications, introducing an efficient, interpretable, and scalable solution for AI-driven RTS game strategies.
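The abstract does not specify how dynamic action pruning is implemented. Purely as an illustrative sketch, one common realization is to mask out low-value or invalid actions before sampling from the policy; the function names, the top-k thresholding rule, and all parameters below are assumptions, not the authors' actual algorithm:

```python
import numpy as np

def prune_actions(q_values, valid_mask, keep_fraction=0.25):
    """Illustrative dynamic action pruning: keep only the top
    fraction of currently valid actions by estimated value.
    (Hypothetical scheme, not taken from the paper.)"""
    q = np.where(valid_mask, q_values, -np.inf)
    # Keep at least one action; k scales with the valid-action count
    k = max(1, int(keep_fraction * valid_mask.sum()))
    top = np.argpartition(q, -k)[-k:]  # indices of k highest-value valid actions
    pruned = np.zeros_like(valid_mask, dtype=bool)
    pruned[top] = valid_mask[top]
    return pruned

def sample_action(q_values, pruned_mask, temperature=1.0, rng=None):
    """Softmax sampling restricted to the pruned action set."""
    rng = rng or np.random.default_rng()
    q = np.where(pruned_mask, q_values / temperature, -np.inf)
    p = np.exp(q - q.max())  # stable softmax over surviving actions
    p /= p.sum()
    return rng.choice(len(q_values), p=p)
```

Shrinking the candidate set each step in this way keeps per-decision latency low, which is the real-time benefit the abstract attributes to action pruning.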
Authors: E. Purushotham, Felcy Judith P, V. Thrimurthulu, Dokku Durga Bhavani, Mohit Tiwari, Thirumani Thangam
DOI: https://doi.org/10.1051/itmconf/20257601006
Published: 2025