A Closer Look at Invalid Action Masking in Policy Gradient Algorithms

Authors

  • Shengyi Huang, Drexel University
  • Santiago Ontañón, Drexel University

DOI:

https://doi.org/10.32473/flairs.v35i.130584

Keywords:

Reinforcement Learning, Deep Learning, Deep Reinforcement Learning, Real-time Strategy Games, Implementation Details, Invalid Action Masking

Abstract

In recent years, Deep Reinforcement Learning (DRL) algorithms have achieved state-of-the-art performance in many challenging strategy games. Because these games have complicated rules, an action sampled from the full discrete action distribution predicted by the learned policy is likely to be invalid according to the game rules (e.g., walking into a wall). The usual approach to deal with this problem in policy gradient algorithms is to “mask out” invalid actions and just sample from the set of valid actions. The implications of this process, however, remain under-investigated. In this paper, we 1) show theoretical justification for such a practice, 2) empirically demonstrate its importance as the space of invalid actions grows, and 3) provide further insights by evaluating different action masking regimes, such as removing masking after an agent has been trained using masking.
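The "mask out" operation described above is commonly implemented by overwriting the logits of invalid actions with a large negative constant before the softmax, so that sampling effectively never selects them. A minimal, self-contained sketch of this idea (illustrative only, not the authors' implementation; names and values are assumptions):

```python
import math

def masked_softmax(logits, mask):
    # Replace invalid-action logits with a large negative constant so
    # their post-softmax probability is effectively zero.
    NEG_INF = -1e8
    masked = [l if m else NEG_INF for l, m in zip(logits, mask)]
    mx = max(masked)  # subtract max for numerical stability
    exps = [math.exp(v - mx) for v in masked]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.0, 2.0, 0.5, -1.0]
mask = [True, False, True, True]  # action 1 is invalid under the game rules
probs = masked_softmax(logits, mask)
```

Sampling from `probs` then only ever returns valid actions, which is the practice whose gradient implications the paper analyzes.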


Published

2022-05-04

How to Cite

Huang, S., & Ontañón, S. (2022). A Closer Look at Invalid Action Masking in Policy Gradient Algorithms. The International FLAIRS Conference Proceedings, 35. https://doi.org/10.32473/flairs.v35i.130584

Section

Special Track: Artificial Intelligence in Games, Serious Games, and Multimedia