Learning to Take Cover on Geo-Specific Terrains via Reinforcement Learning
DOI: https://doi.org/10.32473/flairs.v35i.130871
Keywords: Reinforcement Learning, Behavior Representation, Curriculum Learning, Ray Casting
Abstract
This paper presents a reinforcement learning model designed to learn how to take cover on geo-specific terrains, an essential behavior component for military training simulations. The models are trained in the Rapid Integration and Development Environment (RIDE), leveraging the Unity ML-Agents framework. We show that increasing the number of novel situations the agent is exposed to during training improves performance on the test set. In addition, the trained models possess some ability to generalize across terrains, and retraining an agent on a new terrain can take less time if that terrain's complexity is less than or equal to that of the terrain the agent was previously trained on.
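The abstract's central finding is that exposing the agent to a growing pool of novel start situations during training improves test performance. A minimal sketch of one way such a curriculum could be scheduled is shown below; the situation list, pool-growth schedule, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import random

def curriculum_pool(all_situations, lesson, lessons_total):
    """Return the subset of start situations unlocked at a given lesson.

    Hypothetical schedule: the pool grows linearly with the lesson index,
    so later lessons expose the agent to more novel situations.
    """
    frac = (lesson + 1) / lessons_total
    n = max(1, int(len(all_situations) * frac))
    return all_situations[:n]

def sample_episode_start(all_situations, lesson, lessons_total, rng=random):
    """Pick one spawn situation uniformly from the unlocked pool."""
    return rng.choice(curriculum_pool(all_situations, lesson, lessons_total))

# Hypothetical spawn situations: (x, z) start positions on a terrain grid.
situations = [(x, z) for x in range(10) for z in range(10)]

# Early lessons draw from a small pool; the final lesson uses all 100.
print(len(curriculum_pool(situations, 0, 10)))   # 10
print(len(curriculum_pool(situations, 9, 10)))   # 100
```

In practice, Unity ML-Agents supports curriculum learning through lesson thresholds in its trainer configuration; the sketch above only illustrates the general idea of widening the distribution of training situations over time.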
(c) All rights reserved Timothy Aris, Volkan Ustun, Rajay Kumar 2022
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International license.