Learning to Take Cover on Geo-Specific Terrains via Reinforcement Learning

Authors

  • Timothy Aris, University of Southern California Institute for Creative Technologies
  • Volkan Ustun, University of Southern California Institute for Creative Technologies
  • Rajay Kumar, University of Southern California Institute for Creative Technologies

DOI:

https://doi.org/10.32473/flairs.v35i.130871

Keywords:

Reinforcement Learning, Behavior representation, Curriculum Learning, Ray casting

Abstract

This paper presents a reinforcement learning model designed to learn how to take cover on geo-specific terrains, an essential behavior component for military training simulations. The models are trained in the Rapid Integration and Development Environment (RIDE), leveraging the Unity ML-Agents framework. We show that increasing the number of novel situations the agent is exposed to during training improves its performance on the test set. In addition, the trained models possess some ability to generalize across terrains, and retraining an agent on a new terrain can take less time if that terrain is no more complex than the one the agent was previously trained on.

Published

2022-05-04

How to Cite

Aris, T., Ustun, V., & Kumar, R. (2022). Learning to Take Cover on Geo-Specific Terrains via Reinforcement Learning. The International FLAIRS Conference Proceedings, 35. https://doi.org/10.32473/flairs.v35i.130871

Section

Special Track: Artificial Intelligence in Games, Serious Games, and Multimedia