Learning to Take Cover with Navigation-Based Waypoints via Reinforcement Learning

Authors

  • Timothy Aris, U.S. Army Combat Capabilities Development Command – Soldier Center (DEVCOM SC) Simulation and Training Technology Center (STTC)
  • Volkan Ustun, University of Southern California Institute for Creative Technologies
  • Rajay Kumar, University of Southern California Institute for Creative Technologies

DOI:

https://doi.org/10.32473/flairs.36.133348

Keywords:

Reinforcement Learning, Ray casting, Waypoints, Behavior representation, Navmesh

Abstract

This paper presents a reinforcement learning model designed to learn how to take cover on geo-specific terrains, an essential behavior component for military training simulations. The models are trained in the Rapid Integration and Development Environment (RIDE), leveraging the Unity ML-Agents framework. This work extends our earlier raycast-based agents by increasing the number of enemies from one to three. We demonstrate an automated way of generating training and testing data within geo-specific terrains. We show that replacing the action space with a more abstract, navmesh-based waypoint movement system can increase the generality and success rate of the models while matching our previous paper's findings on retraining across terrains. We also comprehensively evaluate the differences between these models and the previous ones. Finally, we show that incorporating pixels into the model's input can increase performance at the cost of longer training times.
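The navmesh-based waypoint action space described above can be illustrated with a minimal, self-contained sketch. This is an assumption-laden illustration, not the paper's implementation: the function names (`candidate_waypoints`, `apply_action`) are hypothetical, and in the actual Unity environment each candidate point would additionally be snapped to the navmesh (e.g., via Unity's NavMesh sampling) rather than taken directly from a geometric ring.

```python
import math

def candidate_waypoints(pos, radius=5.0, n=8):
    """Return n candidate waypoints evenly spaced on a circle around pos.

    In the real environment each point would be projected onto the navmesh,
    so only reachable destinations remain; here we keep pure geometry.
    """
    x, y = pos
    return [
        (x + radius * math.cos(2 * math.pi * k / n),
         y + radius * math.sin(2 * math.pi * k / n))
        for k in range(n)
    ]

def apply_action(pos, action_index, radius=5.0, n=8):
    """Map a discrete action index (0..n-1) to the chosen waypoint.

    The agent then navigates to this destination with the engine's
    pathfinding, abstracting away low-level movement commands.
    """
    return candidate_waypoints(pos, radius, n)[action_index]
```

The key design point is that the policy now outputs a small discrete choice among navigable destinations instead of continuous motor commands, which is what the abstract credits for improved generality across terrains.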

Published

08-05-2023

How to Cite

Aris, T., Ustun, V., & Kumar, R. (2023). Learning to Take Cover with Navigation-Based Waypoints via Reinforcement Learning. The International FLAIRS Conference Proceedings, 36(1). https://doi.org/10.32473/flairs.36.133348

Section

Special Track: Artificial Intelligence in Games, Serious Games, and Multimedia