Learning to Take Cover with Navigation-Based Waypoints via Reinforcement Learning

Authors

  • Timothy Aris, U.S. Army Combat Capabilities Development Command – Soldier Center (DEVCOM SC), Simulation and Training Technology Center (STTC)
  • Volkan Ustun, University of Southern California, Institute for Creative Technologies
  • Rajay Kumar, University of Southern California, Institute for Creative Technologies

DOI:

https://doi.org/10.32473/flairs.36.133348

Keywords:

Reinforcement Learning, Ray casting, Waypoints, Behavior representation, Navmesh

Abstract

This paper presents a reinforcement learning model designed to learn how to take cover on geo-specific terrains, an essential behavior component for military training simulations. The models are trained in the Rapid Integration and Development Environment (RIDE), leveraging the Unity ML-Agents framework. This work expands on our previous raycast-based agents by increasing the number of enemies from one to three. We demonstrate an automated way of generating training and testing data within geo-specific terrains. We show that replacing the action space with a more abstract, navmesh-based waypoint movement system can increase the generality and success rate of the models while yielding results comparable to our previous paper's on retraining across terrains. We also comprehensively evaluate the differences between these models and the previous ones. Finally, we show that incorporating pixels into the model's input can increase performance at the cost of longer training times.
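The navmesh-based waypoint action space described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation (which uses Unity navmeshes and ML-Agents); it only shows, under assumed simplifications, the general idea of a discrete action space whose actions are candidate waypoint indices rather than low-level movements. The ring-sampling scheme and all function names here are hypothetical.

```python
import math

# Hypothetical stand-in for navmesh sampling: candidate waypoints on a
# ring around the agent's current 2D position. In the actual system,
# candidates would come from the terrain's navigation mesh instead.
def candidate_waypoints(pos, radius=5.0, n=8):
    """Return n evenly spaced candidate waypoints around pos."""
    return [
        (pos[0] + radius * math.cos(2 * math.pi * k / n),
         pos[1] + radius * math.sin(2 * math.pi * k / n))
        for k in range(n)
    ]

def step(pos, action, radius=5.0, n=8):
    """Apply a discrete waypoint action: move to the chosen waypoint.

    The policy outputs an integer in [0, n); pathfinding along the
    navmesh between pos and the waypoint is abstracted away here.
    """
    return candidate_waypoints(pos, radius, n)[action]
```

Abstracting movement to waypoint selection shrinks the action space the policy must explore, which is consistent with the generality gains the abstract reports.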


Published

2023-05-08

How to Cite

Aris, T., Ustun, V., & Kumar, R. (2023). Learning to Take Cover with Navigation-Based Waypoints via Reinforcement Learning. The International FLAIRS Conference Proceedings, 36(1). https://doi.org/10.32473/flairs.36.133348

Issue

Section

Special Track: Artificial Intelligence in Games, Serious Games, and Multimedia