Training Reinforcement Learning Agents to React to an Ambush for Military Simulations

Authors

  • Timothy Aris, US Army, DEVCOM SC, STTC
  • Volkan Ustun, University of Southern California Institute for Creative Technologies, https://orcid.org/0000-0002-7090-4086
  • Rajay Kumar, University of Southern California Institute for Creative Technologies

DOI:

https://doi.org/10.32473/flairs.37.1.135578

Keywords:

Reinforcement Learning, Ray casting, Waypoints, Behavior representation, Navmesh, Military Simulation

Abstract

There is a need for realistic Opposing Forces (OPFOR) behavior in military training simulations. Current training simulations generally have only simple, non-adaptive behaviors, requiring human instructors to play the role of OPFOR in any complicated scenario. This poster addresses that need by focusing on a specific scenario: training reinforcement learning agents to react to an ambush. It proposes a novel way to check for occlusion algorithmically. It presents vector fields that visualize the agent's actions over the course of a training run. It shows that a single agent switching between multiple goals is feasible, at least in a simplified environment; such an approach could reduce the need to develop different agents for different scenarios. Finally, it demonstrates a competent agent trained on a simplified React to Ambush scenario, supporting the plausibility of a scaled-up version.
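
The abstract mentions an algorithmic occlusion check (with ray casting among the keywords) but the poster page itself includes no code. As a rough, hypothetical illustration only, and not the authors' method, a line-of-sight occlusion test in a simplified 2D grid environment might look like the sketch below; the function name is_occluded, the obstacle grid, and all parameters are assumptions made for this example.

    # Hypothetical sketch: a generic 2D occlusion (line-of-sight) check that
    # samples points along the segment between an agent and a target and tests
    # each sample against an obstacle grid. Illustrative only.
    import numpy as np

    def is_occluded(agent_xy, target_xy, obstacle_grid, cell_size=1.0, num_samples=64):
        """Return True if any blocking cell lies on the segment agent -> target.

        obstacle_grid: 2D boolean array, True where a cell blocks line of sight.
        cell_size: world-space size of one grid cell.
        """
        agent = np.asarray(agent_xy, dtype=float)
        target = np.asarray(target_xy, dtype=float)
        # Evenly spaced sample points along the segment, excluding the endpoints.
        ts = np.linspace(0.0, 1.0, num_samples + 2)[1:-1]
        points = agent[None, :] + ts[:, None] * (target - agent)[None, :]
        # Convert world coordinates to grid indices (x -> column, y -> row).
        cols = np.clip((points[:, 0] / cell_size).astype(int), 0, obstacle_grid.shape[1] - 1)
        rows = np.clip((points[:, 1] / cell_size).astype(int), 0, obstacle_grid.shape[0] - 1)
        return bool(obstacle_grid[rows, cols].any())

    # Usage example: a vertical wall between the agent and the target occludes it.
    grid = np.zeros((10, 10), dtype=bool)
    grid[:, 5] = True                               # wall at column 5
    print(is_occluded((1.0, 1.0), (9.0, 1.0), grid))  # True  (segment crosses the wall)
    print(is_occluded((1.0, 1.0), (4.0, 9.0), grid))  # False (segment stays left of the wall)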

Published

13-05-2024

How to Cite

Aris, T., Ustun, V., & Kumar, R. (2024). Training Reinforcement Learning Agents to React to an Ambush for Military Simulations. The International FLAIRS Conference Proceedings, 37(1). https://doi.org/10.32473/flairs.37.1.135578