A Comparison of Behavior Cloning Methods in Developing Interactive Opposing-Force Agents

Authors

  • Logan Lebanoff, Soar Technology, Inc.
  • Nicholas Paul, Soar Technology, Inc.
  • Christopher Ballinger, Soar Technology, Inc.
  • Patrick Sherry, Soar Technology, Inc.
  • Gavin Carpenter, Soar Technology, Inc.
  • Charles Newton, Soar Technology, Inc.

DOI:

https://doi.org/10.32473/flairs.36.133299

Keywords:

reinforcement learning, behavior cloning, imitation learning, simulation, games

Abstract

Modern modeling and simulation environments, such as commercial games or military training systems, frequently demand interactive agents that exhibit realistic and responsive behavior in accordance with a predetermined specification, such as a storyboard or military tactics document.
Traditional methods for creating agents, such as state machines or behavior trees, require significant manual knowledge-engineering effort to develop state representations and transition processes. On the other hand, newer techniques for behavior generation, such as deep reinforcement learning, require vast amounts of training data (centuries of experience in many cases), and there is no guarantee that the generated behavior will align with intended objectives and courses of action. This paper examines the application of behavior cloning approaches to designing interactive agents. In our approach, users start by defining desired behavior through straightforward methods such as state machine models or behavior trees. Behavior cloning methods then transform ground-truth trajectory data sampled from these models into differentiable policies, which are further refined through engagement with interactive game environments. This approach yields improved training outcomes in terms of task performance and training stability.
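
As a rough illustration of the behavior-cloning step described above, the sketch below pretrains a policy network on (state, action) pairs sampled from a hand-authored controller. It is not the authors' implementation: the state and action sizes, the `trajectories` format, and the network architecture are placeholder assumptions, and the reinforcement-learning refinement stage mentioned in the abstract is omitted.

```python
# Minimal behavior-cloning sketch (illustrative; assumptions noted above).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

STATE_DIM, NUM_ACTIONS = 32, 8   # placeholder sizes, not from the paper

def make_dataset(trajectories):
    """Flatten (state, action) trajectories sampled from the hand-authored
    model into a supervised dataset of expert decisions."""
    states = torch.tensor([s for traj in trajectories for s, _ in traj],
                          dtype=torch.float32)
    actions = torch.tensor([a for traj in trajectories for _, a in traj],
                           dtype=torch.long)
    return TensorDataset(states, actions)

# Differentiable policy: state vector -> action logits.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, NUM_ACTIONS),
)

def behavior_clone(trajectories, epochs=10, lr=1e-3):
    """Supervised pretraining: match the expert action at each visited state."""
    loader = DataLoader(make_dataset(trajectories), batch_size=256, shuffle=True)
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for states, actions in loader:
            loss = loss_fn(policy(states), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy  # would subsequently be refined via RL in the game environment
```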

Published

08-05-2023

How to Cite

Lebanoff, L., Paul, N., Ballinger, C., Sherry, P., Carpenter, G., & Newton, C. (2023). A Comparison of Behavior Cloning Methods in Developing Interactive Opposing-Force Agents. The International FLAIRS Conference Proceedings, 36(1). https://doi.org/10.32473/flairs.36.133299

Section

Special Track: Artificial Intelligence in Games, Serious Games, and Multimedia