Leveraging Graph Networks to Model Environments in Reinforcement Learning

Authors

V. Chadalapaka, V. Ustun, L. Liu

DOI:

https://doi.org/10.32473/flairs.36.133118

Keywords:

Reinforcement Learning, Synthetic Characters, Graph Neural Networks

Abstract

This paper proposes leveraging graph neural networks (GNNs) to model an agent's environment in order to construct superior policy networks in reinforcement learning (RL). To this end, we explore the effects of different combinations of GNNs and graph pooling functions on policy performance. We also run experiments at several levels of problem complexity, which affect how easily an agent is expected to learn an optimal policy, and thereby show whether graph networks remain effective as complexity varies. The efficacy of our approach is demonstrated in a partially observable, non-stationary environment that parallels the highly practical scenario of a military training exercise with human trainees, where the learning goal is to become the best possible sparring partner for those trainees. Our results show that employing GNNs yields better-performing sparring partners, as demonstrated by experiments in this proof-of-concept environment. We also explore the applicability of our model in multi-agent RL scenarios. Our code is available online at https://github.com/Derposoft/GNNsAsEnvs.

Published

08-05-2023

How to Cite

Chadalapaka, V., Ustun, V., & Liu, L. (2023). Leveraging Graph Networks to Model Environments in Reinforcement Learning. The International FLAIRS Conference Proceedings, 36(1). https://doi.org/10.32473/flairs.36.133118

Issue

Vol. 36 No. 1 (2023)

Section

Special Track: Artificial Intelligence in Games, Serious Games, and Multimedia