Leveraging Graph Networks to Model Environments in Reinforcement Learning
DOI: https://doi.org/10.32473/flairs.36.133118
Keywords: Reinforcement Learning, Synthetic Characters, Graph Neural Networks
Abstract
This paper proposes leveraging graph neural networks (GNNs) to model an agent’s environment and thereby construct superior policy networks in reinforcement learning (RL). To this end, we explore how different combinations of GNN architectures and graph pooling functions affect policy performance. We also run experiments at several levels of problem complexity, which determine how easily we expect an agent to learn an optimal policy, allowing us to assess whether graph networks remain effective as complexity varies. The efficacy of our approach is demonstrated via experiments in a partially observable, non-stationary environment that parallels the highly practical scenario of a military training exercise with human trainees, where the learning goal is to become the best possible sparring partner for those trainees. Our results show that employing GNNs yields better-performing sparring partners, as demonstrated by these experiments in the proof-of-concept environment. We also explore our model’s applicability in multi-agent RL scenarios. Our code is available online at https://github.com/Derposoft/GNNsAsEnvs.
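To make the idea concrete, below is a minimal sketch of a policy network that encodes the environment as a graph with a GNN and a pooling function, assuming PyTorch Geometric is available. This is not the authors' exact architecture; the class name `GNNPolicy`, the choice of GCN layers, and mean pooling are illustrative assumptions, whereas the paper itself compares several GNN and pooling combinations.

```python
# Illustrative sketch only: a GNN-based policy head for RL, assuming PyTorch Geometric.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool


class GNNPolicy(nn.Module):
    def __init__(self, node_feat_dim: int, hidden_dim: int, num_actions: int):
        super().__init__()
        # Two message-passing layers embed each node of the environment graph.
        self.conv1 = GCNConv(node_feat_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        # Head mapping the pooled graph embedding to action logits.
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, x, edge_index, batch):
        # x: [num_nodes, node_feat_dim] node features of the observation graph
        # edge_index: [2, num_edges] graph connectivity
        # batch: [num_nodes] graph-membership index used for pooling
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        # Pooling aggregates node embeddings into one vector per observation graph;
        # mean pooling is just one of the pooling functions one could compare.
        g = global_mean_pool(h, batch)
        return self.head(g)  # action logits consumed by the RL algorithm
```

In an RL loop, each partial observation of the environment would be converted into a graph (nodes for entities or map regions, edges for their relations), passed through this network, and the resulting logits used to sample actions; the pooling function is the component that controls how local node information is summarized into a single policy input.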
License
Copyright (c) 2023 Viswanath Chadalapaka, Volkan Ustun, Lixing Liu

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.