Leveraging Graph Networks to Model Environments in Reinforcement Learning

Authors

DOI:

https://doi.org/10.32473/flairs.36.133118

Keywords:

Reinforcement Learning, Synthetic Characters, Graph Neural Networks

Abstract

This paper proposes leveraging graph neural networks (GNNs) to model an agent's environment in order to construct superior policy networks for reinforcement learning (RL). To this end, we explore the effects of different combinations of GNNs and graph pooling functions on policy performance. We also run experiments at several levels of problem complexity, which affect how easily we expect an agent to learn an optimal policy, and which therefore indicate whether graph networks remain effective as complexity grows. The efficacy of our approach is demonstrated via experimentation in a partially observable, non-stationary environment that parallels the highly practical scenario of a military training exercise with human trainees, where the learning goal is to become the best possible sparring partner for those trainees. Our results show that employing GNNs yields better-performing sparring partners in this proof-of-concept environment. We also explore our model's applicability to multi-agent RL scenarios. Our code is available online at https://github.com/Derposoft/GNNsAsEnvs.
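
The abstract describes a general pattern: message passing over a graph-structured observation of the environment, a pooling readout that collapses node embeddings into a single graph vector, and a policy head that maps that vector to actions. As a concrete illustration only, below is a minimal plain-PyTorch sketch of that pattern. It is not the authors' published implementation (see the GitHub repository above); the class names, dimensions, and the mean-aggregation and mean-pooling choices are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class MeanAggLayer(nn.Module):
        """One round of message passing: mean-aggregate neighbors, then transform."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):
            # x: (num_nodes, in_dim) node features
            # adj: (num_nodes, num_nodes) adjacency matrix with self-loops
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            agg = (adj @ x) / deg                 # mean of each node's neighborhood
            return torch.relu(self.linear(agg))

    class GNNPolicy(nn.Module):
        """Two rounds of message passing, a mean-pooling readout, and an action head."""
        def __init__(self, node_dim, hidden_dim, num_actions):
            super().__init__()
            self.gnn1 = MeanAggLayer(node_dim, hidden_dim)
            self.gnn2 = MeanAggLayer(hidden_dim, hidden_dim)
            self.head = nn.Linear(hidden_dim, num_actions)

        def forward(self, x, adj):
            h = self.gnn2(self.gnn1(x, adj), adj)
            graph_embedding = h.mean(dim=0)       # pooling: node embeddings -> one graph vector
            return self.head(graph_embedding)     # logits over the agent's actions

    # Toy usage: an observation graph of 5 entities with 4 features each.
    x = torch.randn(5, 4)
    adj = torch.eye(5)                            # self-loops only, for illustration
    logits = GNNPolicy(node_dim=4, hidden_dim=16, num_actions=3)(x, adj)

Swapping the pooling function (e.g., max or sum in place of the mean) is the kind of variation the paper's experiments compare.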

Published

2023-05-08

Section

Special Track: Artificial Intelligence in Games, Serious Games, and Multimedia