Reinforcement Learning Agents with Generalizing Behavior
DOI: https://doi.org/10.32473/flairs.37.1.135591

Abstract
We explore the generality of Reinforcement Learning (RL) agents on unseen environment configurations by analyzing the behavior of an agent tasked with traversing a graph-based environment from a starting position to a goal position. We find that training on a single task is likely to result in inflexible policies that do not respond well to change. Instead, training on a wide variety of scenarios offers the best chance of developing a flexible policy, at the expense of increased training difficulty.
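The abstract does not specify the environment, agent architecture, or learning algorithm, so the following is only a minimal hypothetical sketch of the training setup it describes: a graph-navigation task where an agent is trained either on a single fixed start/goal configuration or on randomized configurations, then evaluated on held-out tasks. The graph, the tabular Q-learning agent, and all hyperparameters here are illustrative assumptions, not the paper's method.

```python
import random
from collections import defaultdict

# Hypothetical adjacency list standing in for the paper's graph-based environment.
GRAPH = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}


def run_episode(Q, start, goal, eps=0.1, alpha=0.5, gamma=0.95, max_steps=50):
    """Run one tabular Q-learning episode from `start` to `goal`.

    Returns True if the goal was reached within `max_steps`.
    Set alpha=0.0 to evaluate without updating Q.
    """
    state = start
    for _ in range(max_steps):
        actions = GRAPH[state]
        if random.random() < eps:
            action = random.choice(actions)          # explore
        else:
            action = max(actions, key=lambda a: Q[(state, goal, a)])  # exploit
        reward = 1.0 if action == goal else -0.01    # small step penalty
        best_next = max(Q[(action, goal, a)] for a in GRAPH[action])
        Q[(state, goal, action)] += alpha * (
            reward + gamma * best_next - Q[(state, goal, action)]
        )
        state = action
        if state == goal:
            return True
    return False


def train(randomize_tasks, episodes=2000):
    """Train on a single fixed (start, goal) pair or on randomized pairs."""
    Q = defaultdict(float)
    nodes = list(GRAPH)
    for _ in range(episodes):
        if randomize_tasks:
            start, goal = random.sample(nodes, 2)    # wide variety of scenarios
        else:
            start, goal = 0, 4                       # single fixed task
        run_episode(Q, start, goal)
    return Q


if __name__ == "__main__":
    held_out = [(1, 2), (4, 0), (2, 1)]              # unseen configurations
    for randomize in (False, True):
        Q = train(randomize_tasks=randomize)
        # Greedy evaluation with learning disabled.
        wins = sum(run_episode(Q, s, g, eps=0.0, alpha=0.0) for s, g in held_out)
        print(f"randomized training={randomize}: {wins}/{len(held_out)} held-out tasks solved")
```

Under this kind of setup, the single-task policy typically solves only configurations resembling its training task, while the randomized-training policy generalizes better to the held-out pairs, at the cost of needing more episodes to converge, which mirrors the trade-off reported in the abstract.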
License
Copyright (c) 2024 Sarah Kitchen, Reid Sawtell, Anthony Chavez, Timothy Aris
This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.