TY - JOUR
AU - Tasrin, Tasmia
AU - Nahian, Md Sultan Al
AU - Perera, Habarakadage
AU - Harrison, Brent
PY - 2021/04/18
Y2 - 2024/03/29
TI - Influencing Reinforcement Learning through Natural Language Guidance
JF - The International FLAIRS Conference Proceedings
JA - FLAIRS
VL - 34
IS - 0
SE - Main Track Proceedings
DO - 10.32473/flairs.v34i1.128472
UR - https://journals.flvc.org/FLAIRS/article/view/128472
SP - 
AB - Interactive reinforcement learning (IRL) agents use human feedback or instruction to help them learn in complex environments. Often, this feedback comes in the form of a discrete signal that is either positive or negative. While informative, such a signal can be difficult to generalize on its own. In this work, we explore how natural language advice can be used to provide a richer feedback signal to a reinforcement learning agent by extending policy shaping, a well-known IRL technique. Policy shaping typically employs a human feedback policy to help an agent learn more about how to achieve its goal. In our case, we replace this human feedback policy with a policy generated from natural language advice. We aim to examine whether the generated natural language reasoning helps a deep RL agent choose its actions successfully in a given environment. To this end, we design our model with three networks: an experience-driven network, an advice generator, and an advice-driven network. While the experience-driven RL agent chooses its actions based on the environmental reward, the advice-driven network selects actions using the feedback generated by the advice generator for each new state, assisting the RL agent through improved policy shaping.
ER - 