Bias Adaptive Statistical Inference Learning Agents for Learning from Human Feedback
DOI:
https://doi.org/10.32473/flairs.v34i1.128471

Keywords:
Interactive Machine Learning, IML, Interactive Reinforcement Learning, IRL, Bias, Human factors, Bayesian, TAMER, Tetris

Abstract
We present a novel technique for learning behaviors from a human-provided feedback signal that is distorted by systematic bias. Our technique, which we refer to as BASIL, models the feedback signal as separable into a heuristic evaluation of the utility of an action and a bias value drawn probabilistically from a parametric distribution whose parameters are unknown. We present the general form of the technique as well as a specific algorithm that integrates it with the TAMER algorithm for bias values drawn from a normal distribution. We test our algorithm against standard TAMER in the domain of Tetris, using a synthetic oracle that provides feedback under varying levels of distortion. We find that our algorithm can learn very quickly under bias distortions that entirely stymie the learning of classic TAMER.
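To make the feedback model concrete, the sketch below shows one crude way the abstract's setup might be simulated: a synthetic oracle whose feedback is the true utility of an action plus a bias value drawn from a normal distribution, and a learner that treats feedback as separable into a linear utility term and an additive bias estimate. All names, parameter values, and the linear utility form are illustrative assumptions; this is not the BASIL algorithm itself, only a minimal instance of the separable feedback model it builds on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic oracle: feedback = true utility of the action + a bias value
# drawn from a normal distribution. Parameter values are placeholders,
# not taken from the paper's Tetris experiments.
def biased_oracle(true_utility, bias_mean=3.0, bias_std=0.5):
    return true_utility + rng.normal(bias_mean, bias_std)

# Minimal learner treating feedback as separable into a TAMER-style linear
# utility term and an additive bias term, fit jointly by gradient descent.
# Actions would be ranked by the utility term alone, so the systematic
# offset is absorbed by the bias estimate rather than distorting selection.
class SeparableFeedbackLearner:
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)   # utility weights
        self.bias = 0.0                 # estimate of the bias distribution's mean
        self.lr = lr

    def utility(self, x):
        return float(self.w @ x)

    def update(self, x, feedback):
        # Squared-error gradient step on feedback = w.x + bias + noise.
        error = feedback - (self.utility(x) + self.bias)
        self.w += self.lr * error * x
        self.bias += self.lr * error

# Toy usage: the learner recovers utility weights despite a constant-mean bias.
true_w = np.array([1.0, -2.0, 0.5])
learner = SeparableFeedbackLearner(n_features=3)
for _ in range(5000):
    x = rng.normal(size=3)
    learner.update(x, biased_oracle(true_w @ x))
print(np.round(learner.w, 2), round(learner.bias, 2))
```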