Smart Sampling: Self-Attention and Bootstrapping for Improved Ensembled Q-Learning

Authors

  • Muhammad Junaid Khan, University of Central Florida
  • Syed Hammad Ahmed, University of Central Florida
  • Gita Sukthankar, University of Central Florida, https://orcid.org/0000-0002-6863-6609

DOI:

https://doi.org/10.32473/flairs.37.1.135567

Keywords:

sample-efficient reinforcement learning, ensemble learning, bootstrapping, multi-head self-attention

Abstract

We present a novel method for enhancing the sample efficiency of ensemble Q-learning. Our approach integrates multi-head self-attention into the ensembled Q-networks while bootstrapping the state-action pairs fed to the ensemble. This yields performance improvements over the original REDQ and its variant DroQ, producing better Q predictions, and it also reduces both the average normalized bias and the standard deviation of the normalized bias within the Q-function ensembles. Our method performs well even at a low update-to-data (UTD) ratio. Notably, it is straightforward to implement, requiring only minimal modifications to the base model.
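To make the two ingredients described above concrete, the sketch below (PyTorch) shows one plausible way to combine an attention-augmented Q-network with bootstrapped minibatch resampling across ensemble members. All names, layer sizes, and the two-token (state, action) layout are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AttentiveQNetwork(nn.Module):
    """One ensemble member: an MLP Q-head preceded by a multi-head
    self-attention block over the embedded (state, action) input.
    Layer sizes and the two-token layout are illustrative assumptions."""
    def __init__(self, state_dim, action_dim, embed_dim=64, n_heads=4):
        super().__init__()
        # Embed state and action as two "tokens" so self-attention has a sequence to attend over.
        self.state_embed = nn.Linear(state_dim, embed_dim)
        self.action_embed = nn.Linear(action_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        tokens = torch.stack(
            [self.state_embed(state), self.action_embed(action)], dim=1
        )  # (batch, 2, embed_dim)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.head(attended.flatten(start_dim=1))  # (batch, 1) Q-value


def bootstrap_minibatches(batch, ensemble_size):
    """Resample the sampled minibatch with replacement once per ensemble
    member, so each Q-network is updated on its own bootstrapped view
    of the state-action pairs."""
    states, actions, rewards, next_states, dones = batch
    n = states.shape[0]
    for _ in range(ensemble_size):
        idx = torch.randint(0, n, (n,))
        yield (states[idx], actions[idx], rewards[idx],
               next_states[idx], dones[idx])
```

In a REDQ-style update loop, one would draw a bootstrapped minibatch per ensemble member from this generator before computing the targets and critic losses; the details of that loop are left out here.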

Published

2024-05-13

How to Cite

Khan, M. J., Ahmed, S. H., & Sukthankar, G. (2024). Smart Sampling: Self-Attention and Bootstrapping for Improved Ensembled Q-Learning. The International FLAIRS Conference Proceedings, 37(1). https://doi.org/10.32473/flairs.37.1.135567

Section

Main Track Proceedings