MSAP: Multi-Step Adversarial Perturbations on Recommender Systems Embeddings

Authors

  • Vito Walter Anelli, Politecnico di Bari
  • Alejandro Bellogín, Universidad Autónoma de Madrid
  • Yashar Deldjoo, Politecnico di Bari
  • Tommaso Di Noia, Politecnico di Bari
  • Felice Antonio Merra, Politecnico di Bari

DOI:

https://doi.org/10.32473/flairs.v34i1.128443

Keywords:

Adversarial Machine Learning, Recommender Systems, Security

Abstract

Recommender systems (RSs) have attained exceptional performance in learning users' preferences and finding the most suitable products. Recent advances in adversarial machine learning (AML) in computer vision have raised interest in the security of recommenders.
It has been demonstrated that widely adopted model-based recommenders, e.g., BPR-MF, are not robust to adversarial perturbations added to the learned parameters, e.g., users' embeddings, which can cause a drastic reduction in recommendation accuracy.
However, the state-of-the-art adversarial method, the fast gradient sign method (FGSM), builds the perturbation in a single step. In this work, we extend FGSM by proposing multi-step adversarial perturbation (MSAP) procedures to study recommenders' robustness under more powerful attacks. Keeping the perturbation magnitude fixed, we show that MSAP is far more harmful than FGSM in corrupting the recommendation performance of BPR-MF. We then assess the efficacy of MSAP on a robustified version of BPR-MF, i.e., AMF. Finally, we analyze how fairness measures vary for each perturbed recommender. Code and data are available at https://github.com/sisinflab/MSAP.
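To illustrate the idea of extending a single-step perturbation into a multi-step one, the following is a minimal generic sketch of an iterative sign-gradient attack on an embedding vector, in the spirit of multi-step FGSM variants. It is not the authors' implementation (see the linked repository for that): the function names, the per-step size heuristic `alpha = eps / steps`, and the toy gradient in the usage example are all assumptions for illustration.

```python
def sign(x):
    # sign of a scalar: -1, 0, or +1
    return (x > 0) - (x < 0)

def msap(embedding, grad_fn, eps=0.5, steps=10):
    """Multi-step adversarial perturbation (iterative-FGSM sketch).

    Repeats small sign-gradient ascent steps and clips the cumulative
    perturbation to the L-infinity ball of radius eps, so the final
    magnitude never exceeds the budget of a single FGSM attack.
    """
    alpha = eps / steps                        # per-step size (assumed heuristic)
    delta = [0.0] * len(embedding)
    for _ in range(steps):
        x = [e + d for e, d in zip(embedding, delta)]
        g = grad_fn(x)                         # gradient of the attacked loss at x
        delta = [max(-eps, min(eps, d + alpha * sign(gi)))
                 for d, gi in zip(delta, g)]   # ascend on the loss, then project
    return [e + d for e, d in zip(embedding, delta)]

# Toy usage: maximize the loss 0.5 * ||x - t||^2, whose gradient is x - t.
t = [1.0, -1.0]
adv = msap([2.0, 0.0], lambda v: [vi - ti for vi, ti in zip(v, t)],
           eps=0.5, steps=5)
```

With the perturbation magnitude held fixed at `eps`, the multi-step loop can follow the loss surface more closely than one large sign step, which is the intuition behind comparing MSAP against FGSM at equal budget.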

Published

2021-04-18

How to Cite

Anelli, V. W., Bellogín, A., Deldjoo, Y., Di Noia, T., & Merra, F. A. (2021). MSAP: Multi-Step Adversarial Perturbations on Recommender Systems Embeddings. The International FLAIRS Conference Proceedings, 34. https://doi.org/10.32473/flairs.v34i1.128443

Issue

Section

Main Track Proceedings