An Interpretable Model for Collaborative Filtering Using an Extended Latent Dirichlet Allocation Approach
DOI: https://doi.org/10.32473/flairs.v35i.130567

Keywords: Recommender Systems, Collaborative Filtering, Factorization Methods, Latent Dirichlet Allocation, Interpretable Methods

Abstract
As AI- and ML-based systems become more widespread, interpretability is increasingly important for ensuring user trust and safety. This also applies to recommender systems, where methods based on matrix factorization (MF) are among the most popular approaches for collaborative filtering tasks with implicit feedback. Despite their simplicity, effective unconstrained MF-based methods produce user and item latent factors that lack interpretability. In this work, we propose an extended Latent Dirichlet Allocation model (LDAext) with interpretable parameters, such as user cohorts of item preferences and the affiliation of a user with different cohorts. We prove a theorem on how to transform the factors of an unconstrained MF model into the parameters of LDAext. Using this theoretical connection, we train an MF model on different real-world data sets, transform the latent factors into the parameters of LDAext, and test their interpretation for plausibility in several experiments. Our experiments confirm the interpretability of the transformed parameters and thus demonstrate the usefulness of the proposed approach.
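The abstract describes mapping the factors of an unconstrained MF model onto probability-valued LDAext parameters. The paper's actual transformation theorem is not reproduced on this page; the sketch below only illustrates, under that assumption, one generic way such a mapping can work: exponentiating unconstrained factor matrices and normalizing them into distribution-like parameters (per-cohort item preferences and per-user cohort affiliations). The variable names `U`, `V`, `theta`, and `phi` are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unconstrained MF factors: 5 users, 8 items, 3 latent dimensions
# (standing in for "cohorts"). In the paper these would come from a trained MF model.
U = rng.normal(size=(5, 3))   # user factors
V = rng.normal(size=(8, 3))   # item factors

# Per-cohort item-preference distributions: softmax over items within each
# latent dimension, so each column sums to 1 and reads as a distribution
# over items (analogous to LDA's topic-word parameters).
phi = np.exp(V) / np.exp(V).sum(axis=0, keepdims=True)    # shape (items, cohorts)

# Per-user cohort-affiliation distributions: softmax over latent dimensions
# for each user, so each row sums to 1 (analogous to LDA's document-topic
# proportions).
theta = np.exp(U) / np.exp(U).sum(axis=1, keepdims=True)  # shape (users, cohorts)

print(np.allclose(theta.sum(axis=1), 1.0))  # True: valid distributions per user
print(np.allclose(phi.sum(axis=0), 1.0))    # True: valid distributions per cohort
```

The point of such a transformation is that, unlike raw MF factors, the resulting quantities can be read off directly: row `u` of `theta` as how strongly user `u` belongs to each cohort, and column `k` of `phi` as which items cohort `k` prefers.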
License
(c) All rights reserved Florian Wilhelm, Marisa Mohr, Lien Michiels 2022
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.