An Interpretable Model for Collaborative Filtering Using an Extended Latent Dirichlet Allocation Approach
Keywords: Recommender Systems, Collaborative Filtering, Factorization Methods, Latent Dirichlet Allocation, Interpretable Methods
With the increasing use of AI and ML-based systems, interpretability is becoming ever more important to ensure user trust and safety. This also applies to recommender systems, where methods based on matrix factorization (MF) are among the most popular approaches to collaborative filtering with implicit feedback. Despite their simplicity, however, the latent user and item factors of effective, unconstrained MF-based methods lack interpretability. In this work, we propose an extended Latent Dirichlet Allocation model (LDAext) with interpretable parameters, such as user cohorts of item preferences and the affiliation of a user with different cohorts. We prove a theorem on how to transform the factors of an unconstrained MF model into the parameters of LDAext. Using this theoretical connection, we train an MF model on several real-world data sets, transform the latent factors into the parameters of LDAext, and test their interpretation for plausibility in a series of experiments. Our experiments confirm the interpretability of the transformed parameters and thus demonstrate the usefulness of the proposed approach.
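The core idea of the abstract is mapping the real-valued latent factors of an unconstrained MF model onto probability-like parameters (user-cohort affiliations and cohort-item preferences). The exact transformation is given by the theorem in the paper and is not reproduced here; the following is only an illustrative sketch using a softmax normalization onto the simplex, with all variable names and the normalization choice being assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only: the paper's actual LDAext transform is defined
# by its theorem; here a softmax merely shows how real-valued MF factors
# can be mapped onto probability simplices.
rng = np.random.default_rng(0)
n_users, n_items, k = 4, 6, 3          # k = number of latent factors / cohorts

U = rng.normal(size=(n_users, k))       # user factors from an unconstrained MF model
V = rng.normal(size=(n_items, k))       # item factors from an unconstrained MF model

def to_simplex(M, axis):
    """Map real-valued entries to nonnegative values summing to 1 along `axis`."""
    E = np.exp(M - M.max(axis=axis, keepdims=True))  # shift for numerical stability
    return E / E.sum(axis=axis, keepdims=True)

theta = to_simplex(U, axis=1)  # each row: a user's affiliation with the k cohorts
phi = to_simplex(V, axis=0)    # each column: a cohort's preference over all items

# Both are now interpretable as probability distributions.
assert np.allclose(theta.sum(axis=1), 1.0)
assert np.allclose(phi.sum(axis=0), 1.0)
```

Read this way, `theta[u]` answers "to which cohorts does user `u` belong, and to what degree?", while `phi[:, c]` answers "which items does cohort `c` prefer?", which is the kind of interpretation the experiments in the paper test for plausibility.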
Copyright (c) 2022 Florian Wilhelm, Marisa Mohr, Lien Michiels
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.