An Interpretable Model for Collaborative Filtering Using an Extended Latent Dirichlet Allocation Approach

Authors

  • Florian Wilhelm inovex GmbH
  • Marisa Mohr inovex GmbH
  • Lien Michiels University of Antwerp

DOI:

https://doi.org/10.32473/flairs.v35i.130567

Keywords:

Recommender Systems, Collaborative Filtering, Factorization Methods, Latent Dirichlet Allocation, Interpretable Methods

Abstract

With the growing use of AI and ML-based systems, interpretability is becoming increasingly important for ensuring user trust and safety. This also applies to recommender systems, where matrix factorization (MF) methods are among the most popular approaches to collaborative filtering with implicit feedback. Despite their simplicity, the user and item latent factors of effective, unconstrained MF-based methods lack interpretability. In this work, we propose an extended Latent Dirichlet Allocation model (LDAext) with interpretable parameters such as user cohorts of item preferences and the affiliation of a user with different cohorts. We prove a theorem on how to transform the factors of an unconstrained MF model into the parameters of LDAext. Using this theoretical connection, we train an MF model on several real-world data sets, transform the latent factors into the parameters of LDAext, and test the plausibility of their interpretation in several experiments. Our experiments confirm the interpretability of the transformed parameters and thus demonstrate the usefulness of the proposed approach.
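
The abstract refers to a theorem for transforming the latent factors of an unconstrained MF model into the parameters of LDAext, i.e., user-cohort affiliations and cohort-item preferences. The paper's actual transformation is not reproduced here; the sketch below is only a loose illustration of the general idea of turning the rows of the user and item factor matrices into probability distributions. The matrices `U` and `V`, the softmax normalization, and the names `theta` and `phi` are assumptions for illustration, not the mapping proven in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent factors from an (unconstrained) MF model:
# U: user factors (n_users x k), V: item factors (n_items x k).
n_users, n_items, k = 5, 8, 3
U = rng.normal(size=(n_users, k))
V = rng.normal(size=(n_items, k))

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative mapping (NOT the paper's theorem) to LDA-style parameters:
# theta[u, c] ~ affiliation of user u with cohort c (rows sum to 1),
# phi[c, i]   ~ preference of cohort c for item i (rows sum to 1).
theta = softmax(U, axis=1)      # user-cohort affiliations
phi = softmax(V.T, axis=1)      # cohort-item preference distributions

# Reconstructed interaction probabilities, analogous to theta @ phi in LDA.
p_interaction = theta @ phi
print(p_interaction.round(3))
```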

Published

04-05-2022

How to Cite

Wilhelm, F., Mohr, M., & Michiels, L. (2022). An Interpretable Model for Collaborative Filtering Using an Extended Latent Dirichlet Allocation Approach. The International FLAIRS Conference Proceedings, 35. https://doi.org/10.32473/flairs.v35i.130567

Section

Main Track Proceedings