Knowledge Distillation for a Domain-Adaptive Visual Recommender System
DOI: https://doi.org/10.32473/flairs.37.1.135533

Abstract
In recent years, large-scale foundation models have shown remarkable performance on computer vision tasks. However, deploying such models in a production environment poses a significant challenge because of their computational requirements. Furthermore, these models typically produce generic results and often require some form of external input. Knowledge distillation offers a promising solution to this problem.
In this paper, we focus on the challenges of applying knowledge distillation techniques to augment an object detection dataset used in a commercial Visual Recommender System called VISIDEA; the goal is to detect items on various e-commerce websites, covering a wide range of custom product categories. We discuss possible solutions to problems such as label duplication, erroneous labeling, and lack of robustness to prompting, drawing on examples from the field of fashion apparel recommendation.
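To make the dataset-augmentation idea concrete, the sketch below illustrates one possible way a large "teacher" detector could produce pseudo-labels for training a smaller student model, with simple confidence filtering and IoU-based deduplication to mitigate duplicated labels. This is a minimal illustration under stated assumptions, not the pipeline described in the paper: the `Box` structure, the `teacher_predict` callable, and the thresholds are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple


@dataclass
class Box:
    # Axis-aligned bounding box with a class label and a confidence score.
    x1: float
    y1: float
    x2: float
    y2: float
    label: str
    score: float


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes; 0.0 when they do not overlap."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a.x2 - a.x1) * (a.y2 - a.y1)
    area_b = (b.x2 - b.x1) * (b.y2 - b.y1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def distill_pseudo_labels(
    images: Iterable[object],
    teacher_predict: Callable[[object], List[Box]],  # hypothetical teacher inference call
    score_thr: float = 0.5,
    iou_thr: float = 0.8,
) -> List[Tuple[object, List[Box]]]:
    """Build a pseudo-labeled dataset: keep confident teacher detections,
    dropping near-duplicate boxes of the same class (greedy NMS-style pass)."""
    dataset = []
    for img in images:
        boxes = [b for b in teacher_predict(img) if b.score >= score_thr]
        boxes.sort(key=lambda b: b.score, reverse=True)
        kept: List[Box] = []
        for b in boxes:
            if all(b.label != k.label or iou(b, k) < iou_thr for k in kept):
                kept.append(b)
        dataset.append((img, kept))
    return dataset
```

The resulting `(image, boxes)` pairs could then serve as training data for a lightweight student detector; the score and IoU thresholds would need to be tuned per domain, e.g. per product category.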
License
Copyright (c) 2024 Alessandro Abluton, Luigi Portinale
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.