Knowledge Distillation for a Domain-Adaptive Visual Recommender System
DOI: https://doi.org/10.32473/flairs.37.1.135533

Abstract
In the last few years, large-scale foundation models have shown remarkable performance on computer vision tasks. However, deploying such models in a production environment poses a significant challenge because of their computational requirements. Furthermore, these models typically produce generic results and often require some form of external input. Knowledge distillation provides a promising solution to this problem.
In this paper, we focus on the challenges faced in applying knowledge distillation techniques to augment an object-detection dataset used in a commercial Visual Recommender System called VISIDEA; the goal is to detect items on various e-commerce websites, covering a wide range of custom product categories. We discuss a possible solution to problems such as label duplication, erroneous labeling, and lack of robustness to prompting, drawing on examples from the field of fashion apparel recommendation.
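As a rough illustration of the label-duplication problem mentioned in the abstract, the minimal sketch below shows one plausible way to de-duplicate bounding boxes produced by a foundation-model "teacher" before they are used as pseudo-labels to train a lightweight student detector. The function names, data layout, and IoU threshold are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: IoU-based de-duplication of teacher-generated pseudo-labels.
# All names and thresholds are assumptions for illustration only.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    """Intersection-over-Union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def dedup_pseudo_labels(
    detections: List[Tuple[Box, str, float]],  # (box, label, confidence)
    iou_thresh: float = 0.9,
) -> List[Tuple[Box, str, float]]:
    """Keep the highest-confidence box among near-identical same-label boxes."""
    kept: List[Tuple[Box, str, float]] = []
    for box, label, score in sorted(detections, key=lambda d: -d[2]):
        duplicate = any(
            label == k_label and iou(box, k_box) >= iou_thresh
            for k_box, k_label, _ in kept
        )
        if not duplicate:
            kept.append((box, label, score))
    return kept


# Example: two near-identical "dress" boxes collapse to the more confident one.
teacher_output = [
    ((10.0, 10.0, 100.0, 200.0), "dress", 0.92),
    ((11.0, 9.0, 101.0, 199.0), "dress", 0.85),
    ((120.0, 30.0, 180.0, 90.0), "handbag", 0.78),
]
print(dedup_pseudo_labels(teacher_output))
```

A similar confidence-ranked filter can also help with erroneous labels, by discarding teacher detections below a minimum score before they enter the training set.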