Enhanced Multimodal Content Moderation of Children’s Videos using Audiovisual Fusion





Keywords: multimodal fusion, content moderation, audio, CLIP


Abstract

The rise of video content created for children calls for robust content moderation on video hosting platforms. A video that is visually benign may include audio that is inappropriate for young children yet goes undetected by a unimodal, vision-only moderation system. Popular children's video platforms such as YouTube Kids still publish videos containing audio that is not conducive to a child's healthy behavioral and physical development. Robust classification of malicious videos therefore requires audio representations in addition to video features; however, recent content moderation approaches rarely employ multimodal architectures that explicitly consider non-speech audio cues. To address this, we present an efficient adaptation of CLIP (Contrastive Language–Image Pre-training) that leverages contextual audio cues for enhanced content moderation. We incorporate (1) the audio modality and (2) prompt learning, while keeping the backbone modules of each modality frozen. We evaluate our approach on a multimodal version of the MOB (Malicious or Benign) dataset in both supervised and few-shot settings.
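The abstract describes the general recipe: frozen per-modality encoders, learnable prompt embeddings, and audiovisual fusion for malicious/benign classification. The sketch below illustrates that pattern only in outline; the encoder stubs, dimensions, fusion weight `alpha`, and class ordering are all assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # shared embedding dimension (assumed)

# Frozen backbone stand-ins: in the paper these would be CLIP's image
# encoder and an audio encoder; here, fixed random projections that
# receive no gradient updates.
W_img = rng.standard_normal((2048, D)) / np.sqrt(2048)
W_aud = rng.standard_normal((1024, D)) / np.sqrt(1024)

def encode_image(x):
    """Map a precomputed frame feature to the shared space (L2-normalized)."""
    v = x @ W_img
    return v / np.linalg.norm(v)

def encode_audio(x):
    """Map a precomputed audio feature to the shared space (L2-normalized)."""
    v = x @ W_aud
    return v / np.linalg.norm(v)

# Learnable soft prompts: in prompt learning, only these class embeddings
# would be trained; one vector per class (labels assumed: 0=malicious, 1=benign).
prompts = rng.standard_normal((2, D))
prompts /= np.linalg.norm(prompts, axis=1, keepdims=True)

def classify(frame_feat, audio_feat, alpha=0.5):
    """Late audiovisual fusion: weighted average of per-modality similarities."""
    v = encode_image(frame_feat)
    a = encode_audio(audio_feat)
    fused = alpha * (prompts @ v) + (1 - alpha) * (prompts @ a)
    return int(np.argmax(fused))

label = classify(rng.standard_normal(2048), rng.standard_normal(1024))
```

Because the backbones stay frozen and only the prompt vectors are trained, the number of learnable parameters is tiny, which is what makes the few-shot setting mentioned in the abstract practical.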




How to Cite

Ahmed, S. H., Khan, M. J., & Sukthankar, G. (2024). Enhanced Multimodal Content Moderation of Children’s Videos using Audiovisual Fusion. The International FLAIRS Conference Proceedings, 37(1). https://doi.org/10.32473/flairs.37.1.135563



Special Track: Security, Privacy, Trust and Ethics in AI