Malicious or Benign? Towards Effective Content Moderation for Children's Videos




Keywords: content moderation, online video platforms, children's health, annotation tools, video analysis


Online video platforms receive hundreds of hours of uploads every minute, making manual content moderation impossible. Unfortunately, the most vulnerable consumers of malicious video content are children aged 1-5, whose attention is easily captured by bursts of color and sound. Scammers attempting to monetize their content may craft malicious children's videos that are superficially similar to educational videos but include scary and disgusting characters, violent motions, loud music, and disturbing noises. Prominent video hosting platforms like YouTube have taken measures to mitigate malicious content, but these videos often go undetected by current content moderation tools, which focus on removing pornographic or copyrighted material. This paper introduces our toolkit, Malicious or Benign, for promoting research on automated content moderation of children's videos. We present 1) a customizable annotation tool for videos, 2) a new dataset with difficult-to-detect test cases of malicious content, and 3) a benchmark suite of state-of-the-art video classification models.




How to Cite

Ahmed, S. H., Khan, M. J., Qaisar, H. M. U., & Sukthankar, G. (2023). Malicious or Benign? Towards Effective Content Moderation for Children’s Videos. The International FLAIRS Conference Proceedings, 36(1).



Special Track: Security, Privacy and Ethics in AI