Analysis of Propaganda in Tweets From Politically Biased Sources
DOI: https://doi.org/10.32473/flairs.38.1.138706

Keywords: propaganda detection, social media analysis, BERT, Large Language Models

Abstract
News outlets are well known to have political associations, and many national outlets cultivate political biases to cater to different audiences. Journalists working for these outlets have a significant influence on the stories they cover. In this work, we present a methodology for analyzing the role of journalists affiliated with popular news outlets in propagating their outlets' bias through propaganda-like language. We introduce JMBX (Journalist Media Bias on X), a systematically collected and annotated dataset of 1874 tweets from Twitter (now known as X). These tweets were authored by popular journalists from 10 news outlets whose political biases range from extreme left to extreme right. We extract several insights from the data and conclude that journalists affiliated with outlets at the extremes of the political spectrum are more likely to use propaganda-like language in their writing than those affiliated with outlets that have mild political leans. We compare eight different Large Language Models (LLMs) from OpenAI and Google and find that LLMs generally perform better at detecting propaganda in social media posts and news articles than a BERT-based model fine-tuned for propaganda detection. While the performance improvements from using LLMs are significant, they come at a notable environmental cost. This study provides an analysis of that environmental impact, utilizing tools that estimate the carbon emissions associated with LLM operations.
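The workflow described in the abstract, running a fine-tuned BERT-style propaganda classifier while estimating the carbon cost of the computation, can be illustrated with a minimal sketch. The snippet below is not the authors' code: the model checkpoint name is a placeholder, and codecarbon is used here only as an example of the kind of emission-estimation tooling the abstract mentions.

```python
# Minimal sketch (assumed, not from the paper): classify a tweet for
# propaganda-like language with a BERT-style text classifier and
# estimate the carbon emissions of the run with codecarbon.
from codecarbon import EmissionsTracker
from transformers import pipeline

tracker = EmissionsTracker(project_name="propaganda-detection")
tracker.start()

# Hypothetical checkpoint; a real study would load a classifier
# fine-tuned on propaganda-annotated data instead.
classifier = pipeline("text-classification", model="bert-base-uncased")

tweet = "Example tweet text from a journalist account."
prediction = classifier(tweet)

emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for this run
print(prediction, f"estimated emissions: {emissions_kg:.6f} kg CO2eq")
```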
License
Copyright (c) 2025 Vivek Sharma, Mohammad Mahdi Shokri, Sarah Ita Levitan, Elena Filatova, Shweta Jain

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.