Abeer Marzouk




2024

LexiconLadies at FIGNEWS 2024 Shared Task: Identifying Keywords for Bias Annotation Guidelines of Facebook News Headlines on the Israel-Palestine 2023 War
Yousra El-Ghawi | Abeer Marzouk | Aya Khamis
Proceedings of the Second Arabic Natural Language Processing Conference

News bias is difficult for humans to identify, and even more so for machines. This is largely due to the lack of linguistically appropriate annotated datasets suitable for use by classifier algorithms. FIGNEWS Subtask 1: Bias Annotation involved manually annotating 1,800 social media headlines for bias. Our proposed guidelines investigated which combinations of keywords, at the sentence and token levels, could be used to detect possible bias in a conflict where neutrality is highly undesirable. A large proportion of the headlines required contextual knowledge of events to identify criteria that matched biased or targeted language. The final annotation guidelines paved the way for a theoretical system that uses keyword and hashtag significance to classify major instances of bias. Minor instances with biased undertones or clickbait may require more advanced machine learning methods that learn context by scraping user engagement on social media.
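
As a rough illustration of the keyword- and hashtag-based idea described in the abstract, the following minimal Python sketch flags a headline when it contains terms from a significance list. The word lists, labels, and function name are hypothetical placeholders for illustration only, not the authors' actual guidelines or system.

    # Hypothetical sketch only: keyword/hashtag matching for flagging possibly
    # biased headlines. The lists and labels below are invented placeholders,
    # not the keyword lists developed in the paper.
    BIASED_KEYWORDS = {"slaughter", "massacre"}   # assumed example terms
    BIASED_HASHTAGS = {"#examplehashtag"}         # assumed example hashtag

    def flag_headline(headline: str) -> str:
        # Lowercase and strip simple punctuation before matching tokens.
        tokens = [tok.strip(".,!?\"'") for tok in headline.lower().split()]
        if any(tok in BIASED_KEYWORDS or tok in BIASED_HASHTAGS for tok in tokens):
            return "possibly biased"
        return "unclear / needs context"

    print(flag_headline("Outrage grows after reported massacre #examplehashtag"))
    # -> possibly biased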