Maria Mestre
2020
Discovering Biased News Articles Leveraging Multiple Human Annotations
Konstantina Lazaridou | Alexander Löser | Maria Mestre | Felix Naumann
Proceedings of the Twelfth Language Resources and Evaluation Conference
Unbiased and fair reporting is an integral part of ethical journalism. Yet, political propaganda and one-sided views can be found in the news and can cause distrust in media. Both accidental and deliberate political bias affect the readers and shape their views. We contribute to a trustworthy media ecosystem by automatically identifying politically biased news articles. We introduce novel corpora annotated by two communities, i.e., domain experts and crowd workers, and we also consider automatic article labels inferred by the newspapers’ ideologies. Our goal is to compare domain experts to crowd workers and also to prove that media bias can be detected automatically. We classify news articles with a neural network and we also improve our performance in a self-supervised manner.
2019
SemEval-2019 Task 4: Hyperpartisan News Detection
Johannes Kiesel | Maria Mestre | Rishabh Shukla | Emmanuel Vincent | Payam Adineh | David Corney | Benno Stein | Martin Potthast
Proceedings of the 13th International Workshop on Semantic Evaluation
Hyperpartisan news is news that takes an extreme left-wing or right-wing standpoint. If this meta information can be computed reliably, news articles may be tagged automatically, thereby encouraging or discouraging readers to consume the text. It is an open question how successfully hyperpartisan news detection can be automated, and the goal of this SemEval task was to shed light on the state of the art. We developed new resources for this purpose, including a manually labeled dataset with 1,273 articles, and a second dataset with 754,000 articles, labeled via distant supervision. The interest of the research community in our task exceeded all our expectations: the datasets were downloaded about 1,000 times, and 322 teams registered, of which 184 configured a virtual machine on our shared task cloud service TIRA, of which in turn 42 teams submitted a valid run. The best team achieved an accuracy of 0.822 on a balanced sample (hyperpartisan : not hyperpartisan) drawn from the manually labeled corpus; an ensemble of the submitted systems increased the accuracy by 0.048.