Husrev Sencar
2024
A Survey on Predicting the Factuality and the Bias of News Media
Preslav Nakov | Jisun An | Haewoon Kwak | Muhammad Arslan Manzoor | Zain Muhammad Mujahid | Husrev Sencar
Findings of the Association for Computational Linguistics: ACL 2024
The present level of proliferation of fake, biased, and propagandistic content online has made it impossible to fact-check every single suspicious claim or article, either manually or automatically. An increasing number of scholars are focusing on a coarser granularity, aiming to profile entire news outlets, which allows fast identification of potential “fake news” by checking the reliability of the source. Source factuality is also an important element of systems for automatic fact-checking and “fake news” detection, as they need to assess the reliability of the evidence they retrieve online. Political bias detection, which in the Western political landscape is about predicting left-center-right bias, is an equally important topic, which has experienced a similar shift toward profiling entire news outlets. Moreover, there is a clear connection between the two, as highly biased media are less likely to be factual; yet, the two problems have been addressed separately. In this survey, we review the state of the art on media profiling for factuality and bias, arguing for the need to model them jointly. We also shed light on some of the major challenges for modeling bias and factuality jointly. We further discuss interesting recent advances in using different information sources and modalities, which go beyond the text of the articles the target news outlet has published. Finally, we discuss current challenges and outline future research directions.
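The survey itself does not prescribe an architecture, but the joint modeling it argues for can be illustrated with a simple multi-task setup: a shared encoder of outlet-level features with separate heads for factuality and political bias. The sketch below is purely illustrative; the feature dimension, label sets (low/mixed/high factuality, left/center/right bias), and the unweighted sum of losses are assumptions, not details from the survey.

```python
# Illustrative multi-task profiler: one shared representation of a news
# outlet, two heads for factuality and political bias. All names, sizes,
# and label sets are hypothetical examples, not taken from the survey.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointMediaProfiler(nn.Module):
    def __init__(self, feature_dim=768, n_factuality=3, n_bias=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU())
        self.factuality_head = nn.Linear(256, n_factuality)  # low / mixed / high
        self.bias_head = nn.Linear(256, n_bias)              # left / center / right

    def forward(self, outlet_features):
        h = self.shared(outlet_features)
        return self.factuality_head(h), self.bias_head(h)

def joint_loss(model, outlet_features, factuality_labels, bias_labels):
    """Sum the two cross-entropy losses so both tasks shape the shared encoder."""
    fact_logits, bias_logits = model(outlet_features)
    return (F.cross_entropy(fact_logits, factuality_labels)
            + F.cross_entropy(bias_logits, bias_labels))
```

Sharing the encoder is one straightforward way to exploit the correlation the abstract points to (highly biased media being less likely to be factual), though other joint formulations are possible.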
2023
Impact of Adversarial Training on Robustness and Generalizability of Language Models
Enes Altinisik | Hassan Sajjad | Husrev Sencar | Safa Messaoud | Sanjay Chawla
Findings of the Association for Computational Linguistics: ACL 2023
Adversarial training is widely acknowledged as the most effective defense against adversarial attacks. However, it is also well established that achieving both robustness and generalization in adversarially trained models involves a trade-off. The goal of this work is to provide an in-depth comparison of different approaches to adversarial training in language models. Specifically, we study the effect of pre-training data augmentation, as well as of training-time input-space perturbations vs. embedding-space perturbations, on the robustness and generalization of transformer-based language models. Our findings suggest that better robustness can be achieved by pre-training data augmentation or by training with input-space perturbations. However, training with embedding-space perturbations significantly improves generalization. A linguistic correlation analysis of the neurons of the learned models reveals that the improved generalization is due to ‘more specialized’ neurons. To the best of our knowledge, this is the first work to carry out a deep qualitative analysis of different methods of generating adversarial examples in adversarial training of language models.
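As a rough illustration of the embedding-space alternative mentioned in the abstract, the sketch below applies a small PGD-style perturbation to the input embeddings of a Hugging Face-style classifier before the actual parameter update. The bound, step size, and number of inner steps are placeholder values, and this is a minimal sketch of the general technique, not the training procedure used in the paper.

```python
# Minimal sketch of embedding-space adversarial training for a
# transformer classifier (Hugging Face-style API assumed). Epsilon,
# alpha, and n_steps are illustrative, not the paper's settings.
import torch
import torch.nn.functional as F

def adv_embedding_step(model, input_ids, attention_mask, labels,
                       epsilon=1e-2, alpha=1e-3, n_steps=3):
    embeddings = model.get_input_embeddings()(input_ids)
    delta = torch.zeros_like(embeddings, requires_grad=True)

    # Inner loop: find a bounded perturbation of the embeddings that
    # increases the classification loss.
    for _ in range(n_steps):
        logits = model(inputs_embeds=embeddings.detach() + delta,
                       attention_mask=attention_mask).logits
        loss = F.cross_entropy(logits, labels)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = delta.detach().requires_grad_(True)

    # Outer step: backpropagate the loss on the perturbed embeddings
    # into the model parameters (optimizer.step() is left to the caller).
    logits = model(inputs_embeds=embeddings + delta.detach(),
                   attention_mask=attention_mask).logits
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    return loss.item()
```

Input-space perturbation, by contrast, modifies the discrete tokens themselves before embedding them; the comparison in the abstract is between these two places where the perturbation is injected.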