Shaurya Rohatgi
2023
The ACL OCL Corpus: Advancing Open Science in Computational Linguistics
Shaurya Rohatgi | Yanxia Qin | Benjamin Aw | Niranjana Unnithan | Min-Yen Kan
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
We present ACL OCL, a scholarly corpus derived from the ACL Anthology to assist open scientific research in the Computational Linguistics domain. Integrating and enhancing the previous versions of the ACL Anthology, the ACL OCL contributes metadata, PDF files, citation graphs, and additional structured full texts with sections, figures, and links to a large knowledge resource (Semantic Scholar). The ACL OCL spans seven decades, containing 73K papers and 210K figures. We spotlight how the ACL OCL can be applied to observe trends in computational linguistics. By detecting paper topics with a supervised neural model, we note that interest in “Syntax: Tagging, Chunking and Parsing” is waning and “Natural Language Generation” is resurging. Our dataset is available from HuggingFace (https://huggingface.co/datasets/WINGNUS/ACL-OCL).
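Since the corpus is hosted on the HuggingFace Hub, it can presumably be loaded with the `datasets` library. The sketch below is illustrative only: the dataset ID comes from the abstract, but the split name and the idea of inspecting columns this way are assumptions, not documented details.

```python
from datasets import load_dataset

# Load the ACL OCL corpus from the HuggingFace Hub.
# The dataset ID is taken from the abstract; the split name
# ("train") is an assumption for illustration.
ocl = load_dataset("WINGNUS/ACL-OCL", split="train")

print(ocl)     # inspect the available columns and row count
print(ocl[0])  # peek at one paper's metadata and full text
```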
Fighting Fire with Fire: The Dual Role of LLMs in Crafting and Detecting Elusive Disinformation
Jason Lucas | Adaku Uchendu | Michiharu Yamashita | Jooyoung Lee | Shaurya Rohatgi | Dongwon Lee
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
The recent ubiquity and disruptive impact of large language models (LLMs) have raised concerns about their potential for misuse (i.e., generating large-scale harmful and misleading content). To combat this emerging risk of LLMs, we propose a novel “Fighting Fire with Fire” (F3) strategy that harnesses modern LLMs’ generative and emergent reasoning capabilities to counter human-written and LLM-generated disinformation. First, we leverage GPT-3.5-turbo to synthesize authentic and deceptive LLM-generated content through paraphrase-based and perturbation-based prefix-style prompts, respectively. Second, we apply zero-shot in-context semantic reasoning techniques with cloze-style prompts to discern genuine from deceptive posts and news articles. In our extensive experiments, we observe GPT-3.5-turbo’s zero-shot superiority for both in-distribution and out-of-distribution datasets, where GPT-3.5-turbo consistently achieved 68-72% accuracy, unlike the decline observed in previous customized and fine-tuned disinformation detectors. Our codebase and dataset are available at https://github.com/mickeymst/F3.
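As a minimal sketch of what a zero-shot, cloze-style detection prompt might look like with GPT-3.5-turbo: the exact prompt wording, label set, and post-processing are not given in the abstract, so everything below is an illustrative assumption rather than the paper's method (the authors' actual code is at the repository linked above).

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def detect_disinformation(text: str) -> str:
    """Zero-shot cloze-style check: ask the model to fill in a
    veracity label. The prompt wording and labels are illustrative
    assumptions, not the paper's exact prompts."""
    prompt = (
        "Read the following post and fill in the blank with exactly "
        "one word, 'genuine' or 'deceptive'.\n\n"
        f"Post: {text}\n\n"
        "This post is ____."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for classification
    )
    return response.choices[0].message.content.strip().lower()

print(detect_disinformation(
    "Scientists confirm drinking seawater cures all known illnesses."
))
```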