Daniel McDuff
2023
Logical Transformers: Infusing Logical Structures into Pre-Trained Language Models
Borui Wang | Qiuyuan Huang | Budhaditya Deb | Aaron Halfaker | Liqun Shao | Daniel McDuff | Ahmed Hassan Awadallah | Dragomir Radev | Jianfeng Gao
Findings of the Association for Computational Linguistics: ACL 2023
Natural language contains rich logical structures and logical information, and correctly detecting and accurately understanding the logical structures underlying natural language texts is crucial for NLP models’ performance on many important NLU and NLG tasks. Existing pre-trained language models based on the transformer architecture mostly adopt a classical design for constructing their input embeddings that ignores the logical structures underlying natural language texts, thus limiting their ability to capture and encode key logical information in the input sequences. To overcome such limitations, in this paper we first propose a novel approach to construct logic-aware input embeddings for transformer language models through a combination of logic detection, logic mapping and hierarchical logical projections, and then develop a corresponding new modeling paradigm that can upgrade existing transformer language models into logical transformers to boost their performance on different NLU and NLG tasks. Our empirical experiments on four important and challenging NLU and NLG tasks demonstrate that our proposed logical transformer language models can achieve superior performance over their baseline transformer models through a deeper understanding of the logical structures of texts.
2021
NICE: Neural Image Commenting with Empathy
Kezhen Chen | Qiuyuan Huang | Daniel McDuff | Xiang Gao | Hamid Palangi | Jianfeng Wang | Kenneth Forbus | Jianfeng Gao
Findings of the Association for Computational Linguistics: EMNLP 2021
Emotion and empathy are examples of human qualities lacking in many human-machine interactions. The goal of our work is to generate engaging dialogue grounded in a user-shared image with increased emotion and empathy while minimizing socially inappropriate or offensive outputs. We release the Neural Image Commenting with Empathy (NICE) dataset consisting of almost two million images and the corresponding human-generated comments, a set of human annotations, and baseline performance on a range of models. Instead of relying on manually labeled emotions, we also use automatically generated linguistic representations as a source of weakly supervised labels. Based on these annotations, we define two different tasks for the NICE dataset. Then, we propose a novel pre-training model - Modeling Affect Generation for Image Comments (MAGIC) - which aims to generate comments for images, conditioned on linguistic representations that capture style and affect, and to help generate more empathetic, emotional, engaging and socially appropriate comments. Using this model, we achieve state-of-the-art performance on one of our NICE tasks. The experiments show that the approach can generate more human-like and engaging image comments.