Abstract
Prior research on affective event classification showed that exploiting weakly labeled data for training can improve model performance. In this work, we propose a simpler and more effective approach for generating training data by automatically acquiring and labeling affective events with Multiple View Co-prompting, which leverages two language model prompts that provide independent views of an event. The approach starts with a modest amount of gold data and prompts pre-trained language models to generate new events. Next, information about the probable affective polarity of each event is collected from two complementary language model prompts and jointly used to assign polarity labels. Experimental results on two datasets show that the newly acquired events improve a state-of-the-art affective event classifier. We also present analyses which show that using multiple views produces polarity labels of higher quality than either view on its own.
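As a rough illustration of the co-prompting pattern described in the abstract (querying a language model with two different prompts about the same event and combining their polarity signals), here is a minimal sketch. The prompt templates, the scoring rule, and the agreement-based labeling are hypothetical stand-ins, not the authors' actual prompts or combination method; the sketch only shows the general two-view idea, using GPT-2 via Hugging Face transformers.

```python
# Minimal sketch of two-view polarity scoring with a causal LM.
# NOT the authors' implementation: the prompt templates and the
# agreement rule below are hypothetical illustrations only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of token log-probabilities of `continuation` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    offset = prompt_ids.shape[1]
    total = 0.0
    for i in range(cont_ids.shape[1]):
        # Logits at position p predict the token at position p + 1.
        total += log_probs[0, offset + i - 1, cont_ids[0, i]].item()
    return total

def polarity_scores(event: str) -> dict:
    """Score an event as positive vs. negative from two prompt 'views'."""
    # View 1: how the experiencer feels (hypothetical template).
    view1 = f"{event}. I felt very"
    # View 2: how the event is typically judged (hypothetical template).
    view2 = f"{event}. Most people would say that is"
    s1 = continuation_logprob(view1, " happy") - continuation_logprob(view1, " sad")
    s2 = continuation_logprob(view2, " good") - continuation_logprob(view2, " bad")
    return {"view1": s1, "view2": s2}

def assign_label(event: str, margin: float = 0.0) -> str:
    """Label an event only when both views agree (a simple proxy for
    jointly using the two views); otherwise abstain."""
    s = polarity_scores(event)
    if s["view1"] > margin and s["view2"] > margin:
        return "positive"
    if s["view1"] < -margin and s["view2"] < -margin:
        return "negative"
    return "abstain"

print(assign_label("I broke my leg"))
```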
- Anthology ID: 2023.findings-acl.199
- Volume: Findings of the Association for Computational Linguistics: ACL 2023
- Month: July
- Year: 2023
- Address: Toronto, Canada
- Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 3189–3201
- URL: https://aclanthology.org/2023.findings-acl.199
- DOI: 10.18653/v1/2023.findings-acl.199
- Cite (ACL): Yuan Zhuang and Ellen Riloff. 2023. Eliciting Affective Events from Language Models by Multiple View Co-prompting. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3189–3201, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal): Eliciting Affective Events from Language Models by Multiple View Co-prompting (Zhuang & Riloff, Findings 2023)
- PDF: https://preview.aclanthology.org/add_acl24_videos/2023.findings-acl.199.pdf