Lexing Xie
2025
MoVa: Towards Generalizable Classification of Human Morals and Values
Ziyu Chen | Junfei Sun | Chenxi Li | Tuan Dung Nguyen | Jing Yao | Xiaoyuan Yi | Xing Xie | Chenhao Tan | Lexing Xie
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Identifying human morals and values embedded in language is essential to empirical studies of communication. However, researchers often face substantial difficulty navigating the diversity of theoretical frameworks and data available for their analysis. Here, we contribute MoVa, a well-documented suite of resources for generalizable classification of human morals and values, consisting of (1) 16 labeled datasets and benchmarking results from four theoretically-grounded frameworks; (2) a lightweight LLM prompting strategy that outperforms fine-tuned models across multiple domains and frameworks; and (3) a new application that helps evaluate psychological surveys. In practice, we specifically recommend a classification strategy, all@once, that scores all related concepts simultaneously, resembling the well-known multi-label classifier chain. The data and methods in MoVa can facilitate many fine-grained interpretations of human and machine communication, with potential implications for the alignment of machine behavior.
2020
SupMMD: A Sentence Importance Model for Extractive Summarization using Maximum Mean Discrepancy
Umanga Bista | Alexander Mathews | Aditya Menon | Lexing Xie
Findings of the Association for Computational Linguistics: EMNLP 2020
Most work on multi-document summarization has focused on generic summarization of the information present in each individual document set. However, the under-explored setting of update summarization, where the goal is to identify the new information present in each set, is of equal practical interest (e.g., presenting readers with updates on an evolving news topic). In this work, we present SupMMD, a novel technique for generic and update summarization based on the maximum mean discrepancy from kernel two-sample testing. SupMMD combines supervised learning for salience with unsupervised learning for coverage and diversity. Further, we adapt multiple kernel learning to make use of similarity across multiple information sources (e.g., text features and knowledge-based concepts). We show the efficacy of SupMMD in both generic and update summarization tasks by meeting or exceeding the current state-of-the-art on the DUC-2004 and TAC-2009 datasets.