Weijie Xu


2024

HR-MultiWOZ: A Task Oriented Dialogue (TOD) Dataset for HR LLM Agent
Weijie Xu | Zicheng Huang | Wenxiang Hu | Xi Fang | Rajesh Cherukuri | Naumaan Nayyar | Lorenzo Malandri | Srinivasan Sengamedu
Proceedings of the First Workshop on Natural Language Processing for Human Resources (NLP4HR 2024)

Recent advancements in Large Language Models (LLMs) have been reshaping Natural Language Processing (NLP) tasks in several domains. Their use in the field of Human Resources (HR) still has room for expansion and could benefit several time-consuming tasks; time-off submissions, medical claim filing, and access requests are noteworthy examples, but by no means the only ones. However, such developments must grapple with the pivotal challenge of constructing a high-quality training dataset. On one hand, most conversation datasets address problems for customers, not employees. On the other hand, gathering conversations with HR could raise privacy concerns. To address this, we introduce HR-MultiWOZ, a fully labeled dataset of 550 conversations spanning 10 HR domains. Our work makes the following contributions: (1) It is the first labeled, open-sourced conversation dataset in the HR domain for NLP research. (2) It provides a detailed recipe for the data generation procedure, along with data analysis and human evaluations; the data generation pipeline is transferable and can be easily adapted to produce labeled conversation data in other domains. (3) The proposed data-collection pipeline is mostly based on LLMs with minimal human involvement for annotation, which is time- and cost-efficient.
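As a rough illustration of the kind of LLM-driven, schema-grounded generation the abstract describes, the sketch below prompts a model to emit a slot-labeled HR conversation and parses it into turn-level annotations. The schema, prompt wording, and `llm_generate` helper are assumptions for illustration, not the released pipeline.

```python
# Illustrative sketch (not the authors' released pipeline): generate a
# schema-grounded, slot-labeled task-oriented dialogue with an LLM, then
# parse it into turn-level annotations.
import json

SCHEMA = {
    "domain": "time_off",
    "slots": ["leave_type", "start_date", "end_date", "approver"],
}

def build_prompt(schema: dict) -> str:
    return (
        "Simulate an HR assistant helping an employee with a "
        f"{schema['domain']} request. After each user turn, record the "
        f"slot values filled so far for {schema['slots']}. Return the "
        'conversation as a JSON list of {"speaker": ..., "text": ..., '
        '"state": {...}} objects.'
    )

def llm_generate(prompt: str) -> str:
    """Hypothetical call to whatever LLM completion endpoint is available."""
    raise NotImplementedError

def synthesize_dialogue(schema: dict) -> list[dict]:
    raw = llm_generate(build_prompt(schema))
    turns = json.loads(raw)  # each turn carries its own dialogue state
    for turn in turns:
        assert set(turn) >= {"speaker", "text", "state"}
    return turns
```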

Synthesizing Conversations from Unlabeled Documents using Automatic Response Segmentation
Fanyou Wu | Weijie Xu | Chandan Reddy | Srinivasan Sengamedu
Findings of the Association for Computational Linguistics ACL 2024

In this study, we tackle the challenge of inadequate and costly training data that has hindered the development of conversational question answering (ConvQA) systems. Enterprises have large corpora of diverse internal documents, and rather than relying on a search engine, a more compelling way for people to make sense of these documents is through a dialogue system. In this paper, we propose a robust dialogue synthesis method: we learn the segmentation of data for the dialogue task instead of segmenting at sentence boundaries. The synthetic dataset generated by our proposed method achieves superior quality compared to WikiDialog, as assessed through machine and human evaluations. By employing our inpainted data for ConvQA retrieval-system pre-training, we observe a notable improvement in performance on the OR-QuAC benchmarks.
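A minimal sketch of the document-to-dialogue idea, assuming an answer segmenter and a question "inpainting" model are available; `segment_document` and `inpaint_question` are hypothetical placeholders, and the paper's learned segmentation is not reproduced here.

```python
# Sketch: turn an unlabeled document into a synthetic (question, answer)
# dialogue by treating each predicted segment as a response and asking a
# model to write the question that would have elicited it.
from typing import List, Tuple

def segment_document(doc: str) -> List[str]:
    """Hypothetical learned segmenter; this naive fallback splits on blank lines."""
    return [p.strip() for p in doc.split("\n\n") if p.strip()]

def inpaint_question(history: List[Tuple[str, str]], answer: str) -> str:
    """Hypothetical LLM call: write the question for which `answer` is a
    natural response, given the dialogue so far."""
    raise NotImplementedError

def document_to_dialogue(doc: str) -> List[Tuple[str, str]]:
    dialogue: List[Tuple[str, str]] = []
    for answer in segment_document(doc):
        question = inpaint_question(dialogue, answer)
        dialogue.append((question, answer))
    return dialogue
```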

Syntactic dependency length shaped by strategic memory allocation
Weijie Xu | Richard Futrell
Proceedings of the 6th Workshop on Research in Computational Linguistic Typology and Multilingual NLP

Human processing of nonlocal syntactic dependencies requires the engagement of limited working memory for encoding, maintenance, and retrieval. This process creates an evolutionary pressure for language to be structured in a way that keeps the subparts of a dependency closer to each other, an efficiency principle termed dependency locality. The current study proposes that such a dependency locality pressure can be modulated by the surprisal of the antecedent, defined as the first part of a dependency, due to strategic allocation of working memory. In particular, antecedents carrying novel and unpredictable information are prioritized for memory encoding, receive more robust representations against memory interference and decay, and are thus better able to sustain longer dependency lengths. We examine this claim by analyzing dependency corpora of 11 languages, with word surprisal estimated using the GPT-3 language model. In support of our hypothesis, we find evidence for a positive correlation between dependency length and antecedent surprisal in most of the languages in our analyses. A closer look at dependencies with core arguments shows that this correlation consistently holds for subject relations but not for object relations.
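One way to picture the corpus analysis: pair each dependency's length with the surprisal of its linearly earlier element and test for a positive correlation. The sketch below is illustrative only; `word_surprisals` is a hypothetical wrapper around a causal language model, and the paper's statistical controls are omitted.

```python
# Illustrative analysis sketch (not the authors' exact pipeline): for each
# dependency in a CoNLL-U treebank, record its length in words and the
# surprisal of the earlier element of the head-dependent pair, then test
# for a positive rank correlation.
import conllu
from scipy.stats import spearmanr

def word_surprisals(words: list[str]) -> list[float]:
    """Hypothetical: per-word surprisal (-log p) from an autoregressive LM."""
    raise NotImplementedError

def collect_pairs(conllu_text: str):
    lengths, surprisals = [], []
    for sent in conllu.parse(conllu_text):
        words = [tok["form"] for tok in sent if isinstance(tok["id"], int)]
        surp = word_surprisals(words)
        for tok in sent:
            if not isinstance(tok["id"], int) or tok["head"] in (None, 0):
                continue
            i, j = sorted((tok["id"], tok["head"]))
            lengths.append(j - i)           # dependency length in words
            surprisals.append(surp[i - 1])  # surprisal of the earlier element
    return lengths, surprisals

# rho, p = spearmanr(*collect_pairs(open("treebank.conllu").read()))
```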

2023

vONTSS: vMF based semi-supervised neural topic modeling with optimal transport
Weijie Xu | Xiaoyu Jiang | Srinivasan Sengamedu Hanumantha Rao | Francis Iannacci | Jinjin Zhao
Findings of the Association for Computational Linguistics: ACL 2023

Recently, Neural Topic Models (NTMs), inspired by variational autoencoders, have attracted considerable research interest; however, these methods have limited applications in the real world due to the challenge of incorporating human knowledge. This work presents a semi-supervised neural topic modeling method, vONTSS, which uses von Mises-Fisher (vMF)-based variational autoencoders and optimal transport. When a few keywords per topic are provided, vONTSS in the semi-supervised setting generates potential topics and optimizes topic-keyword quality and topic classification. Experiments show that vONTSS outperforms existing semi-supervised topic modeling methods in classification accuracy and diversity. vONTSS also supports unsupervised topic modeling. Quantitative and qualitative experiments show that vONTSS in the unsupervised setting outperforms recent NTMs on multiple aspects: it discovers highly clustered and coherent topics on benchmark datasets. It is also much faster than the state-of-the-art weakly supervised text classification method while achieving similar classification performance. We further prove the equivalence of the optimal transport loss and the cross-entropy loss at the global minimum.
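For readers unfamiliar with the optimal-transport ingredient, the sketch below computes a generic entropy-regularized (Sinkhorn) transport cost between a topic distribution and a keyword-derived target, given a cost matrix of embedding distances. It is a textbook illustration under assumed inputs, not the paper's loss implementation.

```python
# Generic Sinkhorn transport cost between two histograms; all inputs here
# are illustrative stand-ins for topic/keyword quantities.
import numpy as np

def sinkhorn_cost(a, b, M, reg=0.1, n_iters=200):
    """Entropy-regularized OT cost between histograms a, b with cost matrix M."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)   # transport plan
    return float((P * M).sum())

# a: topic proportions for a document, b: uniform mass over seed keywords,
# M[i, j]: distance between topic i's embedding and keyword j's embedding.
```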

DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based LLM
Weijie Xu | Wenxiang Hu | Fanyou Wu | Srinivasan Sengamedu
Findings of the Association for Computational Linguistics: EMNLP 2023

In the burgeoning field of natural language processing, Neural Topic Models (NTMs) and Large Language Models (LLMs) have emerged as areas of significant research interest. Despite this, NTMs primarily utilize contextual embeddings from LLMs, which are not optimal for clustering or capable of topic generation. Our study addresses this gap by introducing a novel framework named Diffusion-Enhanced Topic Modeling using Encoder-Decoder-based LLMs (DeTiME). DeTiME leverages encoder-decoder-based LLMs to produce highly clusterable embeddings that can generate topics exhibiting both superior clusterability and enhanced semantic coherence compared to existing methods. Additionally, by exploiting the power of diffusion, our framework also provides the capability to generate content relevant to the identified topics. This dual functionality allows users to efficiently produce highly clustered topics and related content simultaneously. DeTiME’s potential extends to generating clustered embeddings as well. Notably, our proposed framework proves efficient to train and exhibits high adaptability, demonstrating its potential for a wide array of applications.
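To make the "clusterable embeddings from an encoder-decoder LLM" idea concrete, here is a hedged sketch that mean-pools encoder states from a T5-style checkpoint and clusters documents with k-means; the checkpoint, pooling strategy, and cluster count are assumptions, and the diffusion-based generation component is not shown.

```python
# Sketch: document embeddings from the encoder of an encoder-decoder LLM,
# grouped into candidate topics with k-means. Checkpoint and pooling are
# illustrative choices, not the paper's configuration.
import torch
from transformers import AutoTokenizer, T5EncoderModel
from sklearn.cluster import KMeans

tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
enc = T5EncoderModel.from_pretrained("google/flan-t5-base")

def embed(docs: list[str]) -> torch.Tensor:
    batch = tok(docs, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # masked mean-pool

docs = ["refund policy for travel", "how to reset a password"]
topic_ids = KMeans(n_clusters=2, n_init=10).fit_predict(embed(docs).numpy())
```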

The Linearity of the Effect of Surprisal on Reading Times across Languages
Weijie Xu | Jason Chon | Tianran Liu | Richard Futrell
Findings of the Association for Computational Linguistics: EMNLP 2023

In psycholinguistics, surprisal theory posits that the amount of online processing effort expended by a human comprehender per word positively correlates with the surprisal of that word given its preceding context. Beyond this overall correlation, and more importantly, the specific quantitative form that processing effort takes as a function of surprisal offers insight into the underlying cognitive mechanisms of language processing. Focusing on English, previous studies have examined the linearity of the effect of surprisal on reading times. Here, we extend the investigation by examining eyetracking corpora of seven languages: Danish, Dutch, English, German, Japanese, Mandarin, and Russian. We find evidence for superlinearity in some languages, but the results are highly sensitive to which language model is used to estimate surprisal.
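As a simplified picture of how linearity can be probed (not the paper's exact analysis), the sketch below compares a linear regression of reading times on surprisal against one with an added quadratic term and reports both models' AIC; variable names and the quadratic choice are assumptions for illustration.

```python
# Hedged diagnostic sketch: does adding a nonlinear (quadratic) surprisal
# term improve the fit to reading times over a purely linear effect?
import numpy as np
import statsmodels.api as sm

def compare_linear_vs_superlinear(surprisal: np.ndarray, rt: np.ndarray) -> dict:
    X_lin = sm.add_constant(surprisal)
    X_sup = sm.add_constant(np.column_stack([surprisal, surprisal ** 2]))
    fit_lin = sm.OLS(rt, X_lin).fit()
    fit_sup = sm.OLS(rt, X_sup).fit()
    # A lower AIC for the quadratic model is (weak) evidence of superlinearity.
    return {"aic_linear": fit_lin.aic, "aic_superlinear": fit_sup.aic}
```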