Li Wang


2024

Solving General Natural-Language-Description Optimization Problems with Large Language Models
Jihai Zhang | Wei Wang | Siyan Guo | Li Wang | Fangquan Lin | Cheng Yang | Wotao Yin
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)

Optimization problems seek the best solution to an objective under a set of constraints and have been widely investigated in real-world applications. Modeling and solving optimization problems in a specific domain typically require a combination of domain knowledge, mathematical skills, and programming ability, which makes them difficult for general users and even domain professionals to handle. In this paper, we propose a novel framework called OptLLM that augments LLMs with external solvers. Specifically, OptLLM accepts user queries in natural language, converts them into mathematical formulations and program code, and calls the solvers to compute the results for decision-making. In addition, OptLLM supports multi-round dialogues to gradually refine the modeling and solving of optimization problems. To illustrate the effectiveness of OptLLM, we provide tutorials on three typical optimization applications and conduct experiments on both prompt-based GPT models and a fine-tuned Qwen model using a large-scale self-developed optimization dataset. Experimental results show that OptLLM works with various LLMs, and that the fine-tuned model achieves an accuracy boost over the prompt-based models. Some features of the OptLLM framework have been available for trial since June 2023 (https://opt.alibabacloud.com/chat or https://opt.aliyun.com/chat).
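
For readers unfamiliar with the pipeline, the following is a minimal, hypothetical sketch of the workflow the abstract describes (natural-language query, LLM-generated formulation, external solver call). It is not the authors' implementation: call_llm is a placeholder for any chat-completion API, and scipy.optimize.linprog stands in for the external solver under the assumption that the generated formulation is a linear program.

from scipy.optimize import linprog
import json

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError

def solve_from_query(user_query: str):
    # 1. Ask the LLM to translate the natural-language request into a formulation
    #    (objective coefficients and inequality constraints), returned as JSON.
    formulation = call_llm(
        "Translate this optimization request into JSON with keys "
        "'c', 'A_ub', and 'b_ub':\n" + user_query
    )
    # 2. Parse the formulation (multi-round refinement and error handling omitted).
    spec = json.loads(formulation)
    # 3. Call the external solver on the generated model.
    result = linprog(c=spec["c"], A_ub=spec["A_ub"], b_ub=spec["b_ub"])
    # 4. Return the solver output for the decision-making step.
    return result.x, result.fun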

2023

Interventional Rationalization
Linan Yue | Qi Liu | Li Wang | Yanqing An | Yichao Du | Zhenya Huang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Selective rationalization improves the explainability of neural networks by selecting a subsequence of the input (i.e., rationales) to explain the prediction results. Although existing methods have achieved promising results, they still suffer from exploiting spurious correlations in the data (a.k.a. shortcuts) to compose rationales and make predictions. Inspired by causal theory, in this paper we develop an interventional rationalization method (Inter-RAT) to discover causal rationales. Specifically, we first analyse the causalities among the input, rationales, and results with a structural causal model. Then, by identifying the confounder in these causalities, we uncover the spurious correlations between the input and rationales, and between the rationales and results. Next, based on the backdoor adjustment, we propose a causal intervention method to remove the spurious correlations between the input and rationales. Further, we discuss why spurious correlations between the selected rationales and results exist by analysing the limitations of the sparsity constraint in rationalization, and employ the causal intervention method to remove these correlations as well. Extensive experimental results on three real-world datasets clearly validate the effectiveness of our proposed method. The source code of Inter-RAT is available at https://github.com/yuelinan/Codes-of-Inter-RAT.
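
For reference, the backdoor adjustment the abstract invokes takes the standard form (notation here is generic rather than the paper's own: X the input/rationale variable being intervened on, Z the confounder, Y the outcome):

P(Y \mid \mathrm{do}(X)) = \sum_{z} P(Y \mid X, Z = z)\, P(Z = z)

Intervening on X via do(X) severs the confounder's influence on X, so averaging over the confounder's marginal distribution removes the spurious correlation it induces.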

2013

How Noisy Social Media Text, How Diffrnt Social Media Sources?
Timothy Baldwin | Paul Cook | Marco Lui | Andrew MacKinlay | Li Wang
Proceedings of the Sixth International Joint Conference on Natural Language Processing

Recovering Casing and Punctuation using Conditional Random Fields
Marco Lui | Li Wang
Proceedings of the Australasian Language Technology Association Workshop 2013 (ALTA 2013)

2012

The Utility of Discourse Structure in Identifying Resolved Threads in Technical User Forums
Li Wang | Su Nam Kim | Timothy Baldwin
Proceedings of COLING 2012

2011

Predicting Thread Linking Structure by Lexical Chaining
Li Wang | Diana McCarthy | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2011

Predicting Thread Discourse Structure over Technical Web Forums
Li Wang | Marco Lui | Su Nam Kim | Joakim Nivre | Timothy Baldwin
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

Thread-level Analysis over Technical User Forum Data
Li Wang | Su Nam Kim | Timothy Baldwin
Proceedings of the Australasian Language Technology Association Workshop 2010

Intelligent Linux Information Access by Data Mining: the ILIAD Project
Timothy Baldwin | David Martinez | Richard Penman | Su Nam Kim | Marco Lui | Li Wang | Andrew MacKinlay
Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics in a World of Social Media

Tagging and Linking Web Forum Posts
Su Nam Kim | Li Wang | Timothy Baldwin
Proceedings of the Fourteenth Conference on Computational Natural Language Learning