Tianhui Zhang


2025

BRIGHTER: BRIdging the Gap in Human-Annotated Textual Emotion Recognition Datasets for 28 Languages
Shamsuddeen Hassan Muhammad | Nedjma Ousidhoum | Idris Abdulmumin | Jan Philip Wahle | Terry Ruas | Meriem Beloucif | Christine de Kock | Nirmal Surange | Daniela Teodorescu | Ibrahim Said Ahmad | David Ifeoluwa Adelani | Alham Fikri Aji | Felermino D. M. A. Ali | Ilseyar Alimova | Vladimir Araujo | Nikolay Babakov | Naomi Baes | Ana-Maria Bucur | Andiswa Bukula | Guanqun Cao | Rodrigo Tufiño | Rendi Chevi | Chiamaka Ijeoma Chukwuneke | Alexandra Ciobotaru | Daryna Dementieva | Murja Sani Gadanya | Robert Geislinger | Bela Gipp | Oumaima Hourrane | Oana Ignat | Falalu Ibrahim Lawan | Rooweither Mabuya | Rahmad Mahendra | Vukosi Marivate | Alexander Panchenko | Andrew Piper | Charles Henrique Porto Ferreira | Vitaly Protasov | Samuel Rutunda | Manish Shrivastava | Aura Cristina Udrea | Lilian Diana Awuor Wanzare | Sophie Wu | Florian Valentin Wunderlich | Hanif Muhammad Zhafran | Tianhui Zhang | Yi Zhou | Saif M. Mohammad
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

People worldwide use language in subtle and complex ways to express emotions. Although emotion recognition (an umbrella term for several NLP tasks) impacts various applications within NLP and beyond, most work in this area has focused on high-resource languages. This has led to significant disparities in research efforts and proposed solutions, particularly for under-resourced languages, which often lack high-quality annotated datasets. In this paper, we present BRIGHTER, a collection of multi-labeled, emotion-annotated datasets in 28 different languages and across several domains. BRIGHTER primarily covers low-resource languages from Africa, Asia, Eastern Europe, and Latin America, with instances labeled by fluent speakers. We highlight the challenges related to the data collection and annotation processes, then report experimental results for monolingual and cross-lingual multi-label emotion identification, as well as emotion intensity recognition. We analyse the variability in performance across languages and text domains, both with and without the use of LLMs, and show that the BRIGHTER datasets represent a meaningful step towards addressing the gap in text-based emotion recognition.
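
To make the task concrete, here is a minimal sketch of how multi-label emotion identification is commonly scored with macro-F1. The label set, the toy labels, and the use of scikit-learn are illustrative assumptions, not BRIGHTER's actual evaluation code.

import numpy as np
from sklearn.metrics import f1_score

# Hypothetical label set; any fixed set of emotion classes works the same way.
EMOTIONS = ["anger", "fear", "joy", "sadness", "surprise"]

# Each row is one text; a 1 marks an emotion assigned by annotators (y_true)
# or predicted by a model (y_pred). Values are made up for illustration.
y_true = np.array([[1, 0, 0, 1, 0],
                   [0, 0, 1, 0, 1],
                   [0, 1, 0, 1, 0]])
y_pred = np.array([[1, 0, 0, 0, 0],
                   [0, 0, 1, 0, 1],
                   [0, 1, 0, 0, 0]])

# Macro-F1 averages the per-emotion F1 scores, so rare emotions count as
# much as frequent ones, which matters for skewed emotion distributions.
print(f"macro-F1: {f1_score(y_true, y_pred, average='macro'):.3f}")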

Evaluating the Evaluation of Diversity in Commonsense Generation
Tianhui Zhang | Bei Peng | Danushka Bollegala
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In commonsense generation, given a set of input concepts, a model must generate a response that is not only commonsense-bearing but also captures multiple diverse viewpoints. Numerous evaluation metrics based on form- and content-level overlap have been proposed in prior work for evaluating the diversity of a commonsense generation model. However, it remains unclear which metrics are best suited for evaluating diversity in commonsense generation. To address this gap, we conduct a systematic meta-evaluation of diversity metrics for commonsense generation. We find that form-based diversity metrics tend to consistently overestimate the diversity of sentence sets: even randomly generated sentences are assigned overly high diversity scores. We then use a Large Language Model (LLM) to create a novel dataset annotated for the diversity of sentences generated for a commonsense generation task, and use it to conduct a meta-evaluation of the existing diversity evaluation metrics. Our experimental results show that content-based diversity evaluation metrics consistently outperform their form-based counterparts, showing high correlations with the LLM-based ratings. We recommend that future work on commonsense generation use content-based metrics for evaluating the diversity of its outputs.
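
The distinction between the two metric families can be illustrated with a small sketch: a form-based metric (distinct-2, over surface n-grams) versus a content-based one (mean pairwise cosine distance over sentence embeddings). The embedding model name and example sentences are assumptions for illustration, not the paper's exact setup.

from itertools import combinations
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def distinct_n(sentences, n=2):
    """Form-based: fraction of unique n-grams across the sentence set."""
    ngrams = [tuple(toks[i:i + n])
              for s in sentences
              for toks in [s.lower().split()]
              for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def embedding_diversity(sentences, model):
    """Content-based: mean pairwise cosine distance between embeddings."""
    emb = model.encode(sentences)
    sims = [cosine_similarity([emb[i]], [emb[j]])[0, 0]
            for i, j in combinations(range(len(sentences)), 2)]
    return 1.0 - sum(sims) / len(sims)

sentences = [
    "The dog chased the ball across the park.",
    "A dog ran after a ball in the park.",
    "Stock prices fell sharply after the announcement.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
print("distinct-2 (form):", round(distinct_n(sentences), 3))
print("embedding diversity (content):", round(embedding_diversity(sentences, model), 3))

A form-based score rewards the first two near-paraphrases for their differing words, while a content-based score recognises that only the third sentence adds a genuinely different viewpoint.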

2024

Improving Diversity of Commonsense Generation by Large Language Models via In-Context Learning
Tianhui Zhang | Bei Peng | Danushka Bollegala
Findings of the Association for Computational Linguistics: EMNLP 2024

Generative Commonsense Reasoning (GCR) requires a model to reason about a situation using commonsense knowledge while generating coherent sentences. Although the quality of the generated sentences is crucial, the diversity of the generations is equally important because it reflects the model’s ability to use a range of commonsense knowledge facts. Large Language Models (LLMs) have shown proficiency in enhancing generation quality across various tasks through in-context learning (ICL) with given examples, without the need for any fine-tuning. However, the diversity of LLM outputs has not been systematically studied before. To address this, we propose a simple method that diversifies LLM generations while preserving their quality. Experimental results on three benchmark GCR datasets show that our method achieves an ideal balance between quality and diversity. Moreover, the sentences generated by our proposed method can be used as training data to improve diversity in existing commonsense generators.
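
For readers unfamiliar with ICL for GCR, the sketch below shows the general prompt shape: a few concept-to-sentence demonstrations followed by a request for several distinct outputs. This is a generic, hypothetical ICL prompt, not the specific diversification method proposed in the paper.

def build_icl_prompt(examples, concepts, n_outputs=3):
    """Format few-shot demonstrations, then request diverse generations."""
    lines = ["Write a plausible everyday sentence using all given concepts."]
    for ex_concepts, ex_sentence in examples:
        lines.append(f"Concepts: {', '.join(ex_concepts)}")
        lines.append(f"Sentence: {ex_sentence}")
    lines.append(f"Concepts: {', '.join(concepts)}")
    lines.append(f"Write {n_outputs} sentences that differ in scenario and wording:")
    return "\n".join(lines)

# Hypothetical demonstrations in the style of GCR benchmarks such as CommonGen.
demos = [
    (["dog", "frisbee", "catch"], "The dog leapt up to catch the frisbee."),
    (["chef", "knife", "onion"], "The chef sliced the onion with a sharp knife."),
]
print(build_icl_prompt(demos, ["student", "library", "laptop"]))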

2023

Learning to Predict Concept Ordering for Common Sense Generation
Tianhui Zhang | Danushka Bollegala | Bei Peng
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)