2025
TeleAI at SemEval-2025 Task 11: Bridging the Gap in Text-Based Emotion Detection with Prompt Engineering and Data Augmentation
Shiquan Wang | Mengxiang Li | Shengxiong Peng | Fang Yu | Zhongjiang He | Shuangyong Song | Yongxiang Li
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper presents the approach we employed in SemEval-2025 Task 11: “Bridging the Gap in Text-Based Emotion Detection.” The core objective of this shared task is emotion perception, focusing on determining the emotion the speaker is likely expressing when uttering a sentence or short text fragment, as perceived by the majority. In this task, we applied a prompt optimization strategy based on in-context learning, combined with data augmentation and ensemble voting techniques, to significantly enhance the model’s performance. Through these optimizations, the model demonstrated improved accuracy and stability in emotion detection. Ultimately, in both Track A (Multi-label Emotion Detection) and Track B (Emotion Intensity Prediction), our approach achieved top-3 rankings across multiple languages, showcasing the effectiveness and cross-lingual adaptability of our method.
TeleAI at SemEval-2025 Task 8: Advancing Table Reasoning Framework with Large Language Models
Sishi Xiong | Mengxiang Li | Dakai Wang | Yu Zhao | Jie Zhang | Changzai Pan | Haowei He | Xiangyu Li | Wenhan Chang | Zhongjiang He | Shuangyong Song | Yongxiang Li
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper presents our system developed for SemEval-2025 Task 8, which focuses on table question answering (TQA). TQA tasks are challenging because of the characteristics of real-world tabular data, such as large table size, incomplete column semantics, and entity ambiguity. To address these issues, we propose a large language model (LLM)-powered, programming-based framework named Flow-of-Table-Reasoning. We introduce a table schema that integrates verbalized structure and semantics for query decomposition and programming, enabling a holistic understanding of tables and the ability to process large tables. We design a multi-step schema-linking plan to derive a focused table schema that retains only the information relevant to the query, aiming to eliminate ambiguity and reduce hallucinations. Furthermore, we incorporate the reasoning workflow into an iterative thinking architecture, allowing incremental cycles of thinking, reasoning, and reflection. Our system achieves first place on both the TQA and Lite TQA subtasks.
2024
Sentence Segmentation and Punctuation for Ancient Books Based on Supervised In-context Training
Shiquan Wang | Weiwei Fu | Mengxiang Li | Zhongjiang He | Yongxiang Li | Ruiyu Fang | Li Guan | Shuangyong Song
Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024
This paper describes the participation of team “TeleAI” in the third International Ancient Chinese Language Information Processing Evaluation (EvalHan24). The competition comprises a joint task of sentence segmentation and punctuation, categorized into open and closed tracks based on the models and data used. In the final evaluation, our system achieved significantly better results than the baseline. Specifically, in the closed-track sentence segmentation task, we obtained an F1 score of 0.8885, while in the sentence punctuation task, we achieved an F1 score of 0.7129.