Seemab Latif
Classifying argumentative fallacies in political discourse is challenging due to their subtle, persuasive nature across text and speech. In our MM-ArgFallacy Shared Task submission, Team NUST investigates uni-modal (text/audio) and multi-modal (text+audio) setups using pretrained models—RoBERTa for text and Whisper for audio. To tackle severe class imbalance, we introduce Prompt-Guided Few-Shot Augmentation (PG-FSA) to generate synthetic samples for underrepresented fallacies. We further propose a late fusion architecture combining linguistic and paralinguistic cues, enhanced with balancing techniques like SMOTE and Focal Loss. Our approach achieves top performance across modalities, ranking 1st in text-only and multi-modal tracks, and 3rd in audio-only, on the official leaderboard. These results underscore the effectiveness of targeted augmentation and modular fusion in multi-modal fallacy classification.
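The abstract does not reproduce the fusion weights or classifier heads; as a minimal sketch of late fusion, the following pure-Python toy averages per-class probabilities from two uni-modal branches (stand-ins for the RoBERTa and Whisper outputs; the weight `alpha` and the logit values are illustrative assumptions):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def late_fusion(text_logits, audio_logits, alpha=0.5):
    """Late fusion: mix the per-class probabilities of the two
    uni-modal branches; alpha weights the text branch."""
    text_p, audio_p = softmax(text_logits), softmax(audio_logits)
    return [alpha * t + (1 - alpha) * a for t, a in zip(text_p, audio_p)]

# Toy example with two fallacy classes: the text branch is more
# confident here, so it dominates the fused prediction.
fused = late_fusion([2.0, 0.0], [0.0, 1.0])
pred = fused.index(max(fused))
```

In practice the mixing weight would be tuned on validation data, and the branches could also be fused at the feature level rather than the probability level.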
In this work, we present our system, which ranked second in the CRAC 2025 Shared Task on Multilingual Coreference Resolution (LLM Track). For multilingual coreference resolution, our system mainly uses long-context large language models (LLMs) in a few-shot in-context learning setting. Among the various approaches we explored, few-shot prompting proved to be the most effective, particularly due to the complexity of the task and the availability of high-quality data with referential relationships provided as part of the competition. We employed Gemini 2.5 Pro, one of the best available closed-source long-context LLMs at the time of submission. Our system achieved a CoNLL F1 score of 61.74 on the mini-testset, demonstrating that performance improves significantly with the number of few-shot examples provided, thanks to the model’s extended context window. While this approach comes with trade-offs in terms of inference cost and response latency, it highlights the potential of long-context LLMs for tackling multilingual coreference without task-specific fine-tuning. Although direct comparisons with traditional supervised systems are not straightforward, our findings provide valuable insights and open avenues for future work, particularly in expanding support for low-resource languages.
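The exact prompt layout used with Gemini 2.5 Pro is not given in the abstract; the sketch below shows one schematic way to assemble a few-shot in-context prompt for coreference, with all field names and example text being illustrative assumptions:

```python
def build_fewshot_prompt(instruction, examples, query_doc):
    """Assemble a few-shot coreference prompt: a task instruction,
    then annotated (document, chains) pairs, then the new document."""
    parts = [instruction]
    for doc, chains in examples:
        parts.append(f"Document:\n{doc}\nCoreference chains:\n{chains}")
    # The query document ends with an open slot for the model to fill.
    parts.append(f"Document:\n{query_doc}\nCoreference chains:")
    return "\n\n".join(parts)

prompt = build_fewshot_prompt(
    "Resolve all coreference chains in the document.",
    [("Ann saw Bo. She waved.", "{Ann, She}")],
    "Ed left. He ran.",
)
```

A long-context model allows many such annotated examples to be packed into one prompt, which is what makes performance scale with the number of shots.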
Inconsistent naming of menu items across merchants presents a major challenge for businesses that rely on large-scale menu item catalogs. It hinders downstream tasks like pricing analysis, menu item deduplication, and recommendations. To address this, we propose the Cross-Platform Semantic Alignment Framework (CPSAF), a hybrid approach that integrates DBSCAN-based clustering with SIGMA (Semantic Item Grouping and Menu Abstraction), a refinement module based on a large language model. SIGMA employs in-context learning with a large language model to generate generic menu item names and categories. We evaluate our framework on a proprietary dataset comprising over 700,000 unique menu items. Experiments involve tuning DBSCAN parameters and applying SIGMA to refine clusters. Performance is assessed using both structural metrics (cluster count, coverage) and semantic metrics (intra- and inter-cluster similarity), along with manual qualitative inspection. CPSAF improves intra-cluster similarity from 0.88 to 0.98 and reduces singleton clusters by 33%, demonstrating its effectiveness in recovering soft semantic drift.
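CPSAF's production pipeline presumably runs DBSCAN over menu-item embeddings with tuned parameters; as a self-contained illustration of the DBSCAN step alone, here is a minimal pure-Python version run on 1-D toy values (a stand-in for real embedding vectors with a cosine metric):

```python
def dbscan(points, eps, min_pts, dist):
    """Minimal DBSCAN: returns labels[i] = cluster id (>= 0) or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:          # not a core point: mark as noise for now
            labels[i] = -1
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:          # noise reachable from a core point
                labels[j] = cluster      # becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:       # j is itself a core point: keep expanding
                queue.extend(jn)
    return labels

# Toy 1-D "embeddings": two tight groups plus one outlier (noise).
labels = dbscan([0.0, 0.1, 0.2, 5.0, 5.1, 10.0],
                eps=0.3, min_pts=2, dist=lambda a, b: abs(a - b))
```

The `-1` noise points correspond to the singleton clusters that the paper's SIGMA stage then tries to recover via LLM refinement.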
NUST Nova participates in the RIRAG Shared Task, addressing two critical challenges: Task 1 involves retrieving relevant subsections from regulatory documents based on user queries, while Task 2 focuses on generating concise, contextually accurate answers using the retrieved information. We propose a Hybrid Retrieval Framework that combines graph-based retrieval, vector-based methods, and keyword matching (BM25) to enhance relevance and precision in regulatory QA. Using score-based fusion and iterative refinement, the framework retrieves the top 10 relevant passages, which are then used by an LLM to generate accurate, context-aware answers. After empirical evaluation, we also conduct an error analysis to identify our framework’s limitations.
NUST Alpha participates in the Regulatory Information Retrieval and Answer Generation (RIRAG) shared task. We propose FusionRAG, which combines OpenAI embeddings, BM25, FAISS, and rank fusion to improve information retrieval and answer generation. We also explore multiple variants of our model to assess the impact of each component on overall performance. FusionRAG's strength comes from its rank-fusion and filtering strategy: rank fusion integrates semantic and lexical relevance scores to optimize retrieval accuracy and result diversity, and the filter mechanism removes irrelevant passages before answer generation. Our experiments demonstrate that FusionRAG offers a robust and scalable solution for automating the analysis of regulatory documents, improving compliance efficiency, and mitigating associated risks. We further conduct an error analysis to explore the limitations of our model’s performance.
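The abstract does not spell out the fusion formula; reciprocal rank fusion (RRF) is one standard way to combine a lexical ranking (e.g. BM25) with a semantic one (e.g. FAISS over embeddings), sketched here with made-up document ids:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Combine several ranked lists of doc ids via RRF.

    Each ranking lists doc ids best-first; k dampens the influence of
    top ranks (60 is the constant from the original RRF paper).
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["d1", "d2", "d3"]      # lexical ranking (illustrative ids)
dense = ["d3", "d1", "d4"]     # semantic ranking over the same corpus
fused = reciprocal_rank_fusion([bm25, dense])
```

A score-threshold or LLM-based filter could then drop low-scoring passages from `fused` before answer generation, mirroring the filter mechanism described above.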
NUST Omega participates in the Regulatory Information Retrieval and Answer Generation (RIRAG) Shared Task. Regulatory documents pose unique challenges in retrieving and generating precise, relevant answers due to their inherent complexities. We explore the task by proposing a progressive retrieval pipeline and investigate its performance with multiple variants. Some variants use different embeddings to explore their effects on the retrieval score; others examine the inclusion of a keyword-driven query-matching technique. After exploring such variations, we include topic modeling in our pipeline to investigate its impact on performance. We also study the performance of various prompting techniques with our proposed pipeline. Through empirical experiments, we identify strengths and limitations of the proposed pipeline. These findings offer the research community valuable insights for making advancements on this complex task.
This paper presents AfroEmo, a multilingual, multi-label emotion classification system designed for SemEval 2025 Task 11, leveraging the AfroXLMR model. Our approach integrates adaptive pretraining on domain-specific corpora followed by fine-tuning on low-resource languages. Through comprehensive exploratory data analysis, we assess label distribution and model performance across diverse linguistic settings. By incorporating perceived emotions (how emotions are interpreted rather than explicitly stated), we enhance emotion recognition capabilities in underrepresented languages. Experimental results demonstrate that our method achieves competitive performance, particularly in Amharic, while addressing key challenges in low-resource emotion detection.
In this paper, we present our methodology and findings from participating in the FIGNEWS 2024 shared task on annotating news fragments about the Gaza-Israel war for bias and propaganda detection. The task aimed to refine the FIGNEWS 2024 annotation guidelines and to contribute to the creation of a comprehensive dataset to advance research in this field. Our team employed a multi-faceted approach to ensure high accuracy in data annotations. Our results highlight key challenges in detecting bias and propaganda, such as the need for more comprehensive guidelines. Our team ranked first in all tracks for propaganda annotation. For bias annotation, the team ranked first in the Guidelines and IAA tracks and second in the Quantity and Consistency tracks.