Tianyu Pan


2025

LLM4RE: A Data-centric Feasibility Study for Relation Extraction
Anushka Swarup | Tianyu Pan | Ronald Wilson | Avanti Bhandarkar | Damon Woodard
Proceedings of the 31st International Conference on Computational Linguistics

Relation Extraction (RE) is a multi-task process that is a crucial part of all information extraction pipelines. Since the introduction of generative language models, Large Language Models (LLMs) have delivered significant performance gains on complex natural language processing and understanding tasks, and recent RE research has begun incorporating them into its pipelines. However, the full extent of LLMs' potential for extracting relations remains unknown. Consequently, this study conducts the first feasibility analysis of LLMs for RE by investigating their robustness to various complex RE scenarios stemming from data-specific characteristics. Through an exhaustive analysis of five state-of-the-art LLMs backed by more than 2100 experiments, this study finds that LLMs are not robust enough to handle complex data characteristics for RE, and that additional research into their behavior when extracting relationships is needed. The source code for the evaluation pipeline can be found at https://aaig.ece.ufl.edu/projects/relation-extraction.
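
The evaluation pipeline itself is linked above rather than reproduced here; as a rough illustration of what a feasibility probe of this kind involves, the following sketch scores an LLM's predicted (head, relation, tail) triples against gold labels with micro-F1. The prompt wording, the Example fields, and the call_llm stub are illustrative assumptions, not the paper's actual code.

# Hypothetical sketch: scoring LLM relation-extraction output with micro-F1.
# Dataset fields, relation labels, and call_llm are illustrative placeholders.

from collections import namedtuple

Example = namedtuple("Example", ["sentence", "gold_relations"])

PROMPT_TEMPLATE = (
    "Extract all relations from the sentence as (head, relation, tail) triples.\n"
    "Sentence: {sentence}\nTriples:"
)

def call_llm(prompt: str) -> set[tuple[str, str, str]]:
    """Placeholder for a model call; a real pipeline would parse the LLM's
    text output into triples here."""
    return set()

def micro_f1(examples: list[Example]) -> float:
    """Aggregate true/false positives and false negatives over all examples."""
    tp = fp = fn = 0
    for ex in examples:
        predicted = call_llm(PROMPT_TEMPLATE.format(sentence=ex.sentence))
        gold = set(ex.gold_relations)
        tp += len(predicted & gold)
        fp += len(predicted - gold)
        fn += len(gold - predicted)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

if __name__ == "__main__":
    data = [Example("Marie Curie was born in Warsaw.",
                    [("Marie Curie", "born_in", "Warsaw")])]
    print(f"micro-F1: {micro_f1(data):.3f}")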

From Syntax to Semantics: Evaluating the Impact of Linguistic Structures on LLM-Based Information Extraction
Anushka Swarup | Avanti Bhandarkar | Ronald Wilson | Tianyu Pan | Damon Woodard
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)

Large Language Models (LLMs) have brought significant breakthroughs across all areas of Natural Language Processing (NLP), including Information Extraction (IE). However, knowledge gaps remain regarding their effectiveness at extracting entity-relation triplets, i.e., Joint Relation Extraction (JRE). JRE is a key operation in creating knowledge bases that can enhance Retrieval Augmented Generation (RAG) systems, and prior work highlights the low quality of triplets generated by LLMs. This work therefore investigates whether incorporating linguistic structures, such as constituency and dependency trees and semantic role labeling, improves the quality of the extracted triplets. The findings suggest that incorporating specific structural information enhances the uniqueness and topical relevance of the triplets, particularly in scenarios where multiple relationships are present.
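
As a rough illustration of the idea (not the paper's exact setup, which also considers constituency trees and semantic role labels), the sketch below exposes a sentence's dependency arcs inside a triplet-extraction prompt. The use of spaCy, the en_core_web_sm model, and the prompt wording are assumptions made for this example.

# Minimal sketch: surfacing dependency structure in a JRE prompt.
# spaCy and the prompt format are illustrative choices, not the paper's pipeline.

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this spaCy model is installed

def dependency_edges(sentence: str) -> list[str]:
    """Render each dependency arc as 'head --dep--> child'."""
    doc = nlp(sentence)
    return [f"{tok.head.text} --{tok.dep_}--> {tok.text}"
            for tok in doc if tok.dep_ != "ROOT"]

def build_jre_prompt(sentence: str) -> str:
    """Build an extraction prompt that includes the sentence's dependency arcs."""
    edges = "\n".join(dependency_edges(sentence))
    return (
        "Extract (subject, relation, object) triplets from the sentence.\n"
        f"Sentence: {sentence}\n"
        f"Dependency arcs:\n{edges}\n"
        "Triplets:"
    )

if __name__ == "__main__":
    print(build_jre_prompt("Ada Lovelace wrote notes on the Analytical Engine."))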