Jairo Cugliari


2024

Exploring Semantics in Pretrained Language Model Attention
Frédéric Charpentier | Jairo Cugliari | Adrien Guille
Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)

Abstract Meaning Representations (AMRs) encode the semantics of sentences in the form of graphs. Vertices represent instances of concepts, and labeled edges represent semantic relations between those instances. Language models (LMs) operate by computing edge weights for per-layer complete graphs whose vertices are the words of a sentence or a whole paragraph. In this work, we investigate the ability of the attention heads of two LMs, RoBERTa and GPT2, to detect the semantic relations encoded in an AMR. This is an attempt to exhibit the semantic capabilities of those models without fine-tuning. To do so, we apply both unsupervised and supervised learning techniques.
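For illustration, a minimal sketch (not the authors' code) of the kind of signal probed here: extracting per-head attention weights from RoBERTa with the Hugging Face transformers library and reading them as edge weights of a complete graph over the tokens of a sentence. The choice of layer and head below is arbitrary.

# Minimal sketch: attention weights as edge weights over a token graph.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base", output_attentions=True)

inputs = tokenizer("The boy wants to go.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attn = torch.stack(outputs.attentions).squeeze(1)  # (layers, heads, len, len)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

layer, head = 5, 3  # arbitrary layer/head for inspection
for i, tok in enumerate(tokens):
    j = attn[layer, head, i].argmax().item()
    print(f"{tok:>10} -> {tokens[j]:<10} weight={attn[layer, head, i, j]:.3f}")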

2021

Monitoring geometrical properties of word embeddings for detecting the emergence of new topics.
Clément Christophe | Julien Velcin | Jairo Cugliari | Manel Boumghar | Philippe Suignard
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Slowly emerging topic detection is a task between event detection, where we aggregate the behavior of different words over a short period of time, and language evolution, where we monitor their long-term evolution. In this work, we tackle the problem of early detection of slowly emerging new topics. To this end, we gather evidence of weak signals at the word level. We propose to monitor the behavior of word representations in an embedding space and use one of their geometrical properties to characterize the emergence of topics. As evaluation is typically hard for this kind of task, we present a framework for quantitative evaluation and show positive results that outperform state-of-the-art methods. Our method is evaluated on two public datasets of press and scientific articles.
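As an illustration only, a minimal sketch of the general idea of tracking a geometric property of a word's embedding across time slices; the vectors per slice are assumed to be already trained and unit-normalised, and the specific property used in the paper may differ from the local-density proxy shown here.

# Hypothetical sketch: a word's local neighbourhood density over time slices.
import numpy as np

def local_density(word, embeddings, k=10):
    # embeddings: dict mapping words to unit-normalised vectors for one time slice.
    # Returns the average cosine similarity to the k nearest neighbours of `word`.
    target = embeddings[word]
    sims = np.array([target @ vec for w, vec in embeddings.items() if w != word])
    return float(np.sort(sims)[-k:].mean())

def emergence_signal(word, slices, k=10):
    # slices: list of {word: vector} dicts, one per consecutive time period.
    # A sustained change in this trajectory is treated as a weak signal.
    return [local_density(word, emb, k) for emb in slices if word in emb]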