Luis Guzman-Nateras


2022

Event Detection for Suicide Understanding
Luis Guzman-Nateras | Viet Lai | Amir Pouran Ben Veyseh | Franck Dernoncourt | Thien Nguyen
Findings of the Association for Computational Linguistics: NAACL 2022

Suicide is a serious problem in every society. Understanding the life events of a potential patient is essential for successful suicide-risk assessment and prevention. In this work, we focus on the Event Detection (ED) task to identify the trigger words of suicide-related events in public posts on discussion forums. In particular, we introduce SuicideED: a new dataset for the ED task that features seven suicidal event types to comprehensively capture suicide actions and ideation, as well as general risk and protective factors. Our experiments with current state-of-the-art ED systems suggest that this domain poses meaningful challenges, as there is significant room for improving ED models. We will release SuicideED to support future research in this important area.
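Event detection of this kind is commonly framed as token-level classification: each token in a post is tagged with an event type or a null label, and the non-null tokens are the predicted trigger words. Below is a minimal sketch of that framing, assuming the Hugging Face transformers library; the seven event-type labels are hypothetical placeholders, not the actual SuicideED schema, and the classification head is untrained here.

```python
# A minimal sketch of ED as token classification, assuming Hugging Face
# transformers. The seven event-type labels (plus the null label "O") are
# hypothetical placeholders, NOT the actual SuicideED schema; the head is
# randomly initialized and would be fine-tuned on annotated data in practice.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "Suicide-Attempt", "Suicide-Ideation", "Self-Harm",
          "Risk-Factor", "Protective-Factor", "Treatment", "Support"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

text = "I lost my job last month and started therapy."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, num_labels)
preds = logits.argmax(dim=-1)[0]

# Tokens tagged with anything other than "O" are predicted trigger words.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for tok, p in zip(tokens, preds):
    print(f"{tok}\t{labels[int(p)]}")
```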

Cross-Lingual Event Detection via Optimized Adversarial Training
Luis Guzman-Nateras | Minh Van Nguyen | Thien Nguyen
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

In this work, we focus on Cross-Lingual Event Detection, where a model is trained on data from a source language but evaluated on data from a second, target language. Most recent works in this area have harnessed the language-invariant qualities of pre-trained multilingual language models. Their performance, however, reveals that there is room for improvement, as the cross-lingual setting entails particular challenges. We employ Adversarial Language Adaptation to train a Language Discriminator to distinguish between the source and target languages using unlabeled data. The discriminator is trained in an adversarial manner so that the encoder learns to produce refined, language-invariant representations that lead to improved performance. More importantly, we optimize the adversarial training process by presenting the discriminator with only the most informative samples. We base our intuition about what makes a sample informative on two disparate metrics: sample similarity and event presence. We therefore propose leveraging Optimal Transport to naturally combine these two distinct information sources in the selection process. Extensive experiments on 8 language pairs, drawn from 4 languages of unrelated families, show the flexibility and effectiveness of our model, which achieves state-of-the-art results.
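Adversarial language adaptation of this kind is often implemented with a gradient reversal layer between the encoder and the discriminator. Below is a minimal sketch in PyTorch; the hidden size, discriminator architecture, and loss weight lambd are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of a language discriminator trained adversarially through
# a gradient reversal layer (GRL), assuming PyTorch. Hidden size, layer
# widths, and the lambd weight are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradients on the
    backward pass, so minimizing the discriminator's loss simultaneously
    trains the upstream encoder to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class LanguageDiscriminator(nn.Module):
    """Binary classifier over encoder representations: source vs. target."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, 2),   # 0 = source language, 1 = target language
        )

    def forward(self, reps, lambd=1.0):
        # The GRL flips gradients flowing back into the encoder, pushing it
        # toward language-invariant representations.
        return self.net(GradReverse.apply(reps, lambd))

# Usage with stand-in encoder outputs over unlabeled source/target text.
disc = LanguageDiscriminator()
reps = torch.randn(8, 768, requires_grad=True)   # stand-in encoder outputs
lang_labels = torch.randint(0, 2, (8,))          # 0 = source, 1 = target
adv_loss = nn.CrossEntropyLoss()(disc(reps, lambd=0.5), lang_labels)
adv_loss.backward()                              # reversed grads reach `reps`
```

In this setup the discriminator minimizes its language-classification loss while the reversed gradients push the shared encoder toward representations the discriminator cannot tell apart. The paper's sample-selection step, which uses Optimal Transport over sample similarity and event presence to pick the most informative samples for the discriminator, would sit upstream of this loss computation.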