Lambert Mathias


2021

Findings of the WOAH 5 Shared Task on Fine Grained Hateful Memes Detection
Lambert Mathias | Shaoliang Nie | Aida Mostafazadeh Davani | Douwe Kiela | Vinodkumar Prabhakaran | Bertie Vidgen | Zeerak Waseem
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)

We present the results and main findings of the shared task at WOAH 5 on hateful memes detection. The task includes two subtasks relating to distinct challenges in the fine-grained detection of hateful memes: (1) the protected category attacked by the meme and (2) the attack type. Three teams submitted system description papers. This shared task builds on the hateful memes detection task created by Facebook AI Research in 2020.
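
As a minimal illustration of the two-subtask framing, the sketch below scores a fused meme embedding with two multi-label heads, one per subtask. Everything here is hypothetical: the label counts, the embedding size, and the encoder that would produce `meme_emb` are placeholders, not the official task setup.

```python
import torch
import torch.nn as nn

# Placeholder label inventories; the shared task defines its own.
N_PROTECTED_CATEORIES_PLACEHOLDER, N_ATTACK_TYPES, EMB_DIM = 8, 6, 512

# One multi-label head per subtask, applied to a fused image+text
# embedding produced by any multimodal encoder (not shown here).
category_head = nn.Linear(EMB_DIM, N_PROTECTED_CATEORIES_PLACEHOLDER)
attack_head = nn.Linear(EMB_DIM, N_ATTACK_TYPES)

meme_emb = torch.randn(1, EMB_DIM)  # stand-in for a fused meme encoding
category_probs = torch.sigmoid(category_head(meme_emb))  # subtask (1)
attack_probs = torch.sigmoid(attack_head(meme_emb))      # subtask (2)
```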

2019

Improving Long Distance Slot Carryover in Spoken Dialogue Systems
Tongfei Chen | Chetan Naik | Hua He | Pushpendre Rastogi | Lambert Mathias
Proceedings of the First Workshop on NLP for Conversational AI

Tracking the state of the conversation is a central component of task-oriented spoken dialogue systems. One approach to tracking the dialogue state is slot carryover, where a model makes a binary decision about whether a slot from the context is relevant to the current turn. Previous work on the slot carryover task used models that made independent decisions for each slot. A close analysis of the results shows that this approach performs poorly on longer-context dialogues. In this paper, we propose to jointly model the slots. We propose two neural network architectures: one based on pointer networks that incorporates slot ordering information, and the other based on transformer networks that use a self-attention mechanism to model the slot interdependencies. Our experiments on an internal dialogue benchmark dataset and on the public DSTC2 dataset demonstrate that our proposed models resolve longer-distance slot references while achieving competitive performance.
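
A minimal sketch of the transformer variant described above, assuming slots have already been embedded into fixed-size vectors: self-attention contextualizes each candidate slot against the others before a per-slot binary carryover score is produced, so the decisions are made jointly rather than independently. Dimensions, layer counts, and the feature pipeline that would build `slot_embs` are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class JointSlotCarryover(nn.Module):
    """Scores all candidate context slots jointly via self-attention."""

    def __init__(self, slot_dim=256, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=slot_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.scorer = nn.Linear(slot_dim, 1)  # one carryover logit per slot

    def forward(self, slot_embs, pad_mask=None):
        # slot_embs: (batch, n_slots, slot_dim), e.g. built from slot
        # key/value embeddings plus distance-in-dialogue features.
        contextualized = self.encoder(
            slot_embs, src_key_padding_mask=pad_mask)
        return self.scorer(contextualized).squeeze(-1)  # (batch, n_slots)

# Usage: decide carryover for 5 candidate slots in one joint pass.
model = JointSlotCarryover()
carry = torch.sigmoid(model(torch.randn(2, 5, 256))) > 0.5
```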

2018

The Alexa Meaning Representation Language
Thomas Kollar | Danielle Berry | Lauren Stuart | Karolina Owczarzak | Tagyoung Chung | Lambert Mathias | Michael Kayser | Bradford Snow | Spyros Matsoukas
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)

This paper introduces a meaning representation for spoken language understanding. Unlike previous approaches, which factor spoken utterances into domains, the Alexa meaning representation language (AMRL) provides a common representation for how people communicate in spoken language. AMRL is a rooted graph that links to a large-scale ontology and supports cross-domain queries, fine-grained types, complex utterances, and composition. A spoken language dataset containing ∼20k examples across eight domains has been collected for Alexa. A version of this meaning representation was released to developers at a trade show in 2016.
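
To make the "rooted graph linked to an ontology" idea concrete, here is a minimal hypothetical sketch of such a structure in code. The node types and property names are invented for illustration; AMRL's actual ontology and inventory differ.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in a rooted meaning-representation graph."""
    node_type: str                                   # type from an ontology
    properties: dict = field(default_factory=dict)   # edges to child nodes

# Hypothetical parse of "play jazz music": a single root action node
# whose properties form a typed, compositional graph.
root = Node("PlaybackAction", {
    "object": Node("MusicRecording", {
        "genre": Node("MusicGenre", {"name": "jazz"}),
    }),
})
```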

2017

Transfer Learning for Neural Semantic Parsing
Xing Fan | Emilio Monti | Lambert Mathias | Markus Dreyer
Proceedings of the 2nd Workshop on Representation Learning for NLP

The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence models in a multi-task setup for semantic parsing, with a focus on transfer learning. We explore three multi-task architectures for the sequence-to-sequence model and compare their performance with that of an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see an absolute accuracy gain ranging from 1.0% to 4.4% on our in-house dataset, and good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks.
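
The sketch below shows one common realization of this setup, assuming a shared encoder with one decoder per task: the encoder parameters are updated by both the large auxiliary task and the smaller target task, which is the transfer mechanism the abstract describes. The architecture, vocabulary size, and dimensions are placeholders, not necessarily any of the three variants explored in the paper.

```python
import torch
import torch.nn as nn

class MultiTaskSeq2Seq(nn.Module):
    """Shared encoder, per-task decoders for multi-task semantic parsing."""

    def __init__(self, vocab=5000, dim=128, n_tasks=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)  # shared
        self.decoders = nn.ModuleList(
            [nn.LSTM(dim, dim, batch_first=True) for _ in range(n_tasks)])
        self.outputs = nn.ModuleList(
            [nn.Linear(dim, vocab) for _ in range(n_tasks)])

    def forward(self, src, tgt, task):
        _, state = self.encoder(self.embed(src))         # encode once
        dec_out, _ = self.decoders[task](self.embed(tgt), state)
        return self.outputs[task](dec_out)               # per-task logits

# Usage: a batch routed through the target task's decoder (task=1).
model = MultiTaskSeq2Seq()
logits = model(torch.randint(0, 5000, (4, 10)),
               torch.randint(0, 5000, (4, 12)), task=1)
```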