2024
Can political dogwhistles be predicted by distributional methods for analysis of lexical semantic change?
Max Boholm
|
Björn Rönnerstrand
|
Ellen Breitholtz
|
Robin Cooper
|
Elina Lindgren
|
Gregor Rettenegger
|
Asad Sayeed
Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change
2023
A surprisal oracle for active curriculum language modeling
Xudong Hong
|
Sharid Loáiciga
|
Asad Sayeed
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
Political dogwhistles and community divergence in semantic change
Max Boholm
|
Asad Sayeed
Proceedings of the 4th Workshop on Computational Approaches to Historical Language Change
We test whether the development of political dogwhistles can be observed using language change measures; specifically, does the development of a “hidden” message in a dogwhistle show up as differences in semantic change between communities over time? We take Swedish-language dogwhistles related to the ongoing immigration debate and measure differences over time in their rate of semantic change between two Swedish-language community forums, Flashback and Familjeliv, the former representing an in-group for understanding the “hidden” meaning of the dogwhistles. We find that multiple measures are sensitive enough to detect differences over time, in that the meaning changes in Flashback over the relevant time period but not in Familjeliv. We also examine how sensitive different modeling approaches to semantic change are to this community divergence.
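To make the kind of measure involved concrete, here is a minimal sketch (our illustration, not the authors' pipeline): embeddings trained separately on two time slices of one forum are aligned with orthogonal Procrustes, and a word's change score is the cosine distance between its aligned vectors; comparing such scores between Flashback and Familjeliv over matched periods gives the community-divergence signal. The vocabulary and vectors below are toy stand-ins.

```python
# Minimal sketch: score lexical semantic change for one community by aligning
# embedding spaces from two time slices (toy data; not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["berika", "globalist", "hund", "bok"]           # toy shared vocabulary
emb_t1 = {w: rng.normal(size=50) for w in vocab}         # slice 1 embeddings
emb_t2 = {w: v + rng.normal(scale=0.1, size=50) for w, v in emb_t1.items()}

def procrustes_align(X, Y):
    """Orthogonal map R minimizing ||XR - Y||_F (embeddings are rows)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def change_scores(emb_a, emb_b, words):
    A = np.vstack([emb_a[w] for w in words])
    B = np.vstack([emb_b[w] for w in words])
    A_aligned = A @ procrustes_align(A, B)
    cos = np.sum(A_aligned * B, axis=1) / (
        np.linalg.norm(A_aligned, axis=1) * np.linalg.norm(B, axis=1))
    return dict(zip(words, 1.0 - cos))                    # cosine distance = change

print(change_scores(emb_t1, emb_t2, vocab))
```

Running the same computation per forum and comparing the resulting scores over time is one way to operationalize the divergence studied here.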
Visual Coherence Loss for Coherent and Visually Grounded Story Generation
Xudong Hong
|
Vera Demberg
|
Asad Sayeed
|
Qiankun Zheng
|
Bernt Schiele
Findings of the Association for Computational Linguistics: ACL 2023
Local coherence is essential for long-form text generation models. We identify two important aspects of local coherence within the visual storytelling task: (1) the model needs to represent re-occurrences of characters within the image sequence in order to mention them correctly in the story; (2) character representations should enable us to find instances of the same characters and distinguish different characters. In this paper, we propose a loss function inspired by a linguistic theory of coherence for self-supervised learning for image sequence representations. We further propose combining features from an object and a face detector to construct stronger character features. To evaluate input-output relevance that current reference-based metrics don’t measure, we propose a character matching metric to check whether the models generate referring expressions correctly for characters in input image sequences. Experiments on a visual story generation dataset show that our proposed features and loss function are effective for generating more coherent and visually grounded stories.
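As a rough illustration of what a character matching check can look like (a simplification of ours, not the metric as defined in the paper), the snippet below measures what fraction of the characters detected in the input image sequence are actually referred to in the generated story.

```python
# Simplified character-matching check (illustrative only): how many of the
# characters detected in the images does the generated story mention?
import re

def character_coverage(story: str, detected_characters: set[str]) -> float:
    tokens = set(re.findall(r"[\w']+", story.lower()))
    mentioned = {c for c in detected_characters if c.lower() in tokens}
    return len(mentioned) / len(detected_characters) if detected_characters else 1.0

story = "Anna met Ben at the lake. Later, Anna went home alone."
print(character_coverage(story, {"Anna", "Ben", "Clara"}))  # 2 of 3 characters covered
```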
Visually Grounded Story Generation Challenge
Xudong Hong
|
Khushboo Mehra
|
Asad Sayeed
|
Vera Demberg
Proceedings of the 16th International Natural Language Generation Conference: Generation Challenges
Recent large pre-trained models have achieved strong performance in multimodal language generation, which requires a joint effort of vision and language modeling. However, most previous generation tasks are based on single image input and produce short text descriptions that are not grounded on the input images. In this work, we propose a shared task on visually grounded story generation. The input is an image sequence, and the output is a story that is conditioned on the input images. This task is particularly challenging because: 1) the protagonists in the generated stories need to be grounded in the images and 2) the output story should be a coherent long-form text. We aim to advance the study of vision-based story generation by accepting submissions that propose new methods as well as new evaluation measures.
Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences
Xudong Hong
|
Asad Sayeed
|
Khushboo Mehra
|
Vera Demberg
|
Bernt Schiele
Transactions of the Association for Computational Linguistics, Volume 11
Current work on image-based story generation suffers from the fact that the existing image sequence collections do not have coherent plots behind them. We improve visual story generation by producing a new image-grounded dataset, Visual Writing Prompts (VWP). VWP contains almost 2K selected sequences of movie shots, each including 5-10 images. The image sequences are aligned with a total of 12K stories which were collected via crowdsourcing given the image sequences and a set of grounded characters from the corresponding image sequence. Our new image sequence collection and filtering process has allowed us to obtain stories that are more coherent, diverse, and visually grounded compared to previous work. We also propose a character-based story generation model driven by coherence as a strong baseline. Evaluations show that our generated stories are more coherent, visually grounded, and diverse than stories generated with the current state-of-the-art model. Our code, image features, annotations and collected stories are available at https://vwprompt.github.io/.
Visual Coherence Loss for Coherent and Visually Grounded Story Generation
Xudong Hong
|
Vera Demberg
|
Asad Sayeed
|
Qiankun Zheng
|
Bernt Schiele
Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)
2022
Distributional properties of political dogwhistle representations in Swedish BERT
Niclas Hertzberg
|
Robin Cooper
|
Elina Lindgren
|
Björn Rönnerstrand
|
Gregor Rettenegger
|
Ellen Breitholtz
|
Asad Sayeed
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
“Dogwhistles” are expressions intended by the speaker to carry two messages: a socially-unacceptable “in-group” message understood by a subset of listeners, and a benign message intended for the out-group. We take the result of a word-replacement survey of the Swedish population intended to reveal how dogwhistles are understood, and we show that the difficulty of annotating dogwhistles is reflected in their separability in the space of a sentence-transformer Swedish BERT trained on general data.
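A minimal sketch of such a separability probe (our illustration; the model identifier and the labelled sentences are assumptions, not the paper's survey data): embed sentences containing a candidate dogwhistle with a Swedish sentence-transformer and see how well a linear classifier separates the in-group reading from the benign one.

```python
# Sketch of a linear-separability probe in a Swedish sentence-embedding space.
# The model id and the labelled example sentences are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

sentences = [
    "De vill bara berika vårt land.",          # hypothetical in-group reading
    "Vi blir verkligen berikade nuförtiden.",  # hypothetical in-group reading
    "Kryddorna berikar verkligen grytan.",     # benign reading
    "Resan berikade mitt liv.",                # benign reading
]
labels = [1, 1, 0, 0]                          # 1 = dogwhistle reading, 0 = benign

model = SentenceTransformer("KBLab/sentence-bert-swedish-cased")  # assumed model id
X = model.encode(sentences)

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=2).mean())  # separability as held-out accuracy
```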
Where’s the Learning in Representation Learning for Compositional Semantics and the Case of Thematic Fit
Mughilan Muthupari
|
Samrat Halder
|
Asad Sayeed
|
Yuval Marton
Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Observing that for certain NLP tasks, such as semantic role prediction or thematic fit estimation, random embeddings perform as well as pre-trained embeddings, we explore what settings allow for this and examine where most of the learning is encoded: the word embeddings, the semantic role embeddings, or “the network”. We find nuanced answers, depending on the task and its relation to the training objective. We examine these representation learning aspects in multi-task learning, where role prediction and role-filling are supervised tasks, while several thematic fit tasks are outside the models’ direct supervision. We observe a non-monotonic relation between some tasks’ quality scores and the training data size. In order to better understand this observation, we analyze these results using easier, per-verb versions of these tasks.
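To make the underlying question concrete (a toy illustration under our own assumptions, not the paper's experimental setup), the sketch below freezes a random embedding table and trains only a linear classifier on top; if such a probe performs comparably to one built on pre-trained vectors, most of the "learning" for that task evidently sits outside the word embeddings.

```python
# Toy probe: frozen *random* word embeddings plus a trainable linear classifier.
# Data and dimensions are made up; the point is where the learning lives.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
vocab = {w: i for i, w in enumerate(
    "the cook cut bread knife child read book".split())}
E = rng.normal(size=(len(vocab), 32))        # frozen random embedding table

def sent_vec(sentence: str) -> np.ndarray:
    ids = [vocab[w] for w in sentence.split() if w in vocab]
    return E[ids].mean(axis=0)

train = [("the cook cut the bread", 1),      # 1 = plausible filler for the role
         ("the knife cut the bread", 1),
         ("the bread cut the cook", 0),      # 0 = implausible
         ("the book read the child", 0)]
X = np.vstack([sent_vec(s) for s, _ in train])
y = [label for _, label in train]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))                       # accuracy on the toy training set
```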
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
Simon Dobnik
|
Julian Grove
|
Asad Sayeed
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
Thematic Fit Bits: Annotation Quality and Quantity Interplay for Event Participant Representation
Yuval Marton
|
Asad Sayeed
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Modeling thematic fit (a verb-argument compositional semantics task) currently requires a very large burden of labeled data. We take a linguistically machine-annotated large corpus and replace corpus layers with output from higher-quality, more modern taggers. We compare the old and new corpus versions’ impact on a verb-argument fit modeling task, using a high-performing neural approach. We discover that higher annotation quality dramatically reduces our data requirement while demonstrating better supervised predicate-argument classification. But in applying the model to psycholinguistic tasks outside the training objective, we see clear gains at scale in only one of two thematic fit estimation tasks, and no clear gains on the other. We also see that quality improves with training size, though it perhaps plateaus or even declines in one task. Last, we tested the effect of role set size. All this suggests that the quality/quantity interplay is not all you need. We replicate previous studies while modifying certain role representation details, and we set a new state of the art in event modeling using a fraction of the data. We make the new corpus version public.
2021
Semantic shift in social networks
Bill Noble
|
Asad Sayeed
|
Raquel Fernández
|
Staffan Larsson
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
Just as the meaning of words is tied to the communities in which they are used, so too is semantic change. But how does lexical semantic change manifest differently across different communities? In this work, we investigate the relationship between community structure and semantic change in 45 communities from the social media website Reddit. We use distributional methods to quantify lexical semantic change and induce a social network on communities, based on interactions between members. We explore the relationship between semantic change and the clustering coefficient of a community’s social network graph, as well as community size and stability. While none of these factors are found to be significant on their own, we report a significant effect of their three-way interaction. We also report on significant word-level effects of frequency and change in frequency, which replicate previous findings.
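A minimal sketch of the graph-side computation (illustrative only; the interaction data and the change scores are made up, and the paper's actual analysis is a regression over several interacting factors): build each community's user-interaction graph, take its average clustering coefficient, and relate it to a per-community semantic change score.

```python
# Sketch: per-community user-interaction graphs -> clustering coefficient,
# related to a semantic change score per community (all values are toy data).
import networkx as nx
from scipy import stats

interactions = {                                # hypothetical user-user edges
    "community_a": [("u1", "u2"), ("u2", "u3"), ("u1", "u3"), ("u3", "u4")],
    "community_b": [("v1", "v2"), ("v2", "v3"), ("v3", "v4")],
    "community_c": [("w1", "w2"), ("w1", "w3"), ("w2", "w3"), ("w1", "w4"), ("w2", "w4")],
}
semantic_change = {"community_a": 0.12, "community_b": 0.21, "community_c": 0.09}

names = sorted(interactions)
coeffs = [nx.average_clustering(nx.Graph(interactions[c])) for c in names]
changes = [semantic_change[c] for c in names]

rho, p = stats.spearmanr(coeffs, changes)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```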
2020
Exploiting Cross-Lingual Hints to Discover Event Pronouns
Sharid Loáiciga
|
Christian Hardmeier
|
Asad Sayeed
Proceedings of the Twelfth Language Resources and Evaluation Conference
Non-nominal coreference is much less studied than nominal coreference, partly because of the lack of annotated corpora. We explore the possibility of exploiting parallel multilingual corpora as a means of cheap supervision for classifying three different readings of the English pronoun ‘it’ (entity, event, or pleonastic) from their translations in several languages. We found that the ‘event’ reading is not very frequent, but can be easily predicted provided that the construction used to translate the ‘it’ example is also a pronoun. These cases, nevertheless, are not enough to generalize to other types of non-nominal reference.
An Annotation Approach for Social and Referential Gaze in Dialogue
Vidya Somashekarappa
|
Christine Howes
|
Asad Sayeed
Proceedings of the Twelfth Language Resources and Evaluation Conference
This paper introduces an approach for annotating eye gaze that considers both its social and its referential functions in multi-modal human-human dialogue. Detecting and interpreting the temporal patterns of gaze behavior cues is natural for humans and also mostly an unconscious process. However, these cues are difficult for conversational agents such as robots or avatars to process or generate. Recognizing these variants is key to carrying out a successful conversation, as misinterpretation can lead to total failure of the given interaction. This paper introduces an annotation scheme for eye gaze in human-human dyadic interactions that is intended to facilitate the learning of eye-gaze patterns in multi-modal natural dialogue.
Diverse and Relevant Visual Storytelling with Scene Graph Embeddings
Xudong Hong
|
Rakshith Shetty
|
Asad Sayeed
|
Khushboo Mehra
|
Vera Demberg
|
Bernt Schiele
Proceedings of the 24th Conference on Computational Natural Language Learning
A problem in automatically generated stories for image sequences is that they use overly generic vocabulary and phrase structure and fail to match the distributional characteristics of human-generated text. We address this problem by introducing explicit representations for objects and their relations by extracting scene graphs from the images. Utilizing an embedding of this scene graph enables our model to more explicitly reason over objects and their relations during story generation, compared to the global features from an object classifier used in previous work. We apply metrics that account for the diversity of words and phrases of generated stories as well as for reference to narratively-salient image features and show that our approach outperforms previous systems. Our experiments also indicate that our models obtain competitive results on reference-based metrics.
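The role of the scene-graph representation can be illustrated crudely (a toy stand-in of ours; the paper learns its representations from detected scene graphs rather than using random vectors): each image becomes a set of (subject, relation, object) triples whose embeddings are pooled into one vector for the story decoder to condition on.

```python
# Toy sketch of a scene-graph embedding: pool the embeddings of the
# (subject, relation, object) triples extracted from an image (made-up data).
import numpy as np

rng = np.random.default_rng(1)
_table = {}

def emb(token: str, dim: int = 64) -> np.ndarray:
    if token not in _table:                   # random stand-in embeddings
        _table[token] = rng.normal(size=dim)
    return _table[token]

def scene_graph_embedding(triples):
    vecs = [np.concatenate([emb(s), emb(r), emb(o)]) for s, r, o in triples]
    return np.mean(vecs, axis=0)              # one vector per image

graph = [("man", "holding", "umbrella"), ("dog", "next_to", "man")]
print(scene_graph_embedding(graph).shape)     # (192,)
```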
Building Sense Representations in Danish by Combining Word Embeddings with Lexical Resources
Ida Rørmann Olsen
|
Bolette Pedersen
|
Asad Sayeed
Proceedings of the 2020 Globalex Workshop on Linked Lexicography
Our aim is to identify suitable sense representations for NLP in Danish. We investigate sense inventories that correlate with human interpretations of word meaning and ambiguity as typically described in dictionaries and wordnets, and that are well reflected distributionally as expressed in word embeddings. To this end, we study a number of highly ambiguous Danish nouns and examine the effectiveness of sense representations constructed by combining vectors from a distributional model with the information from a wordnet. We establish representations based on centroids obtained from wordnet synsets and example sentences, and these representations are tested in a word sense disambiguation task. We conclude that the more information extracted from the wordnet entries (example sentences, definitions, semantic relations), the more successful the sense representation vector.
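The centroid-based setup can be sketched as nearest-centroid word sense disambiguation (our illustration; the encoder and the example sense inventory below are stand-ins, not the Danish resources used in the paper): build one centroid per wordnet sense from embeddings of that sense's example sentences, then assign a new occurrence to the sense whose centroid is closest.

```python
# Sketch of nearest-centroid WSD: one centroid per sense, built from that
# sense's example sentences. The encoder here is a random stand-in.
import numpy as np

rng = np.random.default_rng(7)

def embed(sentence: str) -> np.ndarray:       # stand-in for a real sentence encoder
    return rng.normal(size=100)

sense_examples = {                            # hypothetical mini sense inventory
    "blad_1 (leaf)":      ["Træet tabte sine blade i oktober."],
    "blad_2 (magazine)":  ["Hun læste nyheden i et lokalt blad."],
}
centroids = {s: np.mean([embed(e) for e in ex], axis=0)
             for s, ex in sense_examples.items()}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def disambiguate(sentence: str) -> str:
    v = embed(sentence)
    return max(centroids, key=lambda s: cos(v, centroids[s]))

print(disambiguate("Avisen er et gammelt blad."))
```

With a real encoder, adding definition and semantic-relation text to each sense's centroid is the kind of enrichment whose effect the paper reports.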
2019
Verb-Second Effect on Quantifier Scope Interpretation
Asad Sayeed
|
Matthias Lindemann
|
Vera Demberg
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Sentences like “Every child climbed a tree” have at least two interpretations depending on the precedence order of the universal quantifier and the indefinite. Previous experimental work explores the role that different mechanisms such as semantic reanalysis and world knowledge may have in enabling each interpretation. This paper discusses a web-based task that uses the verb-second characteristic of German main clauses to estimate the influence of word order variation over world knowledge.
A Hybrid Model for Globally Coherent Story Generation
Fangzhou Zhai
|
Vera Demberg
|
Pavel Shkadzko
|
Wei Shi
|
Asad Sayeed
Proceedings of the Second Workshop on Storytelling
Automatically generating globally coherent stories is a challenging problem. Neural text generation models have been shown to perform well at generating fluent sentences from data, but they usually fail to keep track of the overall coherence of the story after a couple of sentences. Existing work that incorporates a text planning module has succeeded in generating recipes and dialogues, but appears quite data-demanding. We propose a novel story generation approach that generates globally coherent stories from a fairly small corpus. The model exploits a symbolic text planning module to produce text plans, thus reducing the demand for data; a neural surface realization module then generates fluent text conditioned on the text plan. Human evaluation shows that our model outperforms various baselines by a wide margin and generates stories that are fluent as well as globally coherent.
2018
Rollenwechsel-English: a large-scale semantic role corpus
Asad Sayeed
|
Pavel Shkadzko
|
Vera Demberg
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)
Asad Sayeed
|
Cassandra Jacobs
|
Tal Linzen
|
Marten van Schijndel
Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)
Learning distributed event representations with a multi-task approach
Xudong Hong
|
Asad Sayeed
|
Vera Demberg
Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics
Human world knowledge contains information about prototypical events and their participants and locations. In this paper, we train the first models using multi-task learning that can both predict missing event participants and perform semantic role classification based on semantic plausibility. Our best-performing model is an improvement over the previous state of the art on thematic fit modelling tasks. The event embeddings learned by the model can additionally be used effectively in an event similarity task, also outperforming the state of the art.
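A toy sketch of the multi-task setup (sizes and architecture are placeholders of ours, not the paper's model): a shared encoder composes the observed role-filler pairs into an event vector, and two heads are trained jointly, one predicting a missing participant and one classifying a semantic role.

```python
# Toy multi-task event model: shared encoder, two heads (participant
# prediction and semantic role classification). Shapes are placeholders.
import torch
import torch.nn as nn

VOCAB, ROLES, DIM = 1000, 6, 128

class EventModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.word_emb = nn.Embedding(VOCAB, DIM)
        self.role_emb = nn.Embedding(ROLES, DIM)
        self.encoder = nn.Sequential(nn.Linear(DIM, DIM), nn.Tanh())
        self.filler_head = nn.Linear(DIM, VOCAB)   # predict the missing participant
        self.role_head = nn.Linear(DIM, ROLES)     # classify a filler's role

    def forward(self, context_words, context_roles):
        # compose the observed (word, role) pairs into one event vector
        pairs = self.word_emb(context_words) * self.role_emb(context_roles)
        event = self.encoder(pairs.mean(dim=1))
        return self.filler_head(event), self.role_head(event)

model = EventModel()
words = torch.randint(0, VOCAB, (4, 3))    # batch of 4 events, 3 known participants
roles = torch.randint(0, ROLES, (4, 3))
filler_logits, role_logits = model(words, roles)

loss = (nn.functional.cross_entropy(filler_logits, torch.randint(0, VOCAB, (4,)))
        + nn.functional.cross_entropy(role_logits, torch.randint(0, ROLES, (4,))))
loss.backward()                            # both tasks update the shared encoder
print(filler_logits.shape, role_logits.shape)
```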
2017
Modeling Semantic Expectation: Using Script Knowledge for Referent Prediction
Ashutosh Modi
|
Ivan Titov
|
Vera Demberg
|
Asad Sayeed
|
Manfred Pinkal
Transactions of the Association for Computational Linguistics, Volume 5
Recent research in psycholinguistics has provided increasing evidence that humans predict upcoming content. Prediction also affects perception and might be a key to robustness in human language processing. In this paper, we investigate the factors that affect human prediction by building a computational model that can predict upcoming discourse referents based on linguistic knowledge alone vs. linguistic knowledge jointly with common-sense knowledge in the form of scripts. We find that script knowledge significantly improves model estimates of human predictions. In a second study, we test the highly controversial hypothesis that predictability influences referring expression type but do not find evidence for such an effect.
Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)
Ted Gibson
|
Tal Linzen
|
Asad Sayeed
|
Marten van Schijndel
|
William Schuler
Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)
2016
Event participant modelling with neural networks
Ottokar Tilk
|
Vera Demberg
|
Asad Sayeed
|
Dietrich Klakow
|
Stefan Thater
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
LingoTurk: managing crowdsourced tasks for psycholinguistics
Florian Pusse
|
Asad Sayeed
|
Vera Demberg
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations
Thematic fit evaluation: an aspect of selectional preferences
Asad Sayeed
|
Clayton Greenberg
|
Vera Demberg
Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP
Roleo: Visualising Thematic Fit Spaces on the Web
Asad Sayeed
|
Xudong Hong
|
Vera Demberg
Proceedings of ACL-2016 System Demonstrations
2015
Improving unsupervised vector-space thematic fit evaluation via role-filler prototype clustering
Clayton Greenberg
|
Asad Sayeed
|
Vera Demberg
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Verb polysemy and frequency effects in thematic fit modeling
Clayton Greenberg
|
Vera Demberg
|
Asad Sayeed
Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics
Vector-space calculation of semantic surprisal for predicting word pronunciation duration
Asad Sayeed
|
Stefan Fischer
|
Vera Demberg
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
2013
An opinion about opinions about opinions: subjectivity and the aggregate reader
Asad Sayeed
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
The semantic augmentation of a psycholinguistically-motivated syntactic formalism
Asad Sayeed
|
Vera Demberg
Proceedings of the Fourth Annual Workshop on Cognitive Modeling and Computational Linguistics (CMCL)
2012
Grammatical structures for word-level sentiment detection
Asad Sayeed
|
Jordan Boyd-Graber
|
Bryan Rusk
|
Amy Weinberg
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Incremental Neo-Davidsonian semantic construction for TAG
Asad Sayeed
|
Vera Demberg
Proceedings of the 11th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+11)
Syntactic Surprisal Affects Spoken Word Duration in Conversational Contexts
Vera Demberg
|
Asad Sayeed
|
Philip Gorinski
|
Nikolaos Engonopoulos
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
2011
Crowdsourcing syntactic relatedness judgements for opinion mining in the study of information technology adoption
Asad B. Sayeed
|
Bryan Rusk
|
Martin Petrov
|
Hieu C. Nguyen
|
Timothy J. Meyer
|
Amy Weinberg
Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities
2010
Crowdsourcing the evaluation of a domain-adapted named entity recognition system
Asad B. Sayeed
|
Timothy J. Meyer
|
Hieu C. Nguyen
|
Olivia Buzek
|
Amy Weinberg
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
“Expresses-an-opinion-about”: using corpus statistics in an information extraction approach to opinion mining
Asad B. Sayeed
|
Hieu C. Nguyen
|
Timothy J. Meyer
|
Amy Weinberg
Coling 2010: Posters
2009
Arabic Cross-Document Coreference Resolution
Asad Sayeed
|
Tamer Elsayed
|
Nikesh Garera
|
David Alexander
|
Tan Xu
|
Doug Oard
|
David Yarowsky
|
Christine Piatko
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers
2005
Minimalist Parsing of Subjects Displaced from Embedded Clauses in Free Word Order Languages
Asad B. Sayeed
Proceedings of the ACL Student Research Workshop