Gemma Boleda

Also published as: Gemma Boleda Torrent


2024

pdf
Why do objects have many names? A study on word informativeness in language use and lexical systems
Eleonora Gualdoni | Gemma Boleda
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Human lexicons contain many different words that speakers can use to refer to the same object, e.g., *purple* or *magenta* for the same shade of color. On the one hand, studies on language use have explored how speakers adapt their referring expressions to successfully communicate in context, without focusing on properties of the lexical system. On the other hand, studies in language evolution have discussed how competing pressures for informativeness and simplicity shape lexical systems, without tackling in-context communication. We aim at bridging the gap between these traditions, and explore why a soft mapping between referents and words is a good solution for communication, by taking into account both in-context communication and the structure of the lexicon. We propose a simple measure of informativeness for words and lexical systems, grounded in a visual space, and analyze color naming data for English and Mandarin Chinese. We conclude that optimal lexical systems are those where multiple words can apply to the same referent, conveying different amounts of information. Such systems allow speakers to maximize communication accuracy and minimize the amount of information they convey when communicating about referents in contexts.

2023

pdf
The Impact of Familiarity on Naming Variation: A Study on Object Naming in Mandarin Chinese
Yunke He | Xixian Liao | Jialing Liang | Gemma Boleda
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)

Different speakers often produce different names for the same object or entity (e.g., “woman” vs. “tourist” for a female tourist). The reasons behind variation in naming are not well understood. We create a Language and Vision dataset for Mandarin Chinese that provides an average of 20 names for 1319 naturalistic images, and investigate how familiarity with a given kind of object relates to the degree of naming variation it triggers across subjects. We propose that familiarity influences naming variation in two competing ways: increasing familiarity can either expand vocabulary, leading to higher variation, or promote convergence on conventional names, thereby reducing variation. We find evidence for both factors being at play. Our study illustrates how computational resources can be used to address research questions in Cognitive Science.

pdf
Run Like a Girl! Sport-Related Gender Bias in Language and Vision
Sophia Harrison | Eleonora Gualdoni | Gemma Boleda
Findings of the Association for Computational Linguistics: ACL 2023

Gender bias in Language and Vision datasets and models has the potential to perpetuate harmful stereotypes and discrimination. We analyze gender bias in two Language and Vision datasets. Consistent with prior work, we find that both datasets underrepresent women, which promotes their invisibilization. Moreover, we hypothesize and find that a bias affects human naming choices for people playing sports: speakers produce names indicating the sport (e.g. “tennis player” or “surfer”) more often when it is a man or a boy participating in the sport than when it is a woman or a girl, with an average of 46% vs. 35% of sports-related names for each gender, respectively. A computational model trained on these naming data reproduces the bias. We argue that both the data and the model result in representational harm against women.

2022

pdf
Communication breakdown: On the low mutual intelligibility between human and neural captioning
Roberto Dessì | Eleonora Gualdoni | Francesca Franzon | Gemma Boleda | Marco Baroni
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

We compare the 0-shot performance of a neural caption-based image retriever when given as input either human-produced captions or captions generated by a neural captioner. We conduct this comparison on the recently introduced ImageCoDe dataset (Krojer et al. 2022), which contains hard distractors nearly identical to the images to be retrieved. We find that the neural retriever has much higher performance when fed neural rather than human captions, despite the fact that the former, unlike the latter, were generated without awareness of the distractors that make the task hard. Even more remarkably, when the same neural captions are given to human subjects, their retrieval performance is almost at chance level. Our results thus add to the growing body of evidence that, even when the “language” of neural models resembles English, this superficial resemblance might be deeply misleading.

pdf
CAT ManyNames: A New Dataset for Object Naming in Catalan
Mar Domínguez Orfila | Maite Melero Nogués | Gemma Boleda Torrent
Proceedings of the Workshop on Cognitive Aspects of the Lexicon

Object Naming is an important task within the field of Language and Vision that consists of generating a correct and appropriate name for an object given an image. The ManyNames dataset uses real-world human-annotated images with multiple labels per object, instead of just one. In this work, we describe the adaptation of this dataset (originally in English) to Catalan, by (i) machine-translating the English labels and (ii) collecting human annotations for a subset of the original corpus, and we compare the two resources. Analyses reveal divergences in the lexical variation of the two sets, showing potential problems with directly translated resources, particularly when there is no recourse to a proper context, which in this case is conveyed by the image. The analysis also points to the impact of cultural factors on the naming task, which should be accounted for in future cross-lingual naming tasks.

pdf
The interaction between cognitive ease and informativeness shapes the lexicons of natural languages
Thomas Brochhagen | Gemma Boleda
Proceedings of the first workshop on NLP applications to field linguistics

It is common for languages to express multiple meanings with the same word, a phenomenon known as colexification. For instance, the meanings FINGER and TOE colexify in the word “dedo” in Spanish, while they do not colexify in English. Colexification has been suggested to follow universal constraints. In particular, previous work has shown that related meanings are more prone to colexify. This tendency has been explained in terms of the cognitive pressure for ease, since expressing related meanings with the same word makes lexicons easier to learn and use. The present study examines the interplay between this pressure and a competing universal constraint, the functional pressure for languages to maximize informativeness. We hypothesize that meanings are more likely to colexify if they are related (fostering ease), but not so related as to become confusable and cause misunderstandings (fostering informativeness). We find support for this principle in data from over 1200 languages and 1400 meanings. Our results thus suggest that universal principles shape the lexicons of natural languages. More broadly, they contribute to the growing body of evidence suggesting that languages evolve to strike a balance between competing functional and cognitive pressures.

pdf
Challenges in including extra-linguistic context in pre-trained language models
Ionut Sorodoc | Laura Aina | Gemma Boleda
Proceedings of the Third Workshop on Insights from Negative Results in NLP

To successfully account for language, computational models need to take into account both the linguistic context (the content of the utterances) and the extra-linguistic context (for instance, the participants in a dialogue). We focus on a referential task that asks models to link entity mentions in a TV show to the corresponding characters, and design an architecture that attempts to account for both kinds of context. In particular, our architecture combines a previously proposed specialized module (an “entity library”) for character representation with transfer learning from a pre-trained language model. We find that, although the model does improve linguistic contextualization, it fails to successfully integrate extra-linguistic information about the participants in the dialogue. Our work shows that it is very challenging to incorporate extra-linguistic information into pre-trained language models.

pdf
The interaction between cognitive ease and informativeness shapes the lexicons of natural languages
Thomas Brochhagen | Gemma Boleda
Proceedings of the Society for Computation in Linguistics 2022

pdf
Horse or pony? Visual typicality and lexical frequency affect variability in object naming
Eleonora Gualdoni | Andreas Madebach | Thomas Brochhagen | Gemma Boleda
Proceedings of the Society for Computation in Linguistics 2022

2021

pdf
Does referent predictability affect the choice of referential form? A computational approach using masked coreference resolution
Laura Aina | Xixian Liao | Gemma Boleda | Matthijs Westera
Proceedings of the 25th Conference on Computational Natural Language Learning

It is often posited that more predictable parts of a speaker’s meaning tend to be made less explicit, for instance using shorter, less informative words. Studying these dynamics in the domain of referring expressions has proven difficult, with existing studies, both psycholinguistic and corpus-based, providing contradictory results. We test the hypothesis that speakers produce less informative referring expressions (e.g., pronouns vs. full noun phrases) when the context is more informative about the referent, using novel computational estimates of referent predictability. We obtain these estimates by training an existing coreference resolution system for English on a new task, masked coreference resolution, which gives us a probability distribution over referents that is conditioned on the context but not the referring expression. The resulting system retains standard coreference resolution performance while yielding a better estimate of human-derived referent predictability than previous attempts. A statistical analysis of the relationship between model output and mention form supports the hypothesis that predictability affects the form of a mention, both its morphosyntactic type and its length.

pdf
Controlled tasks for model analysis: Retrieving discrete information from sequences
Ionut-Teodor Sorodoc | Gemma Boleda | Marco Baroni
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

In recent years, the NLP community has shown increasing interest in analysing how deep learning models work. Given that large models trained on complex tasks are difficult to inspect, some of this work has focused on controlled tasks that emulate specific aspects of language. We propose a new set of such controlled tasks to explore a crucial aspect of natural language processing that has not received enough attention: the need to retrieve discrete information from sequences. We also study model behavior on the tasks with simple instantiations of Transformers and LSTMs. Our results highlight the beneficial role of decoder attention and its sometimes unexpected interaction with other components. Moreover, we show that, for most of the tasks, these simple models still show significant difficulties. We hope that the community will take up the analysis possibilities that our tasks afford, and that a clearer understanding of model behavior on the tasks will lead to better and more transparent models.

2020

pdf
Humans Meet Models on Object Naming: A New Dataset and Analysis
Carina Silberer | Sina Zarrieß | Matthijs Westera | Gemma Boleda
Proceedings of the 28th International Conference on Computational Linguistics

We release ManyNames v2 (MN v2), a verified version of an object naming dataset that contains dozens of valid names per object for 25K images. We analyze issues in the data collection method originally employed, standard in Language & Vision (L&V), and find that the main source of noise in the data comes from simulating a naming context solely from an image with a target object marked with a bounding box, which causes subjects to sometimes disagree regarding which object is the target. We also find that both the degree of this uncertainty in the original data and the amount of true naming variation in MN v2 differ substantially across object domains. We use MN v2 to analyze a popular L&V model and demonstrate its effectiveness on the task of object naming. However, our fine-grained analysis reveals that what appears to be human-like model behavior is not stable across domains, e.g., the model confuses people and clothing objects much more frequently than humans do. We also find that standard evaluations underestimate the actual effectiveness of the naming model: on the single-label names of the original dataset (Visual Genome), it obtains 27% fewer accuracy points than on MN v2, which includes all valid object names.

pdf
Object Naming in Language and Vision: A Survey and a New Dataset
Carina Silberer | Sina Zarrieß | Gemma Boleda
Proceedings of the Twelfth Language Resources and Evaluation Conference

People choose particular names for objects, such as dog or puppy for a given dog. Object naming has been studied in Psycholinguistics, but has received relatively little attention in Computational Linguistics. We review resources from Language and Vision that could be used to study object naming on a large scale, discuss their shortcomings, and create a new dataset that affords more opportunities for analysis and modeling. Our dataset, ManyNames, provides 36 name annotations for each of 25K objects in images selected from VisualGenome. We highlight the challenges involved and provide a preliminary analysis of the ManyNames data, showing that there is a high level of agreement in naming, on average. At the same time, the average number of name types associated with an object is much higher in our dataset than in existing corpora for Language and Vision, such that ManyNames provides a rich resource for studying phenomena like hierarchical variation (chihuahua vs. dog), which has been discussed at length in the theoretical literature, and other less well studied phenomena like cross-classification (cake vs. dessert).

pdf
Probing for Referential Information in Language Models
Ionut-Teodor Sorodoc | Kristina Gulordava | Gemma Boleda
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Language models keep track of complex information about the preceding context – including, e.g., syntactic relations in a sentence. We investigate whether they also capture information beneficial for resolving pronominal anaphora in English. We analyze two state-of-the-art models with LSTM and Transformer architectures, via probe tasks and analysis on a coreference-annotated corpus. The Transformer outperforms the LSTM in all analyses. Our results suggest that language models are more successful at learning grammatical constraints than they are at learning truly referential information, in the sense of capturing the fact that we use language to refer to entities in the world. However, we find traces of the latter aspect, too.

2019

pdf
Putting Words in Context: LSTM Language Models and Lexical Ambiguity
Laura Aina | Kristina Gulordava | Gemma Boleda
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information.

pdf
Short-Term Meaning Shift: A Distributional Exploration
Marco Del Tredici | Raquel Fernández | Gemma Boleda
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

We present the first exploration of meaning shift over short periods of time in online communities using distributional representations. We create a small annotated dataset and use it to assess the performance of a standard model for meaning shift detection on short-term meaning shift. We find that the model has problems distinguishing meaning shift from referential phenomena, and propose a measure of contextual variability to remedy this.

pdf
What do Entity-Centric Models Learn? Insights from Entity Linking in Multi-Party Dialogue
Laura Aina | Carina Silberer | Ionut-Teodor Sorodoc | Matthijs Westera | Gemma Boleda
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Humans use language to refer to entities in the external world. Motivated by this, in recent years several models that incorporate a bias towards learning entity representations have been proposed. Such entity-centric models have shown empirical success, but we still know little about why. In this paper we analyze the behavior of two recently proposed entity-centric models in a referential task, Entity Linking in Multi-party Dialogue (SemEval 2018 Task 4). We show that these models outperform the state of the art on this task, and that they do better on lower frequency entities than a counterpart model that is not entity-centric, with the same model size. We argue that making models entity-centric naturally fosters good architectural decisions. However, we also show that these models do not really build entity representations and that they make poor use of linguistic context. These negative results underscore the need for model analysis, to test whether the motivations for particular architectures are borne out in how models behave when deployed.

pdf
Don’t Blame Distributional Semantics if it can’t do Entailment
Matthijs Westera | Gemma Boleda
Proceedings of the 13th International Conference on Computational Semantics - Long Papers

Distributional semantics has had enormous empirical success in Computational Linguistics and Cognitive Science in modeling various semantic phenomena, such as semantic similarity, and distributional models are widely used in state-of-the-art Natural Language Processing systems. However, the theoretical status of distributional semantics within a broader theory of language and cognition is still unclear: What does distributional semantics model? Can it be, on its own, a fully adequate model of the meanings of linguistic expressions? The standard answer is that distributional semantics is not fully adequate in this regard, because it falls short on some of the central aspects of formal semantic approaches: truth conditions, entailment, reference, and certain aspects of compositionality. We argue that this standard answer rests on a misconception: These aspects do not belong in a theory of expression meaning, they are instead aspects of speaker meaning, i.e., communicative intentions in a particular context. In a slogan: words do not refer, speakers do. Clearing this up enables us to argue that distributional semantics on its own is an adequate model of expression meaning. Our proposal sheds light on the role of distributional semantics in a broader theory of language and cognition, its relationship to formal semantics, and its place in computational models.

2018

pdf
How to represent a word and predict it, too: Improving tied architectures for language modelling
Kristina Gulordava | Laura Aina | Gemma Boleda
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Recent state-of-the-art neural language models share the representations of words given by the input and output mappings. We propose a simple modification to these architectures that decouples the hidden state from the word embedding prediction. Our architecture leads to results comparable to or better than those of previous tied models and models without tying, with a much smaller number of parameters. We also extend our proposal to word2vec models, showing that tying is appropriate for general word prediction tasks.

pdf
AMORE-UPF at SemEval-2018 Task 4: BiLSTM with Entity Library
Laura Aina | Carina Silberer | Ionut-Teodor Sorodoc | Matthijs Westera | Gemma Boleda
Proceedings of the 12th International Workshop on Semantic Evaluation

This paper describes our winning contribution to SemEval 2018 Task 4: Character Identification on Multiparty Dialogues. It is a simple, standard model with one key innovation, an entity library. Our results show that this innovation greatly facilitates the identification of infrequent characters. Because of the generic nature of our model, this finding is potentially relevant to any task that requires effective learning from sparse or imbalanced data.

2017

pdf
Distributed Prediction of Relations for Entities: The Easy, The Difficult, and The Impossible
Abhijeet Gupta | Gemma Boleda | Sebastian Padó
Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)

Word embeddings are supposed to provide easy access to semantic relations such as “male of” (man–woman). While this claim has been investigated for concepts, little is known about the distributional behavior of relations of (Named) Entities. We describe two word embedding-based models that predict values for relational attributes of entities, and analyse them. The task is challenging, with major performance differences between relations. Contrary to many NLP tasks, high difficulty for a relation does not result from low frequency, but from (a) one-to-many mappings; and (b) lack of context patterns expressing the relation that are easy to pick up by word embeddings.

pdf
Instances and concepts in distributional space
Gemma Boleda | Abhijeet Gupta | Sebastian Padó
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

Instances (“Mozart”) are ontologically distinct from concepts or classes (“composer”). Natural language encompasses both, but instances have received comparatively little attention in distributional semantics. Our results show that instances and concepts differ in their distributional properties. We also establish that instantiation detection (“Mozart – composer”) is generally easier than hypernymy detection (“chemist – scientist”), and that results on the influence of input representation do not transfer from hyponymy to instantiation.

pdf
Talking about the world with a distributed model
Gemma Boleda
Proceedings of the 10th International Conference on Natural Language Generation

We use language to talk about the world, and so reference is a crucial property of language. However, modeling reference is particularly difficult, as it involves both continuous and discrete aspects of language. For instance, referring expressions like “the big mug” or “it” typically contain content words (“big”, “mug”), which are notoriously fuzzy or vague in their meaning, and also function words (“the”, “it”) that largely serve as discrete pointers. Data-driven, distributed models based on distributional semantics or deep learning excel at the former, but struggle with the latter, and the reverse is true for symbolic models. I present ongoing work on modeling reference with a distributed model that aims to capture both aspects and learns to refer directly from reference acts.

pdf
Living a discrete life in a continuous world: Reference in cross-modal entity tracking
Gemma Boleda | Sebastian Padó | Nghia The Pham | Marco Baroni
Proceedings of the 12th International Conference on Computational Semantics (IWCS) — Short papers

2016

pdf
Convolutional Neural Network Language Models
Ngoc-Quan Pham | German Kruszewski | Gemma Boleda
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
“Look, some Green Circles!”: Learning to Quantify from Images
Ionut Sorodoc | Angeliki Lazaridou | Gemma Boleda | Aurélie Herbelot | Sandro Pezzelle | Raffaella Bernardi
Proceedings of the 5th Workshop on Vision and Language

pdf bib
Formal Distributional Semantics: Introduction to the Special Issue
Gemma Boleda | Aurélie Herbelot
Computational Linguistics, Volume 42, Issue 4 - December 2016

pdf
The LAMBADA dataset: Word prediction requiring a broad discourse context
Denis Paperno | Germán Kruszewski | Angeliki Lazaridou | Ngoc Quan Pham | Raffaella Bernardi | Sandro Pezzelle | Marco Baroni | Gemma Boleda | Raquel Fernández
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

pdf bib
Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics
Martha Palmer | Gemma Boleda | Paolo Rosso
Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics

pdf bib
Distributional vectors encode referential attributes
Abhijeet Gupta | Gemma Boleda | Marco Baroni | Sebastian Padó
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf
Distributional Semantics in Use
Raffaella Bernardi | Gemma Boleda | Raquel Fernández | Denis Paperno
Proceedings of the First Workshop on Linking Computational Models of Lexical, Sentential and Discourse-level Semantics

2014

pdf
UTexas: Natural Language Semantics using Distributional Semantics and Probabilistic Logic
Islam Beltagy | Stephen Roller | Gemma Boleda | Katrin Erk | Raymond Mooney
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

pdf
Inclusive yet Selective: Supervised Distributional Hypernymy Detection
Stephen Roller | Katrin Erk | Gemma Boleda
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

pdf
Intensionality was only alleged: On adjective-noun composition in distributional semantics
Gemma Boleda | Marco Baroni | The Nghia Pham | Louise McNally
Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) – Long Papers

bib
Proceedings of the IWCS 2013 Workshop Towards a Formal Distributional Semantics
Aurelie Herbelot | Roberto Zamparelli | Gemma Boleda
Proceedings of the IWCS 2013 Workshop Towards a Formal Distributional Semantics

pdf bib
Montague Meets Markov: Deep Semantics with Probabilistic Logical Form
Islam Beltagy | Cuong Chau | Gemma Boleda | Dan Garrette | Katrin Erk | Raymond Mooney
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity

2012

pdf
Modeling Regular Polysemy: A Study on the Semantic Classification of Catalan Adjectives
Gemma Boleda | Sabine Schulte im Walde | Toni Badia
Computational Linguistics, Volume 38, Issue 3 - September 2012

pdf
Distributional Semantics in Technicolor
Elia Bruni | Gemma Boleda | Marco Baroni | Nam-Khanh Tran
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Regular polysemy: A distributional model
Gemma Boleda | Sebastian Padó | Jason Utt
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)

pdf
First Order vs. Higher Order Modification in Distributional Semantics
Gemma Boleda | Eva Maria Vecchi | Miquel Cornudella | Louise McNally
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

pdf bib
Extending the tool, or how to annotate historical language varieties
Cristina Sánchez-Marco | Gemma Boleda | Lluís Padró
Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities

2010

pdf
Wikicorpus: A Word-Sense Disambiguated Multilingual Wikipedia Corpus
Samuel Reese | Gemma Boleda | Montse Cuadros | Lluís Padró | German Rigau
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: in its present version, it contains over 750 million words. The corpora have been annotated with lemma and part-of-speech information using the open source library FreeLing. They have also been sense-annotated with the state-of-the-art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: an open source Java-based parser for Wikipedia pages, developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.

pdf
ADN-Classifier: Automatically Assigning Denotation Types to Nominalizations
Aina Peris | Mariona Taulé | Gemma Boleda | Horacio Rodríguez
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper presents the ADN-Classifier, an Automatic classification system of Spanish Deverbal Nominalizations aimed at identifying their semantic denotation (i.e., event, result, underspecified, or lexicalized). The classifier can be used for NLP tasks such as coreference resolution or paraphrase detection. To our knowledge, the ADN-Classifier is the first effort to acquire denotations for nominalizations using Machine Learning. We compare the results of the classifier when using a decreasing number of Knowledge Sources, namely (1) the complete nominal lexicon (AnCora-Nom), which includes sense distinctions, (2) the nominal lexicon (AnCora-Nom) with the sense-specific information removed, (3) nominalizations’ context information obtained from a treebank corpus (AnCora-Es), and (4) the combination of the previous linguistic resources. In a realistic scenario, that is, without sense distinctions, the best results achieved are those taking into account the information declared in the lexicon (89.40% accuracy). This shows that the lexicon contains crucial information (such as argument structure) that corpus-derived features cannot substitute for.

pdf
The Database of Catalan Adjectives
Roser Sanromà | Gemma Boleda
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

We present the Database of Catalan Adjectives (DCA), a database with 2,296 adjective lemmata enriched with morphological, syntactic and semantic information. This set of adjectives has been collected from a fragment of the Corpus Textual Informatitzat de la Llengua Catalana of the Institut d’Estudis Catalans and constitutes a representative sample of the adjective class in Catalan as a whole. The database includes both manually coded and automatically extracted information on the most prominent properties discussed in the literature on the semantics of adjectives, such as morphological origin, suffix (if any), predicativity, gradability, adjective position with respect to the head noun, adjective modifiers, and semantic class. The DCA can be useful for NLP applications that use adjectives (from POS-taggers to Opinion Mining applications) and for linguistic analysis of the morphological, syntactic, and semantic properties of adjectives. We now make it available to the research community under a Creative Commons Attribution Share Alike 3.0 Spain license.

pdf
Annotation and Representation of a Diachronic Corpus of Spanish
Cristina Sánchez-Marco | Gemma Boleda | Josep Maria Fontana | Judith Domingo
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In this article we describe two different strategies for the automatic tagging of a Spanish diachronic corpus, both involving the adaptation of existing NLP tools developed for modern Spanish. In the initial approach we follow a state-of-the-art strategy, which consists of standardizing the spelling and the lexicon. This approach boosts POS-tagging accuracy to 90%, which represents a raw improvement of over 20% with respect to the results obtained without any pre-processing. In order to enable users who are not experts in NLP to use this new resource, the corpus has been integrated into IAC (Corpora Interface Access). We discuss the shortcomings of the initial approach and propose a new one, which does not consist in adapting the source texts to the tagger, but rather in modifying the tagger for the direct treatment of the old variants. This second strategy addresses some important shortcomings of the previous approach and is likely to be useful not only in the creation of diachronic linguistic resources but also in the treatment of dialectal or non-standard variants of synchronic languages.

pdf
Language Technology Challenges of a ‘Small’ Language (Catalan)
Maite Melero | Gemma Boleda | Montse Cuadros | Cristina España-Bonet | Lluís Padró | Martí Quixal | Carlos Rodríguez | Roser Saurí
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In this paper, we present a brief snapshot of the state of affairs in computational processing of Catalan and the initiatives that are starting to take place in an effort to bring the field a step forward, by making better and more efficient use of the already existing resources and tools, by bridging the gap between research and market, and by establishing periodic meeting points for the community. In particular, we present the results of the First Workshop on the Computational Processing of Catalan, which succeeded in putting together a fair representation of the research in the area, and received attention from both the industry and the administration. Aside from facilitating communication among researchers and between developers and users, the Workshop provided the organizers with valuable information about existing resources, tools, developers and providers. This information has allowed us to go a step further by setting up a “harvesting” procedure which will hopefully form the seed of a portal-catalogue-observatory of language resources and technologies in Catalan.

2008

pdf bib
Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics
Ron Artstein | Gemma Boleda | Frank Keller | Sabine Schulte im Walde
Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics

pdf
Evaluation of a Machine Translation System for Low Resource Languages: METIS-II
Vincent Vandeghinste | Peter Dirix | Ineke Schuurman | Stella Markantonatou | Sokratis Sofianopoulos | Marina Vassiliou | Olga Yannoutsou | Toni Badia | Maite Melero | Gemma Boleda | Michael Carl | Paul Schmidt
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

In this paper we describe the METIS-II system and its evaluation on each of the language pairs: Dutch, German, Greek, and Spanish to English. The METIS-II project envisaged developing a data-driven approach in which no parallel corpus is required, and in which no full parser or extensive rule sets are needed. We describe the evaluation on a development test set and on a test set coming from Europarl, and compare our results with SYSTRAN. We also provide some further analysis, researching the impact of the number and source of the reference translations and analysing the results according to test text type. The results are, as expected, lower for the METIS system, but not at an unattainable distance from a mature system like SYSTRAN.

2007

pdf
Modelling Polysemy in Adjective Classes by Multi-Label Classification
Gemma Boleda | Sabine Schulte im Walde | Toni Badia
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

pdf
CUCWeb: A Catalan corpus built from the Web
Gemma Boleda | Stefan Bott | Rodrigo Meza | Carlos Castillo | Toni Badia | Vicente López
Proceedings of the 2nd International Workshop on Web as Corpus

2005

pdf bib
An n-gram Approach to Exploiting a Monolingual Corpus for Machine Translation
Toni Badia | Gemma Boleda | Maite Melero | Antoni Oliver
Workshop on example-based machine translation

pdf
Morphology vs. Syntax in Adjective Class Acquisition
Gemma Boleda | Toni Badia | Sabine Schulte im Walde
Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition

2004

pdf
The Influence of Argument Structure on Semantic Role Assignment
Sebastian Padó | Gemma Boleda
Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing

pdf
Acquisition of Semantic Classes for Adjectives from Distributional Evidence
Gemma Boleda | Toni Badia | Eloi Batlle
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2003

pdf
Clustering Adjectives for Class Discovery
Gemma Boleda Torrent | Laura Alonso i Alemany
Student Research Workshop

2002

pdf
CATCG: a general purpose parsing tool applied
Alex Alsina | Toni Badia | Gemma Boleda | Stefan Bott | Àngel Gil | Martí Quixal | Oriol Valentín
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)
