Sabine Schulte im Walde

Also published as: Sabine Schulte Im Walde, Sabine Schulte in Walde


2024

The DURel Annotation Tool: Human and Computational Measurement of Semantic Proximity, Sense Clusters and Semantic Change
Dominik Schlechtweg | Shafqat Mumtaz Virk | Pauline Sander | Emma Sköldberg | Lukas Theuer Linke | Tuo Zhang | Nina Tahmasebi | Jonas Kuhn | Sabine Schulte Im Walde
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations

We present the DURel tool, which implements the annotation of semantic proximity between word uses in an online, open-source interface. The tool supports standardized human annotation as well as computational annotation, building on recent advances with Word-in-Context models. Annotator judgments are clustered with automatic graph clustering techniques and visualized for analysis. This makes it possible to measure word senses with simple and intuitive micro-task judgments between use pairs, requiring minimal preparation effort. The tool offers additional functionalities to compare agreement between annotators, in order to guarantee the inter-subjectivity of the obtained judgments, and to calculate summary statistics over the annotated data, giving insights into sense frequency distributions, semantic variation, and changes of senses over time.
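
As a rough illustration of the workflow described above (toy judgments, an illustrative relatedness threshold, and none of the tool's actual code), annotator ratings over use pairs can be turned into a Word Usage Graph, clustered into senses, and checked for agreement:

```python
# Minimal sketch of a DURel-style pipeline: judged use pairs -> agreement check
# -> Word Usage Graph -> sense clusters. All values are toy data.
import networkx as nx
from scipy.stats import spearmanr

# Each judgment: (use_id_1, use_id_2, annotator, relatedness on a 1-4 scale)
judgments = [
    ("u1", "u2", "A", 4), ("u1", "u2", "B", 4),
    ("u1", "u3", "A", 1), ("u1", "u3", "B", 2),
    ("u2", "u3", "A", 1), ("u2", "u3", "B", 1),
]

# Inter-annotator agreement: Spearman correlation over shared use pairs.
pairs = sorted({(u, v) for u, v, _, _ in judgments})
by_annotator = {ann: {(u, v): r for u, v, a, r in judgments if a == ann}
                for ann in {a for _, _, a, _ in judgments}}
ann1, ann2 = sorted(by_annotator)
rho, _ = spearmanr([by_annotator[ann1][p] for p in pairs],
                   [by_annotator[ann2][p] for p in pairs])
print(f"agreement (Spearman): {rho:.2f}")

# Word Usage Graph: nodes are word uses, edge weights are mean judgments.
G = nx.Graph()
for u, v in pairs:
    scores = [r for x, y, _, r in judgments if (x, y) == (u, v)]
    G.add_edge(u, v, weight=sum(scores) / len(scores))

# Crude sense clustering: keep edges judged as related (mean >= 2.5) and
# treat connected components as sense clusters.
related = nx.Graph((u, v) for u, v, d in G.edges(data=True) if d["weight"] >= 2.5)
related.add_nodes_from(G)
print([sorted(c) for c in nx.connected_components(related)])
```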

What Can Diachronic Contexts and Topics Tell Us about the Present-Day Compositionality of English Noun Compounds?
Samin Mahdizadeh Sani | Malak Rassem | Chris Jenkins | Filip Miletić | Sabine Schulte im Walde
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Predicting the compositionality of noun compounds such as climate change and tennis elbow is a vital component in natural language understanding. While most previous computational methods that automatically determine the semantic relatedness between compounds and their constituents have applied a synchronic perspective, the current study investigates what diachronic changes in contexts and semantic topics of compounds and constituents reveal about the compounds’ present-day degrees of compositionality. We define a binary classification task that utilizes two diachronic vector spaces based on contextual co-occurrences and semantic topics, and demonstrate that diachronic changes in cosine similarities – measured over context or topic distributions – uncover patterns that distinguish between compounds with low and high present-day compositionality. Despite fewer dimensions in the topic models, the topic space performs on par with the co-occurrence space and captures rather similar information. Temporal similarities between compounds and modifiers as well as between compounds and their prepositional paraphrases predict the compounds’ present-day compositionality with accuracy >0.7.
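
For illustration, a minimal sketch of the underlying idea (toy random vectors rather than the paper's corpora): compute compound-constituent cosine similarities per time slice and feed the resulting curve to a classifier.

```python
# Diachronic compositionality features: how similar is the compound to its
# modifier and head in each decade? Vectors here are random placeholders.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
decades = [1900, 1950, 2000]
vocab = ["climate change", "climate", "change"]
# vectors[decade][word] -> context-count or topic vector
vectors = {d: {w: rng.random(50) for w in vocab} for d in decades}

# Feature vector: compound-modifier and compound-head similarity per decade;
# consistently low similarities would point to low compositionality.
features = []
for d in decades:
    v = vectors[d]
    features += [cosine(v["climate change"], v["climate"]),
                 cosine(v["climate change"], v["change"])]
print(np.round(features, 3))
```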

Willkommens-Merkel, Chaos-Johnson, and Tore-Klose: Modeling the Evaluative Meaning of German Personal Name Compounds
Annerose Eichel | Tana Deeg | Andre Blessing | Milena Belosevic | Sabine Arndt-Lappe | Sabine Schulte im Walde
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

We present a comprehensive computational study of the under-investigated phenomenon of personal name compounds (PNCs) in German such as Willkommens-Merkel (‘Welcome-Merkel’). Prevalent in news, social media, and political discourse, PNCs are hypothesized to exhibit an evaluative function that is reflected in a more positive or negative perception as compared to the respective personal full name (such as Angela Merkel). We model 321 PNCs and their corresponding full names at discourse level, and show that PNCs bear an evaluative nature that can be captured through a variety of computational methods. Specifically, we assess through valence information whether a PNC is more positively or negatively evaluative than the person’s name, by applying and comparing two approaches using (i) valence norms and (ii) pre-trained language models (PLMs). We further enrich our data with personal, domain-specific, and extra-linguistic information and perform a range of regression analyses revealing that factors including compound and modifier valence, domain, and political party membership influence how a PNC is evaluated.

Semantics of Multiword Expressions in Transformer-Based Models: A Survey
Filip Miletić | Sabine Schulte im Walde
Transactions of the Association for Computational Linguistics, Volume 12

Multiword expressions (MWEs) are composed of multiple words and exhibit variable degrees of compositionality. As such, their meanings are notoriously difficult to model, and it is unclear to what extent this issue affects transformer architectures. Addressing this gap, we provide the first in-depth survey of MWE processing with transformer models. We overall find that they capture MWE semantics inconsistently, as shown by reliance on surface patterns and memorized information. MWE meaning is also strongly localized, predominantly in early layers of the architecture. Representations benefit from specific linguistic properties, such as lower semantic idiosyncrasy and ambiguity of target expressions. Our findings overall question the ability of transformer models to robustly capture fine-grained semantics. Furthermore, we highlight the need for more directly comparable evaluation setups.

Comparison of Image Generation Models for Abstract and Concrete Event Descriptions
Mohammed Khaliq | Diego Frassinelli | Sabine Schulte Im Walde
Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)

With the advent of diffusion-based image generation models such as DALL-E, Stable Diffusion and Midjourney, high-quality images can easily be generated from textual inputs. It is unclear, however, to what extent the generated images resemble human mental representations, especially regarding abstract event knowledge. We analyse the capability of four state-of-the-art models in generating images of verb-object event pairs when we systematically manipulate the degrees of abstractness of both the verbs and the object nouns. Human judgements of the generated images demonstrate that DALL-E is strongest for event pairs with concrete nouns (e.g., “pour water”; “believe person”), while Midjourney is preferred for event pairs with abstract nouns (e.g., “raise awareness”; “remain mystery”), irrespective of the concreteness of the verb. Across models, humans were most unsatisfied with images of event pairs that combined concrete verbs with abstract direct-object nouns (e.g., “speak truth”), and an additional ad-hoc annotation attributes this to the potential of such pairs for figurative language.

Cross-Lingual Metaphor Detection for Low-Resource Languages
Anna Hülsing | Sabine Schulte Im Walde
Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024)

Research on metaphor detection (MD) in a multilingual setup has recently gained momentum. As for many tasks, it is however unclear how the amount of data used to pretrain large language models affects the performance, and whether non-neural models might provide a reasonable alternative, especially for MD in low-resource languages. This paper compares neural and non-neural cross-lingual models for English as the source language and Russian, German and Latin as target languages. In a series of experiments we show that the neural cross-lingual adapter architecture MAD-X performs best across target languages. Zero-shot classification with mBERT achieves decent results above the majority baseline, while few-shot classification with mBERT heavily depends on shot-selection, which is inconvenient in a cross-lingual setup where no validation data for the target language exists. The non-neural model, a random forest classifier with conceptual features, is outperformed by the neural models. Overall, we recommend MAD-X for metaphor detection not only in high-resource but also in low-resource scenarios regarding the amounts of pretraining data for mBERT.

VOLIMET: A Parallel Corpus of Literal and Metaphorical Verb-Object Pairs for English–German and English–French
Prisca Piccirilli | Alexander Fraser | Sabine Schulte im Walde
Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)

The interplay of cultural and linguistic elements that characterizes metaphorical language poses a substantial challenge for both human comprehension and machine processing. This challenge goes beyond monolingual settings and becomes particularly complex in translation, even more so in automatic translation. We present VOLIMET, a corpus of 2,916 parallel sentences containing gold standard alignments of metaphorical verb-object pairs and their literal paraphrases, e.g., tackle/address question, from English to German and French. On the one hand, the parallel nature of our corpus enables us to explore monolingual patterns for metaphorical vs. literal uses in English. On the other hand, we investigate different aspects of cross-lingual translations into German and French and the extent to which metaphoricity and literalness in the source language are transferred to the target languages. Monolingually, our findings reveal clear preferences in using metaphorical or literal uses of verb-object pairs. Cross-lingually, we observe a rich variability in translations as well as different behaviors for our two target languages.

Evaluating Semantic Relations in Predicting Textual Labels for Images of Abstract and Concrete Concepts
Tarun Tater | Sabine Schulte Im Walde | Diego Frassinelli
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

This study investigates the performance of SigLIP, a state-of-the-art Vision-Language Model (VLM), in predicting labels for images depicting 1,278 concepts. Our analysis across 300 images per concept shows that the model frequently predicts the exact user-tagged labels, but similarly, it often predicts labels that are semantically related to the exact labels in various ways: synonyms, hypernyms, co-hyponyms, and associated words, particularly for abstract concepts. We then zoom into the diversity of the user tags of images and word associations for abstract versus concrete concepts. Surprisingly, not only abstract but also concrete concepts exhibit significant variability, thus challenging the traditional view that representations of concrete concepts are less diverse.

Multi-Task Learning with Adapters for Plausibility Prediction: Bridging the Gap or Falling into the Trenches?
Annerose Eichel | Sabine Schulte Im Walde
Proceedings of the Fifth Workshop on Insights from Negative Results in NLP

We present a multi-task learning approach to predicting semantic plausibility by leveraging 50+ adapters categorized into 17 tasks within an efficient training framework. Across four plausibility datasets in English of varying size and linguistic constructions, we compare how models provided with knowledge from a range of NLP tasks perform in contrast to models without external information. Our results show that the benefits of complementary knowledge (e.g., provided by syntactic tasks) for plausibility prediction are significant but not substantial, while performance may be hurt when injecting knowledge from an unsuitable task. Similarly important, we find that knowledge transfer may be hindered by class imbalance, and we demonstrate the positive yet minor effect of balancing training data, even at the expense of size.

2023

Made of Steel? Learning Plausible Materials for Components in the Vehicle Repair Domain
Annerose Eichel | Helena Schlipf | Sabine Schulte im Walde
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

We propose a novel approach to learn domain-specific plausible materials for components in the vehicle repair domain by probing Pretrained Language Models (PLMs) in a cloze task style setting to overcome the lack of annotated datasets. We devise a new method to aggregate salient predictions from a set of cloze query templates and show that domain-adaptation using either a small, high-quality or a customized Wikipedia corpus boosts performance. When exploring resource-lean alternatives, we find a distilled PLM clearly outperforming a classic pattern-based algorithm. Further, given that 98% of our domain-specific components are multiword expressions, we successfully exploit the compositionality assumption as a way to address data sparsity.

A Systematic Search for Compound Semantics in Pretrained BERT Architectures
Filip Miletic | Sabine Schulte im Walde
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

To date, transformer-based models such as BERT have been less successful in predicting compositionality of noun compounds than static word embeddings. This is likely related to a suboptimal use of the encoded information, reflecting an incomplete grasp of how the models represent the meanings of complex linguistic structures. This paper investigates variants of semantic knowledge derived from pretrained BERT when predicting the degrees of compositionality for 280 English noun compounds associated with human compositionality ratings. Our performance strongly improves on earlier unsupervised implementations of pretrained BERT and highlights beneficial decisions in data preprocessing, embedding computation, and compositionality estimation. The distinct linguistic roles of heads and modifiers are reflected by differences in BERT-derived representations, with empirical properties such as frequency, productivity, and ambiguity affecting model performance. The most relevant representational information is concentrated in the initial layers of the model architecture.
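
The following sketch illustrates one such design decision (the model, layer and pooling choices are assumptions for illustration, not the paper's exact setup): derive a compound representation from an early BERT layer by averaging its sub-token states and compare it with a constituent representation.

```python
# Compare a compound's contextualized vector with a constituent's vector,
# taken from an early hidden layer of pretrained BERT.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def word_vector(sentence, word, layer=2):
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer][0]        # (seq_len, dim)
    target_ids = tok(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            # average the sub-token vectors belonging to the target word
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in sentence")

sent = "The climate change debate intensified last year."
compound = word_vector(sent, "climate change")
head = word_vector(sent, "change")
print(torch.cosine_similarity(compound, head, dim=0).item())
```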

A Dataset for Physical and Abstract Plausibility and Sources of Human Disagreement
Annerose Eichel | Sabine Schulte Im Walde
Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)

We present a novel dataset for physical and abstract plausibility of events in English. Based on naturally occurring sentences extracted from Wikipedia, we infiltrate degrees of abstractness, and automatically generate perturbed pseudo-implausible events. We annotate a filtered and balanced subset for plausibility using crowd-sourcing, and perform extensive cleansing to ensure annotation quality. In-depth quantitative analyses indicate that annotators favor plausibility over implausibility and disagree more on implausible events. Furthermore, our plausibility dataset is the first to capture abstractness in events to the same extent as concreteness, and we find that event abstractness has an impact on plausibility ratings: more concrete event participants trigger a perception of implausibility.

Classifying Noun Compounds for Present-Day Compositionality: Contributions of Diachronic Frequency and Productivity Patterns
Maximilian M. Maurer | Chris Jenkins | Filip Miletić | Sabine Schulte im Walde
Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)

To Split or Not to Split: Composing Compounds in Contextual Vector Spaces
Chris Jenkins | Filip Miletic | Sabine Schulte im Walde
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

We investigate the effect of sub-word tokenization on representations of German noun compounds: single orthographic words which are composed of two or more constituents but often tokenized into units that are not morphologically motivated or meaningful. Using variants of BERT models and tokenization strategies on domain-specific restricted diachronic data, we introduce a suite of evaluations relying on the masked language modelling task and compositionality prediction. We obtain the most consistent improvements by pre-splitting compounds into constituents.

Investigating the Nature of Disagreements on Mid-Scale Ratings: A Case Study on the Abstractness-Concreteness Continuum
Urban Knupleš | Diego Frassinelli | Sabine Schulte im Walde
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)

Humans tend to strongly agree on ratings on a scale for extreme cases (e.g., a CAT is judged as very concrete), but judgements on mid-scale words exhibit more disagreement. Yet, collected rating norms are heavily exploited across disciplines. Our study focuses on concreteness ratings and (i) implements correlations and supervised classification to identify salient multi-modal characteristics of mid-scale words, and (ii) applies a hard clustering to identify patterns of systematic disagreement across raters. Our results suggest either fine-tuning or filtering mid-scale target words before utilising them.

2022

Concreteness vs. Abstractness: A Selectional Preference Perspective
Tarun Tater | Diego Frassinelli | Sabine Schulte im Walde
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop

Concrete words refer to concepts that are strongly experienced through human senses (banana, chair, salt, etc.), whereas abstract concepts are less perceptually salient (idea, glory, justice, etc.). A clear definition of abstractness is crucial for the understanding of human cognitive processes and for the development of natural language applications such as figurative language detection. In this study, we investigate selectional preferences as a criterion to distinguish between concrete and abstract concepts and words: we hypothesise that abstract and concrete verbs and nouns differ regarding the semantic classes of their arguments. Our study uses a collection of 5,438 nouns and 1,275 verbs to exploit selectional preferences as a salient characteristic in classifying English abstract vs. concrete words, and in predicting their concreteness scores. We achieve an f1-score of 0.84 for nouns and 0.71 for verbs in classification, and Spearman’s ρ correlation of 0.86 for nouns and 0.59 for verbs.

DiaWUG: A Dataset for Diatopic Lexical Semantic Variation in Spanish
Gioia Baldissin | Dominik Schlechtweg | Sabine Schulte im Walde
Proceedings of the Thirteenth Language Resources and Evaluation Conference

We provide a novel dataset – DiaWUG – with judgements on diatopic lexical semantic variation for six Spanish variants in Europe and Latin America. In contrast to most previous meaning-based resources and studies on semantic diatopic variation, we collect annotations on semantic relatedness for Spanish target words in their contexts from both a semasiological perspective (i.e., exploring the meanings of a word given its form, thus including polysemy) and an onomasiological perspective (i.e., exploring identical meanings of words with different forms, thus including synonymy). In addition, our novel dataset exploits and extends the existing framework DURel for annotating word senses in context (Erk et al., 2013; Schlechtweg et al., 2018) and the framework-embedded Word Usage Graphs (WUGs) – which up to now have mainly been used for semasiological tasks and resources – in order to distinguish, visualize and interpret lexical semantic variation of contextualized words in Spanish from these two perspectives, i.e., semasiological and onomasiological language variation.

Features of Perceived Metaphoricity on the Discourse Level: Abstractness and Emotionality
Prisca Piccirilli | Sabine Schulte im Walde
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Research on metaphorical language has shown ties between abstractness and emotionality with regard to metaphoricity; prior work is however limited to the word and sentence levels, and to date there is no empirical study establishing the extent to which this is also true on the discourse level. This paper explores which textual and perceptual features human annotators perceive as important for the metaphoricity of discourses and expressions, and addresses two research questions more specifically. First, is a metaphorically-perceived discourse more abstract and more emotional in comparison to a literally-perceived discourse? Second, is a metaphorical expression preceded by a more metaphorical/abstract/emotional context than a synonymous literal alternative? We used a dataset of 1,000 corpus-extracted discourses for which crowdsourced annotators (1) provided judgements on whether they perceived the discourses as more metaphorical or more literal, and (2) systematically listed lexical terms which triggered their decisions in (1). Our results indicate that metaphorical discourses are more emotional and to a certain extent more abstract than literal discourses. However, neither the metaphoricity nor the abstractness and emotionality of the preceding discourse seem to play a role in triggering the choice between synonymous metaphorical vs. literal expressions. Our dataset is available at https://www.ims.uni-stuttgart.de/data/discourse-met-lit.

Investigating Independence vs. Control: Agenda-Setting in Russian News Coverage on Social Media
Annerose Eichel | Gabriella Lapesa | Sabine Schulte im Walde
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Agenda-setting is a widely explored phenomenon in political science: powerful stakeholders (governments or their financial supporters) have control over the media and set their agenda: political and economical powers determine which news should be salient. This is a clear case of targeted manipulation to divert the public attention from serious issues affecting internal politics (such as economic downturns and scandals) by flooding the media with potentially distracting information. We investigate agenda-setting in the Russian social media landscape, exploring the relation between economic indicators and mentions of foreign geopolitical entities, as well as of Russia itself. Our contributions are at three levels: at the level of the domain of the investigation, our study is the first to substructure the Russian media landscape in state-controlled vs. independent outlets in the context of strategic distraction from negative economic trends; at the level of the scope of the investigation, we involve a large set of geopolitical entities (while previous work has focused on the U.S.); at the qualitative level, our analysis of posts on Ukraine, whose relationship with Russia is of high geopolitical relevance, provides further insights into the contrast between state-controlled and independent outlets.

Figurative Language in Noun Compound Models across Target Properties, Domains and Time
Sabine Schulte im Walde
Proceedings of the 18th Workshop on Multiword Expressions @LREC2022

A variety of distributional and multi-modal computational approaches has been suggested for modelling the degrees of compositionality across types of multiword expressions and languages. As the starting point of my talk, I will present standard variants of computational models that have been proven successful in predicting the compositionality of German and English noun compounds. The main part of the talk will then be concerned with investigating the general reliability of these standard models and discussing implications for gold-standard datasets: I will demonstrate how prediction results vary (i) across representations, (ii) across empirical target properties, (iii) across compound types, (iv) across levels of abstractness, and (v) for general- vs. domain-specific language. Finally, I will present a preliminary quantitative study on diachronic changes of noun compound meanings and compositionality over time.

What Drives the Use of Metaphorical Language? Negative Insights from Abstractness, Affect, Discourse Coherence and Contextualized Word Representations
Prisca Piccirilli | Sabine Schulte Im Walde
Proceedings of the 11th Joint Conference on Lexical and Computational Semantics

Given a specific discourse, which discourse properties trigger the use of metaphorical language, rather than using literal alternatives? For example, what drives people to say grasp the meaning rather than understand the meaning within a specific context? Many NLP approaches to metaphorical language rely on cognitive and (psycho-)linguistic insights and have successfully defined models of discourse coherence, abstractness and affect. In this work, we build five simple models relying on established cognitive and linguistic properties – frequency, abstractness, affect, discourse coherence and contextualized word representations – to predict the use of a metaphorical vs. synonymous literal expression in context. By comparing the models’ outputs to human judgments, our study indicates that our selected properties are not sufficient to systematically explain metaphorical vs. literal language choices.

2021

Explaining and Improving BERT Performance on Lexical Semantic Change Detection
Severin Laicher | Sinan Kurtyigit | Dominik Schlechtweg | Jonas Kuhn | Sabine Schulte im Walde
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop

Type- and token-based embedding architectures are still competing in lexical semantic change detection. The recent success of type-based models in SemEval-2020 Task 1 has raised the question why the success of token-based models on a variety of other NLP tasks does not translate to our field. We investigate the influence of a range of variables on clusterings of BERT vectors and show that its low performance is largely due to orthographic information on the target word, which is encoded even in the higher layers of BERT representations. By reducing the influence of orthography we considerably improve BERT’s performance.
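
A minimal sketch of a cluster-based change score in this spirit (synthetic vectors; the clustering method and cluster number are illustrative assumptions): cluster the target word's token vectors from two periods and compare the resulting cluster distributions.

```python
# Token-based change score: cluster BERT token vectors of a target word from
# two corpora and measure the shift between the cluster (sense) distributions.
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
vecs_t1 = rng.normal(0.0, 1, (100, 768))    # token vectors, earlier period
vecs_t2 = rng.normal(0.5, 1, (100, 768))    # token vectors, later period

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(np.vstack([vecs_t1, vecs_t2]))
labels_t1, labels_t2 = km.labels_[:100], km.labels_[100:]

dist_t1 = np.bincount(labels_t1, minlength=4) / 100
dist_t2 = np.bincount(labels_t2, minlength=4) / 100
print("change score (Jensen-Shannon):", jensenshannon(dist_t1, dist_t2))
```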

Contextual Choice between Synonymous Pairs of Metaphorical and Literal Expressions: An Empirical Study and Novel Dataset to tackle or to address the Question
Prisca Piccirilli | Sabine Schulte im Walde
Proceedings of the First Workshop on Integrating Perspectives on Discourse Annotation

WordGuess: Using Associations for Guessing, Learning and Exploring Related Words
Cennet Oguz | André Blessing | Jonas Kuhn | Sabine Schulte Im Walde
Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021)

More than just Frequency? Demasking Unsupervised Hypernymy Prediction Methods
Thomas Bott | Dominik Schlechtweg | Sabine Schulte im Walde
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Regression Analysis of Lexical and Morpho-Syntactic Properties of Kiezdeutsch
Diego Frassinelli | Gabriella Lapesa | Reem Alatrash | Dominik Schlechtweg | Sabine Schulte im Walde
Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects

Kiezdeutsch is a variety of German predominantly spoken by teenagers from multi-ethnic urban neighborhoods in casual conversations with their peers. In recent years, the popularity of Kiezdeutsch has increased among young people, independently of their socio-economic origin, and has spread in social media, too. While previous studies have extensively investigated this language variety from a linguistic and qualitative perspective, not much has been done from a quantitative point of view. We perform the first large-scale data-driven analysis of the lexical and morpho-syntactic properties of Kiezdeutsch in comparison with standard German. At the level of results, we confirm predictions of previous qualitative analyses and integrate them with further observations on specific linguistic phenomena such as slang and self-centered speaker attitude. At the methodological level, we provide logistic regression as a framework to perform bottom-up feature selection in order to quantify differences across language varieties.

Lexical Semantic Change Discovery
Sinan Kurtyigit | Maike Park | Dominik Schlechtweg | Jonas Kuhn | Sabine Schulte im Walde
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

While there is a large amount of research in the field of Lexical Semantic Change Detection, only a few approaches go beyond a standard benchmark evaluation of existing models. In this paper, we propose a shift of focus from change detection to change discovery, i.e., discovering novel word senses over time from the full corpus vocabulary. By heavily fine-tuning a type-based and a token-based approach on recently published German data, we demonstrate that both models can successfully be applied to discover new words undergoing meaning change. Furthermore, we provide an almost fully automated framework for both evaluation and discovery.

Modeling Sense Structure in Word Usage Graphs with the Weighted Stochastic Block Model
Dominik Schlechtweg | Enrique Castaneda | Jonas Kuhn | Sabine Schulte im Walde
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics

We suggest to model human-annotated Word Usage Graphs capturing fine-grained semantic proximity distinctions between word uses with a Bayesian formulation of the Weighted Stochastic Block Model, a generative model for random graphs popular in biology, physics and social sciences. By providing a probabilistic model of graded word meaning we aim to approach the slippery and yet widely used notion of word sense in a novel way. The proposed framework enables us to rigorously compare models of word senses with respect to their fit to the data. We perform extensive experiments and select the empirically most adequate model.

Compound or Term Features? Analyzing Salience in Predicting the Difficulty of German Noun Compounds across Domains
Anna Hätty | Julia Bettinger | Michael Dorna | Jonas Kuhn | Sabine Schulte im Walde
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics

Predicting the difficulty of domain-specific vocabulary is an important task towards a better understanding of a domain, and to enhance the communication between lay people and experts. We investigate German closed noun compounds and focus on the interaction of compound-based lexical features (such as frequency and productivity) and terminology-based features (contrasting domain-specific and general language) across word representations and classifiers. Our prediction experiments complement insights from classification using (a) manually designed features to characterise termhood and compound formation and (b) compound and constituent word embeddings. We find that for a broad binary distinction into ‘easy’ vs. ‘difficult’, general-language compound frequency is sufficient, but for a more fine-grained four-class distinction it is crucial to include contrastive termhood features as well as compound and constituent features.

2020


Variants of Vector Space Reductions for Predicting the Compositionality of English Noun Compounds
Pegah Alipoormolabashi | Sabine Schulte im Walde
Proceedings of the Fourth Widening Natural Language Processing Workshop

Predicting the degree of compositionality of noun compounds is a crucial ingredient for lexicography and NLP applications, to know whether the compound should be treated as a whole, or through its constituents. Computational approaches for an automatic prediction typically represent compounds and their constituents within a vector space to have a numeric relatedness measure for the words. This paper provides a systematic evaluation of using different vector-space reduction variants for the prediction. We demonstrate that Word2vec and nouns-only dimensionality reductions are the most successful and stable vector space reduction variants for our task.

A Domain-Specific Dataset of Difficulty Ratings for German Noun Compounds in the Domains DIY, Cooking and Automotive
Julia Bettinger | Anna Hätty | Michael Dorna | Sabine Schulte im Walde
Proceedings of the Twelfth Language Resources and Evaluation Conference

We present a dataset with difficulty ratings for 1,030 German closed noun compounds extracted from domain-specific texts for do-it-yourself (DIY), cooking and automotive. The dataset includes two-part compounds for cooking and DIY, and two- to four-part compounds for automotive. The compounds were identified in text using the Simple Compound Splitter (Weller-Di Marco, 2017); a subset was filtered and balanced for frequency and productivity criteria as basis for manual annotation and fine-grained interpretation. This study presents the creation, the final dataset with ratings from 20 annotators and statistics over the dataset, to provide insight into the perception of domain-specific term difficulty. It is particularly striking that annotators agree on a coarse, binary distinction between easy vs. difficult domain-specific compounds, but that a more fine-grained distinction of difficulty is not meaningful. We finally discuss the challenges of an annotation for difficulty, which includes both the task description as well as the selection of the data basis.

Variants of Vector Space Reductions for Predicting the Compositionality of English Noun Compounds
Pegah Alipoor | Sabine Schulte im Walde
Proceedings of the Twelfth Language Resources and Evaluation Conference

Predicting the degree of compositionality of noun compounds such as “snowball” and “butterfly” is a crucial ingredient for lexicography and Natural Language Processing applications, to know whether the compound should be treated as a whole, or through its constituents, and what it means. Computational approaches for an automatic prediction typically represent and compare compounds and their constituents within a vector space and use distributional similarity as a proxy to predict the semantic relatedness between the compounds and their constituents as the compound’s degree of compositionality. This paper provides a systematic evaluation of vector-space reduction variants across kinds, exploring reductions based on part-of-speech next to, and also in combination with, Principal Component Analysis using Singular Value Decomposition, as well as word2vec embeddings. We show that word2vec and nouns-only dimensionality reductions are the most successful and stable vector space variants for our task.
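
A minimal sketch of one reduction variant (toy co-occurrence counts and an illustrative dimensionality): truncate the matrix with SVD and read off compositionality as the cosine between compound and constituent vectors.

```python
# Reduce a co-occurrence matrix with truncated SVD and predict compositionality
# as compound-constituent similarity. Counts are random placeholders.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

targets = ["snowball", "snow", "ball", "butterfly", "butter", "fly"]
rng = np.random.default_rng(2)
# 200 target rows x 500 context columns; our six words sit in the first rows
counts = rng.poisson(3.0, size=(200, 500)).astype(float)

reduced = TruncatedSVD(n_components=50, random_state=0).fit_transform(counts)
vec = dict(zip(targets, reduced[: len(targets)]))

def compositionality(compound, modifier, head):
    sims = cosine_similarity([vec[compound]], [vec[modifier], vec[head]])[0]
    return sims.mean()      # higher = more compositional

print(compositionality("snowball", "snow", "ball"))
print(compositionality("butterfly", "butter", "fly"))
```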

Varying Vector Representations and Integrating Meaning Shifts into a PageRank Model for Automatic Term Extraction
Anurag Nigam | Anna Hätty | Sabine Schulte im Walde
Proceedings of the Twelfth Language Resources and Evaluation Conference

We perform a comparative study for automatic term extraction from domain-specific language using a PageRank model with different edge-weighting methods. We vary vector space representations within the PageRank graph algorithm, and we go beyond standard co-occurrence and investigate the influence of measures of association strength and first- vs. second-order co-occurrence. In addition, we incorporate meaning shifts from general to domain-specific language as personalized vectors, in order to distinguish between termhood strengths of ambiguous words across word senses. Our study is performed for two domain-specific English corpora: ACL and do-it-yourself (DIY); and a domain-specific German corpus: cooking. The models are assessed by applying average precision and the ROC score as evaluation metrics.
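
The core scoring step can be sketched as follows (a toy co-occurrence graph and invented meaning-shift weights, purely for illustration):

```python
# Personalized PageRank for term extraction: candidate words with stronger
# general-to-domain meaning shifts receive higher personalization weights.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("dough", "knead", 3.0), ("dough", "yeast", 2.0),
    ("knead", "yeast", 1.0), ("oven", "dough", 1.5),
    ("movie", "oven", 0.2),
])

# Larger weight = stronger domain-specific meaning shift for that word.
meaning_shift = {"dough": 0.9, "knead": 0.7, "yeast": 0.8, "oven": 0.4, "movie": 0.1}

scores = nx.pagerank(G, weight="weight", personalization=meaning_shift)
for term, score in sorted(scores.items(), key=lambda x: -x[1]):
    print(f"{term:8s} {score:.3f}")
```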

CCOHA: Clean Corpus of Historical American English
Reem Alatrash | Dominik Schlechtweg | Jonas Kuhn | Sabine Schulte im Walde
Proceedings of the Twelfth Language Resources and Evaluation Conference

Modelling language change is an increasingly important area of interest within the fields of sociolinguistics and historical linguistics. In recent years, there has been a growing number of publications whose main concern is studying changes that have occurred within the past centuries. The Corpus of Historical American English (COHA) is one of the most commonly used large corpora in diachronic studies in English. This paper describes methods applied to the downloadable version of the COHA corpus in order to overcome its main limitations, such as inconsistent lemmas and malformed tokens, without compromising its qualitative and distributional properties. The resulting corpus CCOHA contains a larger number of cleaned word tokens which can offer better insights into language change and allow for a larger variety of tasks to be performed.

Predicting Degrees of Technicality in Automatic Terminology Extraction
Anna Hätty | Dominik Schlechtweg | Michael Dorna | Sabine Schulte im Walde
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

While automatic term extraction is a well-researched area, computational approaches to distinguish between degrees of technicality are still understudied. We semi-automatically create a German gold standard of technicality across four domains, and illustrate the impact of a web-crawled general-language corpus on technicality prediction. When defining a classification approach that combines general-language and domain-specific word embeddings, we go beyond previous work and align vector spaces to gain comparative embeddings. We suggest two novel models to exploit general- vs. domain-specific comparisons: a simple neural network model with pre-computed comparative-embedding information as input, and a multi-channel model computing the comparison internally. Both models outperform previous approaches, with the multi-channel model performing best.
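
One way to obtain such comparative embeddings is sketched below, under the assumption of an orthogonal Procrustes mapping estimated on shared anchor words (random toy vectors, not the paper's trained models):

```python
# Align domain-specific embeddings to the general-language space so that the
# two vectors of a word become directly comparable classifier features.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(3)
anchor_words = [f"w{i}" for i in range(500)]
general = {w: rng.normal(size=100) for w in anchor_words}
domain = {w: rng.normal(size=100) for w in anchor_words}

A = np.vstack([domain[w] for w in anchor_words])
B = np.vstack([general[w] for w in anchor_words])
R, _ = orthogonal_procrustes(A, B)          # R minimizes ||A @ R - B||

def comparative_features(word):
    """Concatenate the aligned domain vector and the general vector."""
    return np.concatenate([domain[word] @ R, general[word]])

print(comparative_features("w0").shape)     # (200,)
```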

IMS at SemEval-2020 Task 1: How Low Can You Go? Dimensionality in Lexical Semantic Change Detection
Jens Kaiser | Dominik Schlechtweg | Sean Papay | Sabine Schulte im Walde
Proceedings of the Fourteenth Workshop on Semantic Evaluation

We present the results of our system for SemEval-2020 Task 1 that exploits a commonly used lexical semantic change detection model based on Skip-Gram with Negative Sampling. Our system focuses on Vector Initialization (VI) alignment, compares VI to the currently top-ranking models for Subtask 2 and demonstrates that these can be outperformed if we optimize VI dimensionality. We demonstrate that differences in performance can largely be attributed to model-specific sources of noise, and we reveal a strong relationship between dimensionality and frequency-induced noise in VI alignment. Our results suggest that lexical semantic change models integrating vector space alignment should pay more attention to the role of the dimensionality parameter.
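
A minimal gensim sketch of Vector Initialization alignment (gensim 4.x API assumed; corpora and hyperparameters are toy stand-ins): vectors trained on the earlier corpus initialize the model for the later corpus, so that per-word cosine distance between the two spaces can serve as a change score.

```python
# SGNS with Vector Initialization (VI) alignment across two time periods.
import numpy as np
from gensim.models import Word2Vec

corpus_old = [["the", "plane", "is", "flat"], ["a", "flat", "surface"]] * 50
corpus_new = [["the", "plane", "takes", "off"], ["a", "flat", "surface"]] * 50

dim = 50   # the dimensionality parameter examined above
m_old = Word2Vec(corpus_old, vector_size=dim, sg=1, negative=5, min_count=1, seed=0)

m_new = Word2Vec(vector_size=dim, sg=1, negative=5, min_count=1, seed=0)
m_new.build_vocab(corpus_new)
for word, idx in m_new.wv.key_to_index.items():     # VI: copy shared vectors
    if word in m_old.wv:
        m_new.wv.vectors[idx] = m_old.wv[word]
m_new.train(corpus_new, total_examples=m_new.corpus_count, epochs=m_new.epochs)

def change(word):
    a, b = m_old.wv[word], m_new.wv[word]
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(f"plane: {change('plane'):.3f}   flat: {change('flat'):.3f}")
```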

2019

SURel: A Gold Standard for Incorporating Meaning Shifts into Term Extraction
Anna Hätty | Dominik Schlechtweg | Sabine Schulte im Walde
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)

We introduce SURel, a novel dataset with human-annotated meaning shifts between general-language and domain-specific contexts. We show that meaning shifts of term candidates cause errors in term extraction, and demonstrate that the SURel annotation reflects these errors. Furthermore, we illustrate that SURel enables us to assess optimisations of term extraction techniques when incorporating meaning shifts.

A Wind of Change: Detecting and Evaluating Lexical Semantic Change across Times and Domains
Dominik Schlechtweg | Anna Hätty | Marco Del Tredici | Sabine Schulte im Walde
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

We perform an interdisciplinary large-scale evaluation for detecting lexical semantic divergences in a diachronic and in a synchronic task: semantic sense changes across time, and semantic sense changes across domains. Our work addresses the superficialness and lack of comparison in assessing models of diachronic lexical change, by bringing together and extending benchmark models on a common state-of-the-art evaluation task. In addition, we demonstrate that the same evaluation task and modelling approaches can successfully be utilised for the synchronic detection of domain-specific sense divergences in the field of term extraction.

You Shall Know a User by the Company It Keeps: Dynamic Representations for Social Media Users in NLP
Marco Del Tredici | Diego Marcheggiani | Sabine Schulte im Walde | Raquel Fernández
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Information about individuals can help to better understand what they say, particularly in social media where texts are short. Current approaches to modelling social media users pay attention to their social connections, but exploit this information in a static way, treating all connections uniformly. This ignores the fact, well known in sociolinguistics, that an individual may be part of several communities which are not equally relevant in all communicative situations. We present a model based on Graph Attention Networks that captures this observation. It dynamically explores the social graph of a user, computes a user representation given the most relevant connections for a target task, and combines it with linguistic information to make a prediction. We apply our model to three different tasks, evaluate it against alternative models, and analyse the results extensively, showing that it significantly outperforms other current methods.

Distributional Interaction of Concreteness and Abstractness in Verb–Noun Subcategorisation
Diego Frassinelli | Sabine Schulte im Walde
Proceedings of the 13th International Conference on Computational Semantics - Short Papers

In recent years, both cognitive and computational research has provided empirical analyses of contextual co-occurrence of concrete and abstract words, partially resulting in inconsistent pictures. In this work we provide a more fine-grained description of the distributional nature in the corpus-based interaction of verbs and nouns within subcategorisation, by investigating the concreteness of verbs and nouns that are in a specific syntactic relationship with each other, i.e., subject, direct object, and prepositional object. Overall, our experiments show consistent patterns in the distributional representation of subcategorising and subcategorised concrete and abstract words. At the same time, the studies reveal empirical evidence why contextual abstractness represents a valuable indicator for automatic non-literal language identification.

Second-order Co-occurrence Sensitivity of Skip-Gram with Negative Sampling
Dominik Schlechtweg | Cennet Oguz | Sabine Schulte im Walde
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

We simulate first- and second-order context overlap and show that Skip-Gram with Negative Sampling is similar to Singular Value Decomposition in capturing second-order co-occurrence information, while Pointwise Mutual Information is agnostic to it. We support the results with an empirical study finding that the models react differently when provided with additional second-order information. Our findings reveal a basic property of Skip-Gram with Negative Sampling and point towards an explanation of its success on a variety of tasks.
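
A small numpy sketch of the contrast at issue (toy counts in which "word c" shares a context profile with "word a"): raw PPMI rows encode first-order co-occurrence only, while their SVD reduction also reflects second-order overlap between context distributions.

```python
# First- vs. second-order information: compare similarities computed on a raw
# PPMI matrix with similarities computed on its SVD-reduced counterpart.
import numpy as np

counts = np.array([
    # contexts:  c1  c2  c3  c4
    [10,  0,  5,  0],   # word a
    [ 0, 10,  5,  0],   # word b (barely overlaps with a)
    [ 9,  1,  5,  0],   # word c (context profile similar to a)
], dtype=float)

total = counts.sum()
p_wc = counts / total
p_w = counts.sum(axis=1, keepdims=True) / total
p_c = counts.sum(axis=0, keepdims=True) / total
ppmi = np.maximum(np.log((p_wc + 1e-12) / (p_w * p_c)), 0)

u, s, _ = np.linalg.svd(ppmi, full_matrices=False)
reduced = u[:, :2] * s[:2]                   # SVD-reduced word vectors

def cos(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

print("PPMI    sim(a, c):", round(cos(ppmi[0], ppmi[2]), 3))
print("reduced sim(a, c):", round(cos(reduced[0], reduced[2]), 3))
```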

2018

Integrating Predictions from Neural-Network Relation Classifiers into Coreference and Bridging Resolution
Ina Roesiger | Maximilian Köper | Kim Anh Nguyen | Sabine Schulte im Walde
Proceedings of the First Workshop on Computational Models of Reference, Anaphora and Coreference

Cases of coreference and bridging resolution often require knowledge about semantic relations between anaphors and antecedents. We suggest state-of-the-art neural-network classifiers trained on relation benchmarks to predict and integrate likelihoods for relations. Two experiments with representations differing in noise and complexity improve our bridging but not our coreference resolver.

Fine-Grained Termhood Prediction for German Compound Terms Using Neural Networks
Anna Hätty | Sabine Schulte im Walde
Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)

Automatic term identification and investigating the understandability of terms in a specialized domain are often treated as two separate lines of research. We propose a combined approach for this matter, by defining fine-grained classes of termhood and framing a classification task. The classes reflect tiers of a term’s association to a domain. The new setup is applied to German closed compounds as term candidates in the domain of cooking. For the prediction of the classes, we compare several neural network architectures and also take salient information about the compounds’ components into account. We show that applying a similar class distinction to the compounds’ components and propagating this information within the network improves the compound class prediction results.

Projecting Embeddings for Domain Adaption: Joint Modeling of Sentiment Analysis in Diverse Domains
Jeremy Barnes | Roman Klinger | Sabine Schulte im Walde
Proceedings of the 27th International Conference on Computational Linguistics

Domain adaptation for sentiment analysis is challenging due to the fact that supervised classifiers are very sensitive to changes in domain. The two most prominent approaches to this problem are structural correspondence learning and autoencoders. However, they either require long training times or suffer greatly on highly divergent domains. Inspired by recent advances in cross-lingual sentiment analysis, we provide a novel perspective and cast the domain adaptation problem as an embedding projection task. Our model takes as input two mono-domain embedding spaces and learns to project them to a bi-domain space, which is jointly optimized to (1) project across domains and to (2) predict sentiment. We perform domain adaptation experiments on 20 source-target domain pairs for sentiment classification and report novel state-of-the-art results on 11 domain pairs, including the Amazon domain adaptation datasets and SemEval 2013 and 2016 datasets. Our analysis shows that our model performs comparably to state-of-the-art approaches on domains that are similar, while performing significantly better on highly divergent domains. Our code is available at https://github.com/jbarnesspain/domain_blse
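
A rough PyTorch sketch of the joint objective described above (an assumption-laden toy, not the released implementation linked in the abstract): two projections are trained with an alignment loss on pivot pairs and a sentiment loss on projected source sentences.

```python
# Joint embedding projection: align two mono-domain spaces in a shared space
# while simultaneously learning to predict sentiment in that space.
import torch
import torch.nn as nn

class JointProjection(nn.Module):
    def __init__(self, dim_src, dim_trg, dim_joint, n_classes=2):
        super().__init__()
        self.proj_src = nn.Linear(dim_src, dim_joint)
        self.proj_trg = nn.Linear(dim_trg, dim_joint)
        self.clf = nn.Linear(dim_joint, n_classes)

    def forward(self, x_src, x_trg, x_sent):
        # (1) projection loss on pivot word pairs, (2) sentiment logits for
        # source sentences represented as the mean of projected word vectors
        align = nn.functional.mse_loss(self.proj_src(x_src), self.proj_trg(x_trg))
        logits = self.clf(self.proj_src(x_sent).mean(dim=1))
        return align, logits

model = JointProjection(dim_src=300, dim_trg=300, dim_joint=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# toy batch: 32 pivot pairs, 8 source sentences of 10 tokens, binary labels
x_src, x_trg = torch.randn(32, 300), torch.randn(32, 300)
x_sent, y = torch.randn(8, 10, 300), torch.randint(0, 2, (8,))

align, logits = model(x_src, x_trg, x_sent)
loss = align + nn.functional.cross_entropy(logits, y)
loss.backward()
opt.step()
```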

Bilingual Sentiment Embeddings: Joint Projection of Sentiment Across Languages
Jeremy Barnes | Roman Klinger | Sabine Schulte im Walde
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Sentiment analysis in low-resource languages suffers from a lack of annotated corpora to estimate high-performing models. Machine translation and bilingual word embeddings provide some relief through cross-lingual sentiment approaches. However, they either require large amounts of parallel data or do not sufficiently capture sentiment information. We introduce Bilingual Sentiment Embeddings (BLSE), which jointly represent sentiment information in a source and target language. This model only requires a small bilingual lexicon, a source-language corpus annotated for sentiment, and monolingual word embeddings for each language. We perform experiments on three language combinations (Spanish, Catalan, Basque) for sentence-level cross-lingual sentiment classification and find that our model significantly outperforms state-of-the-art methods on four out of six experimental setups, as well as capturing complementary information to machine translation. Our analysis of the resulting embedding space provides evidence that it represents sentiment information in the resource-poor target language without any annotated data in that language.

Analogies in Complex Verb Meaning Shifts: the Effect of Affect in Semantic Similarity Models
Maximilian Köper | Sabine Schulte im Walde
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

We present a computational model to detect and distinguish analogies in meaning shifts between German base and complex verbs. In contrast to corpus-based studies, a novel dataset demonstrates that “regular” shifts represent the smallest class. Classification experiments relying on a standard similarity model successfully distinguish between four types of shifts, with verb classes boosting the performance, and affective features for abstractness, emotion and sentiment representing the most salient indicators.

Diachronic Usage Relatedness (DURel): A Framework for the Annotation of Lexical Semantic Change
Dominik Schlechtweg | Sabine Schulte im Walde | Stefanie Eckmann
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

We propose a framework that extends synchronic polysemy annotation to diachronic changes in lexical meaning, to counteract the lack of resources for evaluating computational models of lexical semantic change. Our framework exploits an intuitive notion of semantic relatedness, and distinguishes between innovative and reductive meaning changes with high inter-annotator agreement. The resulting test set for German comprises ratings from five annotators for the relatedness of 1,320 use pairs across 22 target words.

Introducing Two Vietnamese Datasets for Evaluating Semantic Models of (Dis-)Similarity and Relatedness
Kim Anh Nguyen | Sabine Schulte im Walde | Ngoc Thang Vu
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

We present two novel datasets for the low-resource language Vietnamese to assess models of semantic similarity: ViCon comprises pairs of synonyms and antonyms across word classes, thus offering data to distinguish between similarity and dissimilarity. ViSim-400 provides degrees of similarity across five semantic relations, as rated by human judges. The two datasets are verified through standard co-occurrence and neural network models, showing results comparable to the respective English datasets.

A Laypeople Study on Terminology Identification across Domains and Task Definitions
Anna Hätty | Sabine Schulte im Walde
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

This paper introduces a new dataset of term annotation. Given that even experts vary significantly in their understanding of termhood, and that term identification is mostly performed as a binary task, we offer a novel perspective to explore the common, natural understanding of what constitutes a term: Laypeople annotate single-word and multi-word terms, across four domains and across four task definitions. Analyses based on inter-annotator agreement offer insights into differences in term specificity, term granularity and subtermhood.

Combining Abstractness and Language-specific Theoretical Indicators for Detecting Non-Literal Usage of Estonian Particle Verbs
Eleri Aedmaa | Maximilian Köper | Sabine Schulte im Walde
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop

This paper presents two novel datasets and a random-forest classifier to automatically predict literal vs. non-literal language usage for a highly frequent type of multi-word expression in a low-resource language, i.e., Estonian. We demonstrate the value of language-specific indicators induced from theoretical linguistic research, which outperform a high majority baseline when combined with language-independent features of non-literal language (such as abstractness).

Assessing Meaning Components in German Complex Verbs: A Collection of Source-Target Domains and Directionality
Sabine Schulte im Walde | Maximilian Köper | Sylvia Springorum
Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics

This paper presents a collection to assess meaning components in German complex verbs, which frequently undergo meaning shifts. We use a novel strategy to obtain source and target domain characterisations via sentence generation rather than sentence annotation. A selection of arrows adds spatial directional information to the generated contexts. We provide a broad qualitative description of the dataset, and a series of standard classification experiments verifies the quantitative reliability of the presented resource. The setup for collecting the meaning components is applicable also to other languages, regarding complex verbs as well as other language-specific targets that involve meaning shifts.

Quantitative Semantic Variation in the Contexts of Concrete and Abstract Words
Daniela Naumann | Diego Frassinelli | Sabine Schulte im Walde
Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics

Across disciplines, researchers are eager to gain insight into empirical features of abstract vs. concrete concepts. In this work, we provide a detailed characterisation of the distributional nature of abstract and concrete words across 16,620 English nouns, verbs and adjectives. Specifically, we investigate the following questions: (1) What is the distribution of concreteness in the contexts of concrete and abstract target words? (2) What are the differences between concrete and abstract words in terms of contextual semantic diversity? (3) How does the entropy of concrete and abstract word contexts differ? Overall, our studies show consistent differences in the distributional representation of concrete and abstract words, thus challenging existing theories of cognition and providing a more fine-grained description of their nature.

2017

Factoring Ambiguity out of the Prediction of Compositionality for German Multi-Word Expressions
Stefan Bott | Sabine Schulte im Walde
Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017)

Ambiguity represents an obstacle for distributional semantic models (DSMs), which typically subsume the contexts of all word senses within one vector. While individual vector space approaches have been concerned with sense discrimination (e.g., Schütze 1998, Erk 2009, Erk and Pado 2010), such discrimination has rarely been integrated into DSMs across semantic tasks. This paper presents a soft-clustering approach to sense discrimination that filters sense-irrelevant features when predicting the degrees of compositionality for German noun-noun compounds and German particle verbs.

Complex Verbs are Different: Exploring the Visual Modality in Multi-Modal Models to Predict Compositionality
Maximilian Köper | Sabine Schulte im Walde
Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017)

This paper compares a neural network DSM relying on textual co-occurrences with a multi-modal model integrating visual information. We focus on nominal vs. verbal compounds, and zoom into lexical, empirical and perceptual target properties to explore the contribution of the visual modality. Our experiments show that (i) visual features contribute differently for verbs than for nouns, and (ii) images complement textual information, if (a) the textual modality by itself is poor and appropriate image subsets are used, or (b) the textual modality by itself is rich and large (potentially noisy) images are added.

Improving Verb Metaphor Detection by Propagating Abstractness to Words, Phrases and Individual Senses
Maximilian Köper | Sabine Schulte im Walde
Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications

Abstract words refer to things that cannot be seen, heard, felt, smelled, or tasted, as opposed to concrete words. Among other applications, the degree of abstractness has been shown to be useful information for metaphor detection. Our contributions to this topic are as follows: (i) we compare supervised techniques to learn and extend abstractness ratings for huge vocabularies; (ii) we learn and investigate norms for larger units by propagating abstractness to verb-noun pairs, which leads to better metaphor detection; (iii) we overcome the limitation of learning a single rating per word and show that multi-sense abstractness ratings are potentially useful for metaphor detection. Finally, with this paper we publish automatically created abstractness norms for 3 million English words and multi-words, as well as automatically created sense-specific abstractness ratings.
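
The propagation steps can be sketched roughly as follows (toy vectors and seed ratings; the Ridge regressor and the averaging scheme are illustrative assumptions, not necessarily the paper's exact method):

```python
# (i) Extend word-level abstractness ratings to unrated words via a regressor
# over embeddings; (ii) derive phrase-level ratings for verb-noun pairs.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
vocab = [f"word{i}" for i in range(1000)]
emb = {w: rng.normal(size=100) for w in vocab}
rated = {w: rng.uniform(1, 7) for w in vocab[:300]}   # seed norms (1=abstract, 7=concrete)

X = np.vstack([emb[w] for w in rated])
y = np.array(list(rated.values()))
reg = Ridge(alpha=1.0).fit(X, y)

extended = dict(rated)
for w in vocab:
    if w not in extended:
        extended[w] = float(reg.predict(emb[w][None, :])[0])

def pair_abstractness(verb, noun):
    """Phrase-level abstractness for a verb-noun pair."""
    return (extended[verb] + extended[noun]) / 2

print(round(pair_abstractness("word10", "word500"), 2))
```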

pdf bib
Assessing State-of-the-Art Sentiment Models on State-of-the-Art Sentiment Datasets
Jeremy Barnes | Roman Klinger | Sabine Schulte im Walde
Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis

There has been substantial progress in sentiment analysis over the past 10 years, including the proposal of new methods and the creation of benchmark datasets. In some papers, however, models are compared only on one or two datasets, either because of time constraints or because the model is tailored to a specific task. Accordingly, it is hard to understand how well a certain model generalizes across different tasks and datasets. In this paper, we address this situation by comparing several models on six different benchmarks, which belong to different domains and additionally have different levels of granularity (binary, 3-class, 4-class and 5-class). We show that Bi-LSTMs perform well across datasets and that both LSTMs and Bi-LSTMs are particularly good at fine-grained sentiment tasks (i.e., with more than two classes). Incorporating sentiment information into word embeddings during training gives good results for datasets that are lexically similar to the training data. With our experiments, we contribute to a better understanding of the performance of different model architectures on different datasets. Consequently, we detect novel state-of-the-art results on the SenTube datasets.

pdf
Exploring Soft-Clustering for German (Particle) Verbs across Frequency Ranges
Moritz Wittmann | Maximilian Köper | Sabine Schulte im Walde
Proceedings of the 12th International Conference on Computational Semantics (IWCS) — Short papers

pdf bib
Exploring Multi-Modal Text+Image Models to Distinguish between Abstract and Concrete Nouns
Sai Abishek Bhaskar | Maximilian Köper | Sabine Schulte Im Walde | Diego Frassinelli
Proceedings of the IWCS workshop on Foundations of Situated and Multimodal Communication

pdf
Hierarchical Embeddings for Hypernymy Detection and Directionality
Kim Anh Nguyen | Maximilian Köper | Sabine Schulte im Walde | Ngoc Thang Vu
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We present a novel neural model HyperVec to learn hierarchical embeddings for hypernymy detection and directionality. While previous embeddings have shown limitations on prototypical hypernyms, HyperVec represents an unsupervised measure where embeddings are learned in a specific order and capture the hypernym–hyponym distributional hierarchy. Moreover, our model is able to generalize over unseen hypernymy pairs, when using only small sets of training data, and by mapping to other languages. Results on benchmark datasets show that HyperVec outperforms both state-of-the-art unsupervised measures and embedding models on hypernymy detection and directionality, and on predicting graded lexical entailment.

pdf
Distinguishing Antonyms and Synonyms in a Pattern-based Neural Network
Kim Anh Nguyen | Sabine Schulte im Walde | Ngoc Thang Vu
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers

Distinguishing between antonyms and synonyms is a key task to achieve high performance in NLP systems. While they are notoriously difficult to distinguish by distributional co-occurrence models, pattern-based methods have proven effective to differentiate between the relations. In this paper, we present a novel neural network model AntSynNET that exploits lexico-syntactic patterns from syntactic parse trees. In addition to the lexical and syntactic information, we successfully integrate the distance between the related words along the syntactic path as a new pattern feature. The results from classification experiments show that AntSynNET improves the performance over prior pattern-based methods.

pdf
Applying Multi-Sense Embeddings for German Verbs to Determine Semantic Relatedness and to Detect Non-Literal Language
Maximilian Köper | Sabine Schulte im Walde
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

To date, the majority of computational models still determine the semantic relatedness between words (or larger linguistic units) on the type level. In this paper, we compare and extend multi-sense embeddings in order to model and utilise word senses on the token level. We focus on the challenging class of complex verbs and evaluate the model variants on various semantic tasks: semantic classification, predicting compositionality, and detecting non-literal language usage. While there is no overall best model, all models significantly outperform a word2vec single-sense skip-gram baseline, thus demonstrating the need to distinguish between word senses in a distributional semantic model.

pdf
Addressing Problems across Linguistic Levels in SMT: Combining Approaches to Model Morphology, Syntax and Lexical Choice
Marion Weller-Di Marco | Alexander Fraser | Sabine Schulte im Walde
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

Many errors in phrase-based SMT can be attributed to problems on three linguistic levels: morphological complexity in the target language, structural differences and lexical choice. We explore combinations of linguistically motivated approaches to address these problems in English-to-German SMT and show that they are complementary to one another, but also that the popular verbal pre-ordering can cause problems on the morphological and lexical level. A discriminative classifier can overcome these problems, in particular when enriching standard lexical features with features geared towards verbal inflection.

pdf
Evaluating the Reliability and Interaction of Recursively Used Feature Classes for Terminology Extraction
Anna Hätty | Michael Dorna | Sabine Schulte im Walde
Proceedings of the Student Research Workshop at the 15th Conference of the European Chapter of the Association for Computational Linguistics

Feature design and selection is a crucial aspect when treating terminology extraction as a machine learning classification problem. We designed feature classes which characterize different properties of terms based on distributions, and propose a new feature class for components of term candidates. By using random forests, we infer optimal features which are later used to build decision tree classifiers. We evaluate our method using the ACL RD-TEC dataset. We demonstrate the importance of the novel feature class for downgrading termhood which exploits properties of term components. Furthermore, our classification suggests that the identification of reliable term candidates should be performed successively, rather than just once.

pdf
German in Flux: Detecting Metaphoric Change via Word Entropy
Dominik Schlechtweg | Stefanie Eckmann | Enrico Santus | Sabine Schulte im Walde | Daniel Hole
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)

This paper explores the information-theoretic measure entropy to detect metaphoric change, transferring ideas from hypernym detection to research on language change. We build the first diachronic test set for German as a standard for metaphoric change annotation. Our model is unsupervised, language-independent and generalizable to other processes of semantic change.

2016

pdf
Graph-based Clustering of Synonym Senses for German Particle Verbs
Moritz Wittmann | Marion Weller-Di Marco | Sabine Schulte im Walde
Proceedings of the 12th Workshop on Multiword Expressions

pdf
Modeling Complement Types in Phrase-Based SMT
Marion Weller-Di Marco | Alexander Fraser | Sabine Schulte im Walde
Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers

pdf
GhoSt-PV: A Representative Gold Standard of German Particle Verbs
Stefan Bott | Nana Khvtisavrishvili | Max Kisselew | Sabine Schulte im Walde
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon (CogALex - V)

German particle verbs represent a frequent type of multi-word expression that forms a highly productive paradigm in the lexicon. Similarly to other multi-word expressions, particle verbs exhibit various levels of compositionality. One of the major obstacles for the study of compositionality is the lack of representative gold standards of human ratings. In order to address this bottleneck, this paper presents such a gold standard data set containing 400 randomly selected German particle verbs. It is balanced across several particle types and three frequency bands, and is annotated with human ratings on the degree of semantic compositionality.

pdf
Automatic Semantic Classification of German Preposition Types: Comparing Hard and Soft Clustering Approaches across Features
Maximilian Köper | Sabine Schulte im Walde
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf
Integrating Distributional Lexical Contrast into Word Embeddings for Antonym-Synonym Distinction
Kim Anh Nguyen | Sabine Schulte im Walde | Ngoc Thang Vu
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf
Improving Zero-Shot-Learning for German Particle Verbs by using Training-Space Restrictions and Local Scaling
Maximilian Köper | Sabine Schulte im Walde | Max Kisselew | Sebastian Padó
Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics

pdf
The Role of Modifier and Head Properties in Predicting the Compositionality of English and German Noun-Noun Compounds: A Vector-Space Perspective
Sabine Schulte im Walde | Anna Hätty | Stefan Bott
Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics

pdf
Distinguishing Literal and Non-Literal Usage of German Particle Verbs
Maximilian Köper | Sabine Schulte im Walde
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Visualisation and Exploration of High-Dimensional Distributional Features in Lexical Semantic Classification
Maximilian Köper | Melanie Zaiß | Qi Han | Steffen Koch | Sabine Schulte im Walde
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Vector space models and distributional information are widely used in NLP. The models typically rely on complex, high-dimensional objects. We present an interactive visualisation tool to explore salient lexical-semantic features of high-dimensional word objects and word similarities. Most visualisation tools provide only one low-dimensional map of the underlying data, so they are not capable of retaining the local and the global structure. We overcome this limitation by providing an additional trust-view to obtain a more realistic picture of the actual object distances. Additional tool options include the reference to a gold standard classification, the reference to a cluster analysis as well as listing the most salient (common) features for a selected subset of the words.

pdf
GhoSt-NN: A Representative Gold Standard of German Noun-Noun Compounds
Sabine Schulte im Walde | Anna Hätty | Stefan Bott | Nana Khvtisavrishvili
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

This paper presents a novel gold standard of German noun-noun compounds (GhoSt-NN) including 868 compounds annotated with corpus frequencies of the compounds and their constituents, productivity and ambiguity of the constituents, semantic relations between the constituents, and compositionality ratings of compound-constituent pairs. Moreover, a subset of 180 compounds is balanced for the productivity of the modifiers (distinguishing low/mid/high productivity) and the ambiguity of the heads (distinguishing between heads with 1, 2 and >2 senses).

pdf
Automatically Generated Affective Norms of Abstractness, Arousal, Imageability and Valence for 350 000 German Lemmas
Maximilian Köper | Sabine Schulte im Walde
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

This paper presents a collection of 350,000 German lemmatised words, rated on four psycholinguistic affective attributes. All ratings were obtained via a supervised learning algorithm that can automatically calculate a numerical rating of a word. We applied this algorithm to abstractness, arousal, imageability and valence. Comparison with human ratings reveals high correlation across all rating types. The full resource is publicly available at: http://www.ims.uni-stuttgart.de/data/affective_norms/
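The general idea of such supervised rating extrapolation can be sketched as follows; the regressor, the toy embeddings and the seed ratings below are stand-ins and not the algorithm or data actually used.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
vocab = [f"word{i}" for i in range(1000)]
vectors = {w: rng.normal(size=50) for w in vocab}         # toy word embeddings

seed_words = vocab[:200]                                   # words with "human" ratings
seed_ratings = rng.uniform(1, 10, size=len(seed_words))    # e.g. valence on a 1-10 scale

# Train a regressor from vectors to ratings, then extrapolate to unseen words.
model = Ridge(alpha=1.0)
model.fit(np.stack([vectors[w] for w in seed_words]), seed_ratings)

unseen = vocab[200:]
predicted = model.predict(np.stack([vectors[w] for w in unseen]))
print(unseen[0], round(float(predicted[0]), 2))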

pdf
Neural-based Noise Filtering from Word Embeddings
Kim Anh Nguyen | Sabine Schulte im Walde | Ngoc Thang Vu
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Word embeddings have been demonstrated to benefit NLP tasks substantially. Yet, there is room for improvement in the vector representations, because current word embeddings typically contain unnecessary information, i.e., noise. We propose two novel models that improve word embeddings via unsupervised learning, yielding word denoising embeddings. The denoising embeddings are obtained by strengthening salient information and weakening noise in the original word embeddings, based on a deep feed-forward neural network filter. Results on benchmark tasks show that the filtered word denoising embeddings outperform the original word embeddings.
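As a very rough sketch of a feed-forward filtering idea (the paper's actual architecture, objective and data will differ), the following PyTorch snippet learns to reconstruct toy embeddings through a narrower hidden layer, which tends to suppress low-salience directions.

import torch
import torch.nn as nn

torch.manual_seed(0)
emb = torch.randn(500, 100)            # 500 toy word vectors, 100 dimensions

filter_net = nn.Sequential(
    nn.Linear(100, 40), nn.Tanh(),     # compress: keep salient structure
    nn.Linear(40, 100),                # expand back to embedding space
)
opt = torch.optim.Adam(filter_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(filter_net(emb), emb)   # reconstruct the original vectors
    loss.backward()
    opt.step()

denoised = filter_net(emb).detach()    # "filtered" embeddings
print(denoised.shape)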

2015

pdf
Exploiting Fine-grained Syntactic Transfer Features to Predict the Compositionality of German Particle Verbs
Stefan Bott | Sabine Schulte im Walde
Proceedings of the 11th International Conference on Computational Semantics

pdf
Multilingual Reliability and “Semantic” Structure of Continuous Word Spaces
Maximilian Köper | Christian Scheible | Sabine Schulte im Walde
Proceedings of the 11th International Conference on Computational Semantics

pdf
How to Account for Idiomatic German Support Verb Constructions in Statistical Machine Translation
Fabienne Cap | Manju Nirmal | Marion Weller | Sabine Schulte im Walde
Proceedings of the 11th Workshop on Multiword Expressions

pdf
Predicting Prepositions for SMT
Marion Weller | Alexander Fraser | Sabine Schulte im Walde
Proceedings of the Ninth Workshop on Syntax, Semantics and Structure in Statistical Translation

pdf
Target-Side Generation of Prepositions for SMT
Marion Weller | Alexander Fraser | Sabine Schulte im Walde
Proceedings of the 18th Annual Conference of the European Association for Machine Translation

2014

pdf
Automatic Extraction of Synonyms for German Particle Verbs from Parallel Data with Distributional Similarity as a Re-Ranking Feature
Moritz Wittmann | Marion Weller | Sabine Schulte im Walde
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

We present a method for the extraction of synonyms for German particle verbs based on a word-aligned German-English parallel corpus: by translating the particle verb to a pivot, which is then translated back, a set of synonym candidates can be extracted and ranked according to the respective translation probabilities. In order to deal with separated particle verbs, we apply re-ordering rules to the German part of the data. In our evaluation against a gold standard, we compare different pre-processing strategies (lemmatized vs. inflected forms) and introduce language model scores of synonym candidates in the context of the input particle verb as well as distributional similarity as additional re-ranking criteria. Our evaluation shows that distributional similarity as a re-ranking feature is more robust than language model scores and leads to an improved ranking of the synonym candidates. In addition to evaluating against a gold standard, we also present a small-scale manual evaluation.
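The pivot-based extraction step can be illustrated with a minimal sketch; the translation probabilities below are invented, and the scoring (summing products of forward and backward probabilities) is a plausible simplification rather than the paper's exact ranking.

from collections import defaultdict

# Invented lexical translation probabilities.
de_to_en = {"anfangen": {"begin": 0.6, "start": 0.4}}
en_to_de = {"begin": {"anfangen": 0.5, "beginnen": 0.4, "starten": 0.1},
            "start": {"anfangen": 0.4, "starten": 0.4, "loslegen": 0.2}}

def synonym_candidates(pv):
    """Translate to English pivots, translate back, and rank the German
    candidates by the summed product of forward and backward probabilities."""
    scores = defaultdict(float)
    for pivot, p_fwd in de_to_en[pv].items():
        for cand, p_bwd in en_to_de[pivot].items():
            if cand != pv:
                scores[cand] += p_fwd * p_bwd
    return sorted(scores.items(), key=lambda x: -x[1])

print(synonym_candidates("anfangen"))
# beginnen ~ 0.24, starten ~ 0.22, loslegen ~ 0.08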

pdf
A Rank-based Distance Measure to Detect Polysemy and to Determine Salient Vector-Space Features for German Prepositions
Maximilian Köper | Sabine Schulte im Walde
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper addresses vector space models of prepositions, a notoriously ambiguous word class. We propose a rank-based distance measure to explore the vector-spatial properties of the ambiguous objects, focusing on two research tasks: (i) to distinguish polysemous from monosemous prepositions in vector space; and (ii) to determine salient vector-space features for a classification of preposition senses. The rank-based measure predicts the polysemy vs. monosemy of prepositions with a precision of up to 88%, and suggests preposition-subcategorised nouns as more salient preposition features than preposition-subcategorising verbs.

pdf
Fuzzy V-Measure - An Evaluation Method for Cluster Analyses of Ambiguous Data
Jason Utt | Sylvia Springorum | Maximilian Köper | Sabine Schulte im Walde
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper discusses an extension of the V-measure (Rosenberg and Hirschberg, 2007), an entropy-based cluster evaluation metric. While the original work focused on evaluating hard clusterings, we introduce the Fuzzy V-measure which can be used on data that is inherently ambiguous. We perform multiple analyses varying the sizes and ambiguity rates and show that while entropy-based measures in general tend to suffer when ambiguity increases, a measure with desirable properties can be derived from these in a straightforward manner.
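For background, the hard-clustering V-measure that the Fuzzy extension generalises can be computed as in the following sketch (the fuzzy variant itself, which handles soft memberships, is not reproduced here); the class and cluster labels are toy data.

import math
from collections import Counter

def v_measure(classes, clusters):
    """Entropy-based V-measure (Rosenberg & Hirschberg, 2007) for hard
    clusterings: harmonic mean of homogeneity and completeness."""
    N = len(classes)
    n_c = Counter(classes)
    n_k = Counter(clusters)
    n_ck = Counter(zip(classes, clusters))

    def H(counts):  # entropy of a partition
        return -sum((n / N) * math.log(n / N) for n in counts.values())

    H_C, H_K = H(n_c), H(n_k)
    H_C_given_K = -sum((n / N) * math.log(n / n_k[k]) for (c, k), n in n_ck.items())
    H_K_given_C = -sum((n / N) * math.log(n / n_c[c]) for (c, k), n in n_ck.items())

    hom = 1.0 if H_C == 0 else 1.0 - H_C_given_K / H_C
    com = 1.0 if H_K == 0 else 1.0 - H_K_given_C / H_K
    return 0.0 if hom + com == 0 else 2 * hom * com / (hom + com)

# A perfect clustering of a toy gold standard yields V = 1.0
print(v_measure(["a", "a", "b", "b"], [0, 0, 1, 1]))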

pdf
Optimizing a Distributional Semantic Model for the Prediction of German Particle Verb Compositionality
Stefan Bott | Sabine Schulte im Walde
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

In the work presented here, we assess the degree of compositionality of German particle verbs with a distributional semantics model that relies only on word window information and has no access to syntactic information as such. Our method takes only the lexical distributional distance between a particle verb and its base verb as a predictor for compositionality. We show that the ranking of distributional similarity correlates significantly with the ranking of human judgements on semantic compositionality for a series of particle verbs and the base verbs they are derived from. We also investigate the influence of further linguistic factors, such as the ambiguity and the overall frequency of the verbs, and the syntactically separated occurrence of verbs and particles, which causes difficulties for the correct lemmatization of particle verbs. We analyse to what extent these factors influence how successfully the compositionality of the particle verbs can be predicted.
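A minimal sketch of this kind of predictor, with invented count vectors and human ratings: cosine similarity between a particle verb and its base verb is correlated against compositionality judgements.

import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented window-based count vectors and compositionality ratings.
pv_vec = {"anfangen": [3, 1, 0, 5], "abnehmen": [1, 4, 2, 0], "aufhoeren": [0, 2, 5, 0]}
bv_vec = {"fangen":   [1, 0, 2, 6], "nehmen":   [2, 6, 3, 1], "hoeren":   [4, 0, 0, 3]}
gold   = {"anfangen": 3.5, "abnehmen": 5.8, "aufhoeren": 1.2}

pairs = [("anfangen", "fangen"), ("abnehmen", "nehmen"), ("aufhoeren", "hoeren")]
predicted = [cosine(np.array(pv_vec[pv]), np.array(bv_vec[bv])) for pv, bv in pairs]
human = [gold[pv] for pv, _ in pairs]

rho, p = spearmanr(predicted, human)
print(round(rho, 2))   # 1.0 on this toy data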

pdf
Combining Word Patterns and Discourse Markers for Paradigmatic Relation Classification
Michael Roth | Sabine Schulte im Walde
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf
Contrasting Syntagmatic and Paradigmatic Relations: Insights from Distributional Semantic Models
Gabriella Lapesa | Stefan Evert | Sabine Schulte im Walde
Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014)

pdf
Syntactic Transfer Patterns of German Particle Verbs and their Impact on Lexical Semantics
Stefan Bott | Sabine Schulte im Walde
Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014)

pdf
Chasing Hypernyms in Vector Spaces with Entropy
Enrico Santus | Alessandro Lenci | Qin Lu | Sabine Schulte im Walde
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers

pdf
Using noun class information to model selectional preferences for translating prepositions in SMT
Marion Weller | Sabine Schulte im Walde | Alexander Fraser
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track

Translating prepositions is a difficult and under-studied problem in SMT. We present a novel method to improve the translation of prepositions by using noun classes to model their selectional preferences. We compare three variants of noun class information: (i) classes induced from the lexical resource GermaNet or obtained from clusterings based on either (ii) window information or (iii) syntactic features. Furthermore, we experiment with PP rule generalization. While we do not significantly improve over the baseline, our results demonstrate that (i) integrating selectional preferences as rigid class annotation in the parse tree is sub-optimal, and that (ii) clusterings based on window co-occurrence are more robust than syntax-based clusters or GermaNet classes for the task of modeling selectional preferences.

pdf
Feature Norms of German Noun Compounds
Stephen Roller | Sabine Schulte im Walde
Proceedings of the 10th Workshop on Multiword Expressions (MWE)

pdf bib
Modelling Regular Subcategorization Changes in German Particle Verbs
Stefan Bott | Sabine Schulte im Walde
Proceedings of the First Workshop on Computational Approaches to Compound Analysis (ComAComA 2014)

pdf
Distinguishing Degrees of Compositionality in Compound Splitting for Statistical Machine Translation
Marion Weller | Fabienne Cap | Stefan Müller | Sabine Schulte im Walde | Alexander Fraser
Proceedings of the First Workshop on Computational Approaches to Compound Analysis (ComAComA 2014)

pdf
A Database of Paradigmatic Semantic Relation Pairs for German Nouns, Verbs, and Adjectives
Silke Scheible | Sabine Schulte im Walde
Proceedings of Workshop on Lexical and Grammatical Resources for Language Processing

2013

pdf
Exploring Vector Space Models to Predict the Compositionality of German Noun-Noun Compounds
Sabine Schulte im Walde | Stefan Müller | Stefan Roller
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity

pdf
A Multimodal LDA Model integrating Textual, Cognitive and Visual Modalities
Stephen Roller | Sabine Schulte im Walde
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf
Regular Meaning Shifts in German Particle Verbs: A Case Study
Sylvia Springorum | Jason Utt | Sabine Schulte im Walde
Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) – Long Papers

pdf
The (Un)expected Effects of Applying Standard Cleansing Models to Human Ratings on Compositionality
Stephen Roller | Sabine Schulte im Walde | Silke Scheible
Proceedings of the 9th Workshop on Multiword Expressions

pdf
Potential and limits of distributional approaches for semantic relatedness
Sabine Schulte in Walde
Proceedings of the Joint Symposium on Semantic Processing. Textual Inference and Structures in Corpora

pdf
Uncovering Distributional Differences between Synonyms and Antonyms in a Word Space Model
Silke Scheible | Sabine Schulte im Walde | Sylvia Springorum
Proceedings of the Sixth International Joint Conference on Natural Language Processing

pdf
Detecting Polysemy in Hard and Soft Cluster Analyses of German Preposition Vector Spaces
Sylvia Springorum | Sabine Schulte im Walde | Jason Utt
Proceedings of the Sixth International Joint Conference on Natural Language Processing

pdf
Using subcategorization knowledge to improve case prediction for translation to German
Marion Weller | Alexander Fraser | Sabine Schulte im Walde
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

pdf
Modeling Regular Polysemy: A Study on the Semantic Classification of Catalan Adjectives
Gemma Boleda | Sabine Schulte im Walde | Toni Badia
Computational Linguistics, Volume 38, Issue 3 - September 2012

pdf
Automatic classification of German an particle verbs
Sylvia Springorum | Sabine Schulte im Walde | Antje Roßdeutscher
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

The current study works at the interface of theoretical and computational linguistics to explore the semantic properties of 'an' particle verbs, i.e., German particle verbs with the particle 'an'. Based on a thorough analysis of the particle verbs from a theoretical point of view, we identified empirical features and performed an automatic semantic classification. A focus of the study was on the mutual benefit of theoretical and empirical perspectives with respect to salient semantic properties of the 'an' particle verbs: (a) how can we transform the theoretical insights into empirical, corpus-based features, (b) to what extent can we replicate the theoretical classification by a machine learning approach, and (c) can the computational analysis in turn deepen our insights into the semantic properties of the particle verbs? The best classification result of 70% correct class assignments was reached through a GermaNet-based generalization of direct object nouns plus a prepositional phrase feature. These particle verb features, in combination with a detailed analysis of the results, at the same time confirmed and enlarged our knowledge about salient properties.

pdf
Association Norms of German Noun Compounds
Sabine Schulte im Walde | Susanne Borgwaldt | Ronny Jauch
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper introduces association norms of German noun compounds as a lexical semantic resource for cognitive and computational linguistics research on compositionality. Based on an existing database of German noun compounds, we collected human associations to the compounds and their constituents in a web experiment. The current study describes the collection process and a part-of-speech analysis of the association resource. In addition, we demonstrate that the associations provide insight into the semantic properties of the compounds, and perform a case study that predicts the degree of compositionality of the compound nouns from the experiment, relying on the norms. Applying a comparatively simple measure of association overlap, we reach a Spearman rank correlation coefficient of rs = 0.5228 (p < .000001) when comparing our predictions with human judgements.
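The flavour of such an overlap measure can be sketched as follows (the exact measure used in the paper may differ); the association sets below are invented.

def association_overlap(compound_assoc, modifier_assoc, head_assoc):
    """Proportion of the compound's associations shared with either constituent."""
    shared = compound_assoc & (modifier_assoc | head_assoc)
    return len(shared) / len(compound_assoc)

# Invented association sets for Ahornblatt ('maple leaf') and its constituents.
ahornblatt = {"Baum", "Herbst", "Kanada", "gruen"}
ahorn      = {"Baum", "Blatt", "Sirup", "Wald"}
blatt      = {"Papier", "Baum", "Herbst", "gruen"}

print(association_overlap(ahornblatt, ahorn, blatt))   # 0.75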

2010

pdf
BabyExp: Constructing a Huge Multimodal Resource to Acquire Commonsense Knowledge Like Children Do
Massimo Poesio | Marco Baroni | Oswald Lanz | Alessandro Lenci | Alexandros Potamianos | Hinrich Schütze | Sabine Schulte im Walde | Luca Surian
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

There is by now widespread agreement that the most realistic way to construct the large-scale commonsense knowledge repositories required by natural language and artificial intelligence applications is by letting machines learn such knowledge from large quantities of data, like humans do. A lot of attention has consequently been paid to the development of increasingly sophisticated machine learning algorithms for knowledge extraction. However, the nature of the input that humans are exposed to while learning commonsense knowledge has received much less attention. The BabyExp project is collecting very dense audio and video recordings of the first 3 years of life of a baby. The corpus constructed in this way will be transcribed with automated techniques and made available to the research community. Moreover, techniques to extract commonsense conceptual knowledge incrementally from these multimodal data are also being explored within the project. The current paper describes BabyExp in general, and presents pilot studies on the feasibility of the automated audio and video transcriptions.

pdf
Comparing Computational Models of Selectional Preferences - Second-order Co-Occurrence vs. Latent Semantic Clusters
Sabine Schulte im Walde
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper presents a comparison of three computational approaches to selectional preferences: (i) an intuitive distributional approach that uses second-order co-occurrence of predicates and complement properties; (ii) an EM-based clustering approach that models the strengths of predicate–noun relationships by latent semantic clusters (Rooth et al., 1999); and (iii) an extension of the latent semantic clusters by incorporating the MDL principle into the EM training, thus explicitly modelling the predicate–noun selectional preferences by WordNet classes (Schulte im Walde et al., 2008). Concerning the distributional approach, we were interested not only in how well the model describes selectional preferences, but moreover which second-order properties are most salient. For example, a typical direct object of the verb 'drink' is usually fluid, might be hot or cold, can be bought, might be bottled, etc. The general question we ask is: what characterises the predicate's restrictions on the semantic realisation of its complements? Our second interest lies in the actual comparison of the models: How does a very simple distributional model compare to much more complex approaches, and which representation of selectional preferences is more appropriate, using (i) second-order properties, (ii) an implicit generalisation of nouns (by clusters), or (iii) an explicit generalisation of nouns by WordNet classes within clusters? We describe various experiments on German data and two evaluations, and demonstrate that the simple distributional model outperforms the more complex cluster-based models in most cases, but does not itself always beat the powerful frequency baseline.
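The second-order intuition from the 'drink' example can be sketched as follows: a verb's preference profile aggregates the (first-order) properties of its observed direct objects. The object lists and property counts below are invented.

from collections import Counter

# Invented corpus observations: direct objects of a verb, and property
# co-occurrence counts for each object noun.
objects_of = {"drink": ["water", "coffee", "beer"]}
properties_of = {"water":  Counter(fluid=9, cold=4, bottled=2),
                 "coffee": Counter(fluid=8, hot=6, bought=3),
                 "beer":   Counter(fluid=7, cold=5, bottled=4)}

def preference_profile(verb):
    """Second-order profile: relative frequency of properties over the
    verb's direct objects."""
    profile = Counter()
    for noun in objects_of[verb]:
        profile.update(properties_of[noun])
    total = sum(profile.values())
    return {prop: round(count / total, 2) for prop, count in profile.items()}

print(preference_profile("drink"))   # 'fluid' dominates, as expected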

2009

pdf bib
Proceedings of the ACL-IJCNLP 2009 Software Demonstrations
Gary Geunbae Lee | Sabine Schulte im Walde
Proceedings of the ACL-IJCNLP 2009 Software Demonstrations

2008

pdf
Evaluating a German Sketch Grammar: A Case Study on Noun Phrase Case
Kremena Ivanova | Ulrich Heid | Sabine Schulte im Walde | Adam Kilgarriff | Jan Pomikálek
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Word sketches are part of the Sketch Engine corpus query system. They represent automatic, corpus-derived summaries of the words’ grammatical and collocational behaviour. Besides the corpus itself, word sketches require a sketch grammar, a regular expression-based shallow grammar over the part-of-speech tags, to extract evidence for the properties of the targeted words from the corpus. The paper presents a sketch grammar for German, a language which is not strictly configurational and which shows a considerable amount of case syncretism, and evaluates its accuracy, which has not been done for other sketch grammars. The evaluation focuses on NP case as a crucial part of the German grammar. We present various versions of NP definitions, thus demonstrating the influence of grammar detail on precision and recall.

pdf
Corpus Co-Occurrence, Dictionary and Wikipedia Entries as Resources for Semantic Relatedness Information
Michael Roth | Sabine Schulte im Walde
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Distributional, corpus-based descriptions have frequently been applied to model aspects of word meaning. However, distributional models that use corpus data as their basis have one well-known disadvantage: even though distributional features based on corpus co-occurrence are often successful in capturing meaning aspects of the words to be described, they generally fail to capture those meaning aspects that refer to world knowledge, because coherent texts rarely spell out information that is presumed to be shared knowledge. The question we ask in this paper is whether dictionary and encyclopaedic resources might complement the distributional information in corpus data and provide the world knowledge that is missing in corpora. As a test case for meaning aspects, we rely on a collection of semantic associates to German verbs and nouns. Our results indicate that a combination of the knowledge resources should be helpful in work on distributional descriptions.

pdf bib
Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics
Ron Artstein | Gemma Boleda | Frank Keller | Sabine Schulte im Walde
Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics

pdf
Combining EM Training and the MDL Principle for an Automatic Verb Classification Incorporating Selectional Preferences
Sabine Schulte im Walde | Christian Hying | Christian Scheible | Helmut Schmid
Proceedings of ACL-08: HLT

2007

pdf
Modelling Polysemy in Adjective Classes by Multi-Label Classification
Gemma Boleda | Sabine Schulte im Walde | Toni Badia
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

pdf
Characterizing Response Types and Revealing Noun Ambiguity in German Association Norms
Alissa Melinger | Sabine Schulte im Walde | Andrea Weber
Proceedings of the Workshop on Making Sense of Sense: Bringing Psycholinguistics and Computational Linguistics Together

pdf
Can Human Verb Associations Help Identify Salient Features for Semantic Verb Classification?
Sabine Schulte im Walde
Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)

pdf bib
Experiments on the Automatic Induction of German Semantic Verb Classes
Sabine Schulte im Walde
Computational Linguistics, Volume 32, Number 2, June 2006

pdf
Human Verb Associations as the Basis for Gold Standard Verb Classes: Validation against GermaNet and FrameNet
Sabine Schulte im Walde
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

We describe a gold standard for semantic verb classes which is based on human associations to verbs. The associations were collected in a web experiment and then applied as verb features in a hierarchical cluster analysis. We claim that the resulting classes represent a theory-independent gold standard classification which covers a variety of semantic verb relations, and whose features can be used to guide the feature selection in automatic processes. To evaluate our claims, the association-based classification is validated against two standard approaches to semantic verb classes, GermaNet and FrameNet.

2005

pdf
Morphology vs. Syntax in Adjective Class Acquisition
Gemma Boleda | Toni Badia | Sabine Schulte im Walde
Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition

pdf
Identifying Semantic Relations and Functional Properties of Human Verb Associations
Sabine Schulte im Walde | Alissa Melinger
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2004

pdf
Identification, Quantitative Description, and Preliminary Distributional Analysis of German Particle Verbs
Sabine Schulte im Walde
Proceedings of the Workshop on Enhancing and Using Electronic Dictionaries

2003

pdf
Experiments on the Choice of Features for Learning Verb Classes
Sabine Schulte im Walde
10th Conference of the European Chapter of the Association for Computational Linguistics

2002

pdf
A Subcategorisation Lexicon for German Verbs induced from a Lexicalised PCFG
Sabine Schulte im Walde
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

pdf
Acquiring Lexical Knowledge for Anaphora Resolution
Massimo Poesio | Tomonori Ishikawa | Sabine Schulte im Walde | Renata Vieira
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

pdf
Spectral Clustering for German Verbs
Chris Brew | Sabine Schulte im Walde
Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002)

pdf
Inducing German Semantic Verb Classes from Purely Syntactic Subcategorisation Information
Sabine Schulte im Walde | Chris Brew
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

2000

pdf
Robust German Noun Chunking With a Probabilistic Context-Free Grammar
Helmut Schmid | Sabine Schulte im Walde
COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics

pdf
Clustering Verbs Semantically According to their Alternation Behaviour
Sabine Schulte im Walde
COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics
