Marco Kuhlmann


2024

pdf bib
Properties and Challenges of LLM-Generated Explanations
Jenny Kunz | Marco Kuhlmann
Proceedings of the Third Workshop on Bridging Human--Computer Interaction and Natural Language Processing

The self-rationalising capabilities of large language models (LLMs) have been explored in restricted settings, using task-specific data sets. However, current LLMs do not (only) rely on specifically annotated data; nonetheless, they frequently explain their outputs. The properties of the generated explanations are influenced by the pre-training corpus and by the target data used for instruction fine-tuning. As the pre-training corpus includes a large amount of human-written explanations “in the wild”, we hypothesise that LLMs adopt common properties of human explanations. By analysing the outputs for a multi-domain instruction fine-tuning data set, we find that generated explanations show selectivity and contain illustrative elements, but are less frequently subjective or misleading. We discuss reasons and consequences of the properties’ presence or absence. In particular, we outline positive and negative implications depending on the goals and user groups of the self-rationalising system.

2023

pdf
On the Generalization Ability of Retrieval-Enhanced Transformers
Tobias Norlund | Ehsan Doostmohammadi | Richard Johansson | Marco Kuhlmann
Findings of the Association for Computational Linguistics: EACL 2023

Recent work on the Retrieval-Enhanced Transformer (RETRO) model has shown impressive results: off-loading memory from trainable weights to a retrieval database can significantly improve language modeling and match the performance of non-retrieval models that are an order of magnitude larger in size. It has been suggested that at least some of this performance gain is due to non-trivial generalization based on both model weights and retrieval. In this paper, we try to better understand the relative contributions of these two components. We find that the performance gains from retrieval to a very large extent originate from overlapping tokens between the database and the test data, suggesting less non-trivial generalization than previously assumed. More generally, our results point to the challenges of evaluating the generalization of retrieval-augmented language models such as RETRO, as even limited token overlap may significantly decrease test-time loss. We release our code and model at https://github.com/TobiasNorlund/retro
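As an illustration of the kind of overlap analysis described above, the following minimal sketch measures the fraction of tokens in a test chunk that also occur in a retrieved neighbour; the chunking, tokenisation, and example strings are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch (not the paper's code): token overlap between a test chunk
# and a retrieved neighbour. Chunking, tokenisation, and the example strings
# are illustrative assumptions.

def token_overlap(test_chunk: list[str], neighbour: list[str]) -> float:
    """Fraction of test-chunk tokens that also occur in the retrieved neighbour."""
    if not test_chunk:
        return 0.0
    neighbour_tokens = set(neighbour)
    return sum(tok in neighbour_tokens for tok in test_chunk) / len(test_chunk)

test_chunk = "the cat sat on the mat".split()
neighbour = "a cat sat on a mat yesterday".split()
print(f"overlap = {token_overlap(test_chunk, neighbour):.2f}")  # overlap = 0.67
```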

pdf
Surface-Based Retrieval Reduces Perplexity of Retrieval-Augmented Language Models
Ehsan Doostmohammadi | Tobias Norlund | Marco Kuhlmann | Richard Johansson
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Augmenting language models with a retrieval mechanism has been shown to significantly improve their performance while keeping the number of parameters low. Retrieval-augmented models commonly rely on a semantic retrieval mechanism based on the similarity between dense representations of the query chunk and potential neighbors. In this paper, we study the state-of-the-art Retro model and observe that its performance gain is better explained by surface-level similarities, such as token overlap. Inspired by this, we replace the semantic retrieval in Retro with a surface-level method based on BM25, obtaining a significant reduction in perplexity. As full BM25 retrieval can be computationally costly for large datasets, we also apply it in a re-ranking scenario, gaining part of the perplexity reduction with minimal computational overhead.
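The surface-level retrieval idea can be sketched with off-the-shelf BM25; the snippet below uses the rank_bm25 package on a toy corpus and only illustrates lexical retrieval in general, not the Retro pipeline or the re-ranking setup from the paper.

```python
# Minimal sketch of surface-level (lexical) retrieval with BM25, using the
# rank_bm25 package. The corpus and query are toy examples, not the paper's
# retrieval database or evaluation data.
from rank_bm25 import BM25Okapi

corpus = [
    "the retrieval database stores chunks of text",
    "dense representations encode semantic similarity",
    "token overlap is a surface level signal",
]
tokenized_corpus = [doc.split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query = "surface level token overlap".split()
scores = bm25.get_scores(query)  # one BM25 score per corpus chunk
best_index = max(range(len(corpus)), key=lambda i: scores[i])
print(corpus[best_index])  # the chunk with the highest lexical overlap with the query
```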

pdf
Bridging the Resource Gap: Exploring the Efficacy of English and Multilingual LLMs for Swedish
Oskar Holmström | Jenny Kunz | Marco Kuhlmann
Proceedings of the Second Workshop on Resources and Representations for Under-Resourced Languages and Domains (RESOURCEFUL-2023)

Large language models (LLMs) have substantially improved natural language processing (NLP) performance, but training these models from scratch is resource-intensive and challenging for smaller languages. With this paper, we want to initiate a discussion on the necessity of language-specific pre-training of LLMs. We propose how the “one model-many models” conceptual framework for task transfer can be applied to language transfer and explore this approach by evaluating the performance of non-Swedish monolingual and multilingual models on tasks in Swedish. Our findings demonstrate that LLMs exposed to limited Swedish during training can be highly capable and transfer competencies from English off-the-shelf, including emergent abilities such as mathematical reasoning, while at the same time showing distinct culturally adapted behaviour. Our results suggest that there are resourceful alternatives to language-specific pre-training when creating useful LLMs for small languages.

2022

pdf
Human Ratings Do Not Reflect Downstream Utility: A Study of Free-Text Explanations for Model Predictions
Jenny Kunz | Martin Jirenius | Oskar Holmström | Marco Kuhlmann
Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

Models able to generate free-text rationales that explain their output have been proposed as an important step towards interpretable NLP for “reasoning” tasks such as natural language inference and commonsense question answering. However, the relative merits of different architectures and types of rationales are not well understood and hard to measure. In this paper, we contribute two insights to this line of research: First, we find that models trained on gold explanations learn to rely on these but, in the case of the more challenging question answering data set we use, fail when given generated explanations at test time. However, additional fine-tuning on generated explanations teaches the model to distinguish between reliable and unreliable information in explanations. Second, we compare explanations by a generation-only model to those generated by a self-rationalizing model and find that, while the former score higher in terms of validity, factual correctness, and similarity to gold explanations, they are not more useful for downstream classification. We observe that the self-rationalizing model is prone to hallucination, which is punished by most metrics but may add useful context for the classification step.

pdf bib
On the Effects of Video Grounding on Language Models
Ehsan Doostmohammadi | Marco Kuhlmann
Proceedings of the First Workshop on Performance and Interpretability Evaluations of Multimodal, Multipurpose, Massive-Scale Models

Transformer-based models trained on text and vision modalities try to improve the performance on multimodal downstream tasks or tackle the problem of lack of grounding, e.g., addressing issues like models’ insufficient commonsense knowledge. While it is more straightforward to evaluate the effects of such models on multimodal tasks, such as visual question answering or image captioning, it is not as well-understood how these tasks affect the model itself, and its internal linguistic representations. In this work, we experiment with language models grounded in videos and measure the models’ performance on predicting masked words chosen based on their imageability. The results show that the smaller model benefits from video grounding in predicting highly imageable words, while the results for the larger model seem harder to interpret.
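The evaluation idea, predicting masked words of varying imageability, can be sketched as follows; the model, sentences, and imageability groupings are illustrative assumptions, not the grounded models or data used in the paper.

```python
# Illustrative sketch only: check whether a masked-language model recovers a
# masked word, grouping targets by imageability. The model choice, sentences,
# and imageability labels are assumptions, not the paper's setup.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

examples = [
    ("She poured the water into a [MASK].", "glass", "high imageability"),
    ("They finally reached a [MASK] about the schedule.", "decision", "low imageability"),
]

for sentence, target, group in examples:
    top_predictions = [p["token_str"].strip() for p in fill_mask(sentence, top_k=5)]
    print(f"{group}: '{target}' in top-5 predictions: {target in top_predictions}")
```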

pdf
Tractable Parsing for CCGs of Bounded Degree
Lena Katharina Schiffer | Marco Kuhlmann | Giorgio Satta
Computational Linguistics, Volume 48, Issue 3 - September 2022

Unlike other mildly context-sensitive formalisms, Combinatory Categorial Grammar (CCG) cannot be parsed in polynomial time when the size of the grammar is taken into account. Refining this result, we show that the parsing complexity of CCG is exponential only in the maximum degree of composition. When that degree is fixed, parsing can be carried out in polynomial time. Our finding is interesting from a linguistic perspective because a bounded degree of composition has been suggested as a universal constraint on natural language grammar. Moreover, ours is the first complexity result for a version of CCG that includes substitution rules, which are used in practical grammars but have been ignored in theoretical work.

pdf
Where Does Linguistic Information Emerge in Neural Language Models? Measuring Gains and Contributions across Layers
Jenny Kunz | Marco Kuhlmann
Proceedings of the 29th International Conference on Computational Linguistics

Probing studies have extensively explored where in neural language models linguistic information is located. The standard approach to interpreting the results of a probing classifier is to focus on the layers whose representations give the highest performance on the probing task. We propose an alternative method that asks where the task-relevant information emerges in the model. Our framework consists of a family of metrics that explicitly model local information gain relative to the previous layer and each layer’s contribution to the model’s overall performance. We apply the new metrics to two pairs of syntactic probing tasks with different degrees of complexity and find that the metrics confirm the expected ordering only for one of the pairs. Our local metrics show a massive dominance of the first layers, indicating that the features that contribute the most to our probing tasks are not as high-level as global metrics suggest.
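The shift from global to local metrics can be illustrated with a small sketch: given per-layer probing accuracies, the local gain of each layer is its improvement over the previous layer. The numbers below are invented for illustration and are not results from the paper.

```python
# Illustrative sketch: per-layer probing accuracy versus local gain over the
# previous layer. The accuracy values are made up, not results from the paper.
layer_accuracy = [0.62, 0.79, 0.83, 0.84, 0.84, 0.83]

# Local gain: layer 0 is measured from zero here; in practice one would
# compare against a non-contextual baseline instead.
gains = [layer_accuracy[0]] + [
    curr - prev for prev, curr in zip(layer_accuracy, layer_accuracy[1:])
]

for layer, (acc, gain) in enumerate(zip(layer_accuracy, gains)):
    print(f"layer {layer}: accuracy = {acc:.2f}, local gain = {gain:+.2f}")
```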

2021

pdf bib
Test Harder than You Train: Probing with Extrapolation Splits
Jenny Kunz | Marco Kuhlmann
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

Previous work on probing word representations for linguistic knowledge has focused on interpolation tasks. In this paper, we instead analyse probes in an extrapolation setting, where the inputs at test time are deliberately chosen to be ‘harder’ than the training examples. We argue that such an analysis can shed further light on the open question whether probes actually decode linguistic knowledge, or merely learn the diagnostic task from shallow features. To quantify the hardness of an example, we consider scoring functions based on linguistic, statistical, and learning-related criteria, all of which are applicable to a broad range of NLP tasks. We discuss the relative merits of these criteria in the context of two syntactic probing tasks, part-of-speech tagging and syntactic dependency labelling. From our theoretical and experimental analysis, we conclude that distance-based and hard statistical criteria show the clearest differences between interpolation and extrapolation settings, while at the same time being transparent, intuitive, and easy to control.
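A minimal sketch of such an extrapolation split is shown below; sentence length stands in for the hardness criteria discussed in the paper, and the example sentences are invented.

```python
# Minimal sketch of an extrapolation split: score examples for 'hardness',
# train on the easiest, test only on the hardest. Sentence length is an
# illustrative stand-in for the paper's linguistic, statistical, and
# learning-related criteria; the sentences are invented.
def extrapolation_split(examples, hardness, train_fraction=0.8):
    """Sort by hardness so the test set is strictly harder than the training set."""
    ranked = sorted(examples, key=hardness)
    cut = int(len(ranked) * train_fraction)
    return ranked[:cut], ranked[cut:]

sentences = [
    "Dogs bark .".split(),
    "The old dog barked loudly .".split(),
    "The dog that the cat chased barked at the mailman yesterday .".split(),
    "Cats sleep .".split(),
    "Birds that sing early wake the whole neighbourhood .".split(),
]
train, test = extrapolation_split(sentences, hardness=len, train_fraction=0.6)
print("train lengths:", [len(s) for s in train])  # shorter (easier) sentences
print("test lengths: ", [len(s) for s in test])   # longer (harder) sentences
```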

2020

pdf
Classifier Probes May Just Learn from Linear Context Features
Jenny Kunz | Marco Kuhlmann
Proceedings of the 28th International Conference on Computational Linguistics

Classifiers trained on auxiliary probing tasks are a popular tool to analyze the representations learned by neural sentence encoders such as BERT and ELMo. While many authors are aware of the difficulty of distinguishing between “extracting the linguistic structure encoded in the representations” and “learning the probing task,” the validity of probing methods calls for further research. Using a neighboring word identity prediction task, we show that the token embeddings learned by neural sentence encoders contain a significant amount of information about the exact linear context of the token, and hypothesize that, with such information, learning standard probing tasks may be feasible even without additional linguistic structure. We develop this hypothesis into a framework in which analysis efforts can be scrutinized and argue that, with current models and baselines, conclusions that representations contain linguistic structure are not well-founded. Current probing methodology, such as restricting the classifier’s expressiveness or using strong baselines, can help to better estimate the complexity of learning, but not build a foundation for speculations about the nature of the linguistic structure encoded in the learned representations.
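The probing task used above can be sketched in a few lines: each token position is labelled with the identity of its left neighbour, and a probe would then predict this label from the token's contextual embedding. The sentence and construction below are illustrative; the paper works with large corpora and encoders such as BERT and ELMo.

```python
# Illustrative sketch of the neighbouring-word identity probing task: the
# label for each position is the immediately preceding word. A probe would be
# trained to predict this label from the token's contextual embedding.
sentence = "the quick brown fox jumps over the lazy dog".split()

probe_examples = [
    {"position": i, "token": tok, "label_prev_word": sentence[i - 1]}
    for i, tok in enumerate(sentence)
    if i > 0  # the first token has no left neighbour
]

for example in probe_examples[:3]:
    print(example)
# {'position': 1, 'token': 'quick', 'label_prev_word': 'the'}
# {'position': 2, 'token': 'brown', 'label_prev_word': 'quick'}
# {'position': 3, 'token': 'fox', 'label_prev_word': 'brown'}
```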

pdf
End-to-End Negation Resolution as Graph Parsing
Robin Kurtz | Stephan Oepen | Marco Kuhlmann
Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies

We present a neural end-to-end architecture for negation resolution based on a formulation of the task as a graph parsing problem. Our approach allows for the straightforward inclusion of many types of graph-structured features without the need for representation-specific heuristics. In our experiments, we specifically gauge the usefulness of syntactic information for negation resolution. Despite the conceptual simplicity of our architecture, we achieve state-of-the-art results on the Conan Doyle benchmark dataset, including a new top result for our best model.

2019

pdf bib
Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning
Stephan Oepen | Omri Abend | Jan Hajic | Daniel Hershcovich | Marco Kuhlmann | Tim O’Gorman | Nianwen Xue
Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning

pdf bib
MRP 2019: Cross-Framework Meaning Representation Parsing
Stephan Oepen | Omri Abend | Jan Hajic | Daniel Hershcovich | Marco Kuhlmann | Tim O’Gorman | Nianwen Xue | Jayeol Chun | Milan Straka | Zdenka Uresova
Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning

The 2019 Shared Task at the Conference on Computational Natural Language Learning (CoNLL) was devoted to Meaning Representation Parsing (MRP) across frameworks. Five distinct approaches to the representation of sentence meaning in the form of directed graphs were represented in the training and evaluation data for the task, packaged in a uniform abstract graph representation and serialization. The task received submissions from eighteen teams, of which five did not participate in the official ranking because they arrived after the closing deadline, made use of additional training data, or involved one of the task co-organizers. All technical information regarding the task, including system submissions, official results, and links to supporting resources and software, is available from the task web site at: http://mrp.nlpl.eu

pdf bib
Improving Semantic Dependency Parsing with Syntactic Features
Robin Kurtz | Daniel Roxbo | Marco Kuhlmann
Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing

We extend a state-of-the-art deep neural architecture for semantic dependency parsing with features defined over syntactic dependency trees. Our empirical results show that only gold-standard syntactic information leads to consistent improvements in semantic parsing accuracy, and that the magnitude of these improvements varies with the specific combination of the syntactic and the semantic representation used. In contrast, automatically predicted syntax does not seem to help semantic parsing. Our error analysis suggests that there is a significant overlap between syntactic and semantic representations.

2018

pdf
On the Complexity of CCG Parsing
Marco Kuhlmann | Giorgio Satta | Peter Jonsson
Computational Linguistics, Volume 44, Issue 3 - September 2018

We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove that any parsing algorithm for this formalism will take in the worst case exponential time when the size of the grammar, and not only the length of the input sentence, is included in the analysis. This sets the formalism of Vijay-Shanker and Weir (1994) apart from weakly equivalent formalisms such as Tree Adjoining Grammar, for which parsing can be performed in time polynomial in the combined size of grammar and input sentence. Our results contribute to a refined understanding of the class of mildly context-sensitive grammars, and inform the search for new, mildly context-sensitive versions of CCG.

2017

pdf bib
Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms
Marco Kuhlmann | Tatjana Scheffler
Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms

pdf
Exploiting Structure in Parsing to 1-Endpoint-Crossing Graphs
Robin Kurtz | Marco Kuhlmann
Proceedings of the 15th International Conference on Parsing Technologies

Deep dependency parsing can be cast as the search for maximum acyclic subgraphs in weighted digraphs. Because this search problem is intractable in the general case, we consider its restriction to the class of 1-endpoint-crossing (1ec) graphs, which has high coverage on standard data sets. Our main contribution is a characterization of 1ec graphs as a subclass of the graphs with pagenumber at most 3. Building on this we show how to extend an existing parsing algorithm for 1-endpoint-crossing trees to the full class. While the runtime complexity of the extended algorithm is polynomial in the length of the input sentence, it features a large constant, which poses a challenge for practical implementations.

2016

pdf
Squibs: Towards a Catalogue of Linguistic Graph Banks
Marco Kuhlmann | Stephan Oepen
Computational Linguistics, Volume 42, Issue 4 - December 2016

pdf
Towards Comparability of Linguistic Graph Banks for Semantic Parsing
Stephan Oepen | Marco Kuhlmann | Yusuke Miyao | Daniel Zeman | Silvie Cinková | Dan Flickinger | Jan Hajič | Angelina Ivanova | Zdeňka Urešová
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

We announce a new language resource for research on semantic parsing, a large, carefully curated collection of semantic dependency graphs representing multiple linguistic traditions. This resource is called SDP 2016 and provides an update and extension to previous versions used as Semantic Dependency Parsing target representations in the 2014 and 2015 Semantic Evaluation Exercises. For a common core of English text, this third edition comprises semantic dependency graphs from four distinct frameworks, packaged in a unified abstract format and aligned at the sentence and token levels. SDP 2016 is the first general release of this resource and is available for licensing from the Linguistic Data Consortium in May 2016. The data is accompanied by an open-source SDP utility toolkit and system results from previous contrastive parsing evaluations against these target representations.

2015

pdf bib
Lexicalization and Generative Power in CCG
Marco Kuhlmann | Alexander Koller | Giorgio Satta
Computational Linguistics, Volume 41, Issue 2 - June 2015

pdf
Parsing to Noncrossing Dependency Graphs
Marco Kuhlmann | Peter Jonsson
Transactions of the Association for Computational Linguistics, Volume 3

We study the generalization of maximum spanning tree dependency parsing to maximum acyclic subgraphs. Because the underlying optimization problem is intractable even under an arc-factored model, we consider the restriction to noncrossing dependency graphs. Our main contribution is a cubic-time exact inference algorithm for this class. We extend this algorithm into a practical parser and evaluate its performance on four linguistic data sets used in semantic dependency parsing. We also explore a generalization of our parsing framework to dependency graphs with pagenumber at most k and show that the resulting optimization problem is NP-hard for k ≥ 2.

pdf bib
Proceedings of the 14th Meeting on the Mathematics of Language (MoL 2015)
Marco Kuhlmann | Makoto Kanazawa | Gregory M. Kobele
Proceedings of the 14th Meeting on the Mathematics of Language (MoL 2015)

pdf
SemEval 2015 Task 18: Broad-Coverage Semantic Dependency Parsing
Stephan Oepen | Marco Kuhlmann | Yusuke Miyao | Daniel Zeman | Silvie Cinková | Dan Flickinger | Jan Hajič | Zdeňka Urešová
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)

2014

pdf
SemEval 2014 Task 8: Broad-Coverage Semantic Dependency Parsing
Stephan Oepen | Marco Kuhlmann | Yusuke Miyao | Daniel Zeman | Dan Flickinger | Jan Hajič | Angelina Ivanova | Yi Zhang
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

pdf
Linköping: Cubic-Time Graph Parsing with a Simple Scoring Scheme
Marco Kuhlmann
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

pdf
A New Parsing Algorithm for Combinatory Categorial Grammar
Marco Kuhlmann | Giorgio Satta
Transactions of the Association for Computational Linguistics, Volume 2

We present a polynomial-time parsing algorithm for CCG, based on a new decomposition of derivations into small, shareable parts. Our algorithm has the same asymptotic complexity, O(n^6), as a previous algorithm by Vijay-Shanker and Weir (1993), but is easier to understand, implement, and prove correct.

2013

pdf
Efficient Parsing for Head-Split Dependency Trees
Giorgio Satta | Marco Kuhlmann
Transactions of the Association for Computational Linguistics, Volume 1

Head splitting techniques have been successfully exploited to improve the asymptotic runtime of parsing algorithms for projective dependency trees, under the arc-factored model. In this article we extend these techniques to a class of non-projective dependency trees, called well-nested dependency trees with block-degree at most 2, which has been previously investigated in the literature. We define a structural property that allows head splitting for these trees, and present two algorithms that improve over the runtime of existing algorithms at no significant loss in coverage.

pdf
Mildly Non-Projective Dependency Grammar
Marco Kuhlmann
Computational Linguistics, Volume 39, Issue 2 - June 2013

pdf bib
Proceedings of the 13th Meeting on the Mathematics of Language (MoL 13)
András Kornai | Marco Kuhlmann
Proceedings of the 13th Meeting on the Mathematics of Language (MoL 13)

pdf
Overview of the SPMRL 2013 Shared Task: A Cross-Framework Evaluation of Parsing Morphologically Rich Languages
Djamé Seddah | Reut Tsarfaty | Sandra Kübler | Marie Candito | Jinho D. Choi | Richárd Farkas | Jennifer Foster | Iakes Goenaga | Koldo Gojenola Galletebeitia | Yoav Goldberg | Spence Green | Nizar Habash | Marco Kuhlmann | Wolfgang Maier | Joakim Nivre | Adam Przepiórkowski | Ryan Roth | Wolfgang Seeker | Yannick Versley | Veronika Vincze | Marcin Woliński | Alina Wróblewska | Eric Villemonte de la Clergerie
Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages

2012

pdf bib
Proceedings of the Workshop on Applications of Tree Automata Techniques in Natural Language Processing
Frank Drewes | Marco Kuhlmann
Proceedings of the Workshop on Applications of Tree Automata Techniques in Natural Language Processing

pdf
A Formal Model for Plausible Dependencies in Lexicalized Tree Adjoining Grammar
Laura Kallmeyer | Marco Kuhlmann
Proceedings of the 11th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+11)

pdf
Decomposing TAG Algorithms Using Simple Algebraizations
Alexander Koller | Marco Kuhlmann
Proceedings of the 11th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+11)

pdf
Tree-Adjoining Grammars Are Not Closed Under Strong Lexicalization
Marco Kuhlmann | Giorgio Satta
Computational Linguistics, Volume 38, Issue 3 - September 2012

2011

pdf
Dynamic Programming Algorithms for Transition-Based Dependency Parsers
Marco Kuhlmann | Carlos Gómez-Rodríguez | Giorgio Satta
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf bib
A Generalized View on Parsing and Translation
Alexander Koller | Marco Kuhlmann
Proceedings of the 12th International Conference on Parsing Technologies

2010

pdf bib
Proceedings of the 2010 Workshop on Applications of Tree Automata in Natural Language Processing
Frank Drewes | Marco Kuhlmann
Proceedings of the 2010 Workshop on Applications of Tree Automata in Natural Language Processing

pdf
Efficient Parsing of Well-Nested Linear Context-Free Rewriting Systems
Carlos Gómez-Rodríguez | Marco Kuhlmann | Giorgio Satta
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf
The Importance of Rule Restrictions in CCG
Marco Kuhlmann | Alexander Koller | Giorgio Satta
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

2009

pdf
Optimal Reduction of Rule Length in Linear Context-Free Rewriting Systems
Carlos Gómez-Rodríguez | Marco Kuhlmann | Giorgio Satta | David Weir
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf
An Improved Oracle for Dependency Parsing with Online Reordering
Joakim Nivre | Marco Kuhlmann | Johan Hall
Proceedings of the 11th International Conference on Parsing Technologies (IWPT’09)

pdf
Dependency Trees and the Strong Generative Capacity of CCG
Alexander Koller | Marco Kuhlmann
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)

pdf
Treebank Grammar Techniques for Non-Projective Dependency Parsing
Marco Kuhlmann | Giorgio Satta
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)

2007

pdf
Mildly Context-Sensitive Dependency Languages
Marco Kuhlmann | Mathias Möhl
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2006

pdf
Extended Cross-Serial Dependencies in Tree Adjoining Grammars
Marco Kuhlmann | Mathias Möhl
Proceedings of the Eighth International Workshop on Tree Adjoining Grammar and Related Formalisms

pdf
Mildly Non-Projective Dependency Structures
Marco Kuhlmann | Joakim Nivre
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

2004

pdf
A Relational Syntax-Semantics Interface Based on Dependency Grammar
Ralph Debusmann | Denys Duchier | Alexander Koller | Marco Kuhlmann | Gert Smolka | Stefan Thater
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

pdf
TAG Parsing as Model Enumeration
Ralph Debusmann | Denys Duchier | Marco Kuhlmann | Stefan Thater
Proceedings of the 7th International Workshop on Tree Adjoining Grammar and Related Formalisms