Procedural text understanding is a challenging language reasoning task that requires models to track entity states across the development of a narrative. We identify three core aspects required for modeling this task, namely the local and global views of the input, as well as the global view of the output. Prior methods have considered only a subset of these aspects, which leads to either low precision or low recall. In this paper, we propose a new model, Coalescing Global and Local Information (CGLI), which builds entity- and timestep-aware input representations (local input) that consider the whole context (global input), and jointly models the entity states with a structured prediction objective (global output). Thus, CGLI simultaneously optimizes for both precision and recall. Moreover, we extend CGLI with additional output layers and integrate it into a story reasoning framework. Extensive experiments on a popular procedural text understanding dataset show that our model achieves state-of-the-art results, while experiments on a story reasoning benchmark show the positive impact of our model on downstream reasoning.
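To make the "global output" idea concrete, the minimal sketch below decodes an entity's state sequence jointly across timesteps with CRF-style Viterbi decoding, instead of classifying each step independently; the state set, scores, and transition constraints are illustrative assumptions, not the paper's actual model.

    import numpy as np

    # Hypothetical entity states for a procedural-text entity (illustrative only).
    STATES = ["not_exist", "exist", "moved", "destroyed"]

    def viterbi_decode(emissions, transitions):
        """Jointly decode the best state sequence across timesteps.

        emissions:   (T, S) per-timestep scores for each state (local view).
        transitions: (S, S) compatibility scores between consecutive states
                     (global view over the output sequence).
        """
        T, S = emissions.shape
        score = emissions[0].copy()
        backptr = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            # Score of reaching state j at step t from the best previous state.
            cand = score[:, None] + transitions + emissions[t][None, :]
            backptr[t] = cand.argmax(axis=0)
            score = cand.max(axis=0)
        # Follow back-pointers from the best final state.
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(backptr[t, path[-1]]))
        return [STATES[s] for s in reversed(path)]

    # Toy example: 3 timesteps, 4 states; forbid the "destroyed" -> "exist" transition.
    emissions = np.random.randn(3, len(STATES))
    transitions = np.zeros((len(STATES), len(STATES)))
    transitions[STATES.index("destroyed"), STATES.index("exist")] = -1e9
    print(viterbi_decode(emissions, transitions))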
In this paper, we describe our systems for the two Doc2Dial shared task subtasks: knowledge identification and response generation. We propose several pre-processing and post-processing methods, and we experiment with data augmentation by pre-training the models on other relevant datasets. Our best model for knowledge identification outperforms the baseline by more than 10.5 F1 points on the test-dev split, and our best model for response generation outperforms the baseline by more than 11 SacreBLEU points on the test-dev split.
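For reference, the SacreBLEU score used above can be computed with the sacrebleu package; the strings here are placeholders, not data from the shared task.

    import sacrebleu  # pip install sacrebleu

    # Placeholder system outputs and references (not from the Doc2Dial task).
    hypotheses = ["you can renew your license online"]
    references = [["you may renew your license online"]]

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"SacreBLEU: {bleu.score:.2f}")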
Commonsense reasoning benchmarks have been largely solved by fine-tuning language models. The downside is that fine-tuning may cause models to overfit to task-specific data and thereby forget the knowledge gained during pre-training. Recent work therefore proposes only lightweight model updates, since models may already possess useful knowledge from past experience, but a challenge remains in understanding which parts of a model should be refined, and to what extent, for a given task. In this paper, we investigate what models learn from commonsense reasoning datasets. We measure the impact of three different adaptation methods on the generalization and accuracy of models. Our experiments with two models show that fine-tuning performs best, by learning both the content and the structure of the task, but suffers from overfitting and limited generalization to novel answers. We observe that alternative adaptation methods like prefix-tuning achieve comparable accuracy, but generalize better to unseen answers and are more robust to adversarial splits.
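As a rough illustration of the kind of lightweight update contrasted with full fine-tuning, the sketch below freezes a base encoder and trains only a small set of prepended prefix embeddings (a simplified prompt/prefix-tuning variant); the stand-in encoder, dimensions, and prefix length are hypothetical and not the paper's setup.

    import torch
    import torch.nn as nn

    class PrefixTunedEncoder(nn.Module):
        """Simplified prefix/prompt tuning: only `prefix` receives gradients."""

        def __init__(self, base_encoder: nn.Module, hidden_size: int, prefix_len: int = 10):
            super().__init__()
            self.base_encoder = base_encoder
            for p in self.base_encoder.parameters():
                p.requires_grad = False  # keep pre-trained knowledge frozen
            # Trainable prefix vectors prepended to the input embeddings.
            self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_size) * 0.02)

        def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
            # input_embeds: (batch, seq_len, hidden_size)
            batch = input_embeds.size(0)
            prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
            return self.base_encoder(torch.cat([prefix, input_embeds], dim=1))

    # Toy usage with a stand-in encoder (a real setup would wrap a pre-trained LM).
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
    model = PrefixTunedEncoder(encoder, hidden_size=64)
    out = model(torch.randn(2, 16, 64))
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(out.shape, trainable, "trainable parameters")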
Non-extractive commonsense QA remains a challenging AI task, as it requires systems to gather, synthesize, and reason over disparate pieces of information in order to generate responses to queries. Recent approaches to such tasks show improved performance only when models are pre-trained with additional information or when domain-specific heuristics are used, with little consideration of the type of knowledge resource involved. In this paper, we survey recent commonsense QA methods and provide a systematic analysis of popular knowledge resources and knowledge-integration methods across benchmarks from multiple commonsense datasets. Our results and analysis show that attention-based injection seems to be a preferable choice for knowledge integration, and that the degree of domain overlap between knowledge bases and datasets plays a crucial role in determining model success.
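To illustrate what "attention-based injection" can look like in its generic form, the sketch below fuses external knowledge embeddings into token representations via scaled dot-product attention with a residual connection; the shapes and fusion step are assumptions for illustration, not a specific system from the survey.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def inject_knowledge(question_vecs, knowledge_vecs):
        """Fuse retrieved knowledge embeddings into token representations.

        question_vecs:  (seq_len, d)  encoder states for the question/context.
        knowledge_vecs: (n_facts, d)  embeddings of retrieved KB facts/triples.
        """
        d = question_vecs.shape[-1]
        # Each token attends over the knowledge facts (scaled dot-product attention).
        attn = softmax(question_vecs @ knowledge_vecs.T / np.sqrt(d), axis=-1)
        gathered = attn @ knowledge_vecs   # (seq_len, d) knowledge summary per token
        return question_vecs + gathered    # residual fusion of text and knowledge

    tokens = np.random.randn(8, 32)   # toy question encoding
    facts = np.random.randn(5, 32)    # toy knowledge embeddings
    print(inject_knowledge(tokens, facts).shape)  # (8, 32)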
This paper focuses on improving the conceptual structure of FrameNet (FN) so that the resource can be applied to knowledge-intensive NLP tasks requiring reasoning, such as question answering and information extraction. We show that, in addition to incomplete coverage, the current version of FN suffers from conceptual inconsistency and lacks axiomatization, which can prevent appropriate inferences. To discover and classify conceptual problems in FN, we investigate the FrameNet-Annotated Corpus for Textual Entailment. We then propose a methodology for improving the conceptual organization of FN. The main issue we focus on is enriching, axiomatizing, and cleaning up frame relations. Our methodology includes a data-driven analysis of frames, which results in the discovery of new frame relations, and an ontological analysis of frames and frame relations, which results in axiomatized relations and constraints on them. Frames and frame relations are analyzed in terms of the DOLCE formal ontology. Additionally, we describe a case study that demonstrates how the proposed methodology works in practice and investigates the impact of the restructured and axiomatized frame relations on recognizing textual entailment.
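As a toy illustration of the kind of constraint such axiomatization can introduce (this example axiom is ours, not taken from the paper), frame inheritance can be treated as subsumption and required to be transitive:

    \forall f_1, f_2 \; \big(\mathrm{InheritsFrom}(f_1, f_2) \rightarrow \forall x \, (\mathrm{instanceOf}(x, f_1) \rightarrow \mathrm{instanceOf}(x, f_2))\big)
    \forall f_1, f_2, f_3 \; \big(\mathrm{InheritsFrom}(f_1, f_2) \wedge \mathrm{InheritsFrom}(f_2, f_3) \rightarrow \mathrm{InheritsFrom}(f_1, f_3)\big)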
This paper introduces the general features of Senso Comune, an open knowledge base for the Italian language, focusing on the interplay of lexical and ontological knowledge and outlining our approach to conceptual knowledge elicitation. Senso Comune consists of a machine-readable lexicon constrained by an ontological infrastructure. The idea at the basis of Senso Comune is that natural languages exist in use and belong to their users. In the line of Saussure's linguistics, natural languages are seen as a social product whose main strength relies on user consensus. At the same time, language has specific goals, i.e., referring to entities that belong to the users' world (be it physical or not) and that are made up in the social environments where expressions are produced and understood. This usage leverages the creativity of those who produce words and try to understand them. This is why ontology, i.e., a shared conceptualization of the world, can be regarded as the soil in which the speakers' consensus may be rooted. Some final remarks concerning future work and applications are also given.
In this paper we claim that an integration of FrameNet and WordNet would improve the interoperability, user-friendliness, and usability of both lexical resources. While the former provides a sophisticated representational structure but only narrow lexical coverage, the latter supplies a dense network of word senses and semantic relations, although it does not support advanced access (i.e., via frames). Following the integration perspective presented in the paper, we introduce the LexiPass methodology, which combines Burchardt's tool WordNet Detour of FrameNet with basic statistical analysis, enabling frame-guided search and extraction of domain synsets from WordNet.
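As a toy illustration of frame-guided synset extraction (not the actual LexiPass implementation), one could start from the lexical units evoking a frame and collect their WordNet synsets with NLTK, then filter them with domain statistics; the frame and lexical units below are hypothetical examples.

    from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

    # Hypothetical lexical units evoking a commerce-like frame (illustrative only).
    lexical_units = [("buy", "v"), ("purchase", "v"), ("acquire", "v")]

    POS_MAP = {"v": wn.VERB, "n": wn.NOUN, "a": wn.ADJ}

    def candidate_synsets(units):
        """Collect WordNet synsets for each frame-evoking lexical unit."""
        synsets = {}
        for lemma, pos in units:
            synsets[lemma] = wn.synsets(lemma, pos=POS_MAP[pos])
        return synsets

    for lemma, syns in candidate_synsets(lexical_units).items():
        print(lemma, [s.name() for s in syns[:3]])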