Mihai Surdeanu


2021

How May I Help You? Using Neural Text Simplification to Improve Downstream NLP Tasks
Hoang Van | Zheng Tang | Mihai Surdeanu
Findings of the Association for Computational Linguistics: EMNLP 2021

The general goal of text simplification (TS) is to reduce text complexity for human consumption. In this paper, we investigate another potential use of neural TS: assisting machines performing natural language processing (NLP) tasks. We evaluate the use of neural TS in two ways: simplifying input texts at prediction time and augmenting data to provide machines with additional information during training. We demonstrate that the latter scenario provides positive effects on machine performance on two separate datasets. In particular, the latter use of TS improves the performance of LSTM (1.82–1.98%) and SpanBERT (0.7–1.3%) extractors on TACRED, a complex, large-scale, real-world relation extraction task. Further, the same setting yields improvements of up to 0.65% matched and 0.62% mismatched accuracy for a BERT text classifier on MNLI, a practical natural language inference dataset.

Explainable Multi-hop Verbal Reasoning Through Internal Monologue
Zhengzhong Liang | Steven Bethard | Mihai Surdeanu
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Many state-of-the-art (SOTA) language models have achieved high accuracy on several multi-hop reasoning problems. However, these approaches tend not to be interpretable because they do not make the intermediate reasoning steps explicit. Moreover, models trained on simpler tasks tend to fail when directly tested on more complex problems. We propose the Explainable multi-hop Verbal Reasoner (EVR) to address these limitations by (a) decomposing multi-hop reasoning problems into several simpler ones, and (b) using natural language to guide the intermediate reasoning hops. We implement EVR by extending the classic reasoning paradigm General Problem Solver (GPS) with a SOTA generative language model to generate subgoals and perform inference in natural language at each reasoning step. Evaluation of EVR on the RuleTaker synthetic question answering (QA) dataset shows that EVR achieves SOTA performance while being able to generate all reasoning steps in natural language. Furthermore, EVR generalizes better than other strong methods when trained on simpler tasks or less training data (up to 35.7% and 7.7% absolute improvement respectively).

Data and Model Distillation as a Solution for Domain-transferable Fact Verification
Mitch Paul Mithun | Sandeep Suntwal | Mihai Surdeanu
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

While neural networks produce state-of-the-art performance in several NLP tasks, they generally depend heavily on lexicalized information, which transfers poorly between domains. We present a combination of two strategies to mitigate this dependence on lexicalized information in fact verification tasks. We present a data distillation technique for delexicalization, which we then combine with a model distillation method to prevent aggressive data distillation. We show that, with our solution, the performance of an existing state-of-the-art model not only remains on par with that of the model trained on fully lexicalized data, but also improves when tested out of domain. We show that the technique we present encourages models to extract transferable facts from a given fact verification dataset.

If You Want to Go Far Go Together: Unsupervised Joint Candidate Evidence Retrieval for Multi-hop Question Answering
Vikas Yadav | Steven Bethard | Mihai Surdeanu
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Multi-hop reasoning requires aggregation and inference from multiple facts. To retrieve such facts, we propose a simple approach that retrieves and reranks sets of evidence facts jointly. Our approach first generates unsupervised clusters of sentences as candidate evidence, accounting for links between sentences and for their coverage of the given query. Then, a RoBERTa-based reranker is trained to bring the most representative evidence cluster to the top. We specifically emphasize the importance of retrieving evidence jointly through several comparative analyses against methods that retrieve and rerank evidence sentences individually. First, we introduce several attention- and embedding-based analyses, which indicate that joint retrieval-and-reranking approaches can learn the compositional knowledge required for multi-hop reasoning. Second, our experiments show that jointly retrieving candidate evidence leads to substantially higher evidence retrieval performance when fed to the same supervised reranker. In particular, our joint retrieval and reranking approach achieves new state-of-the-art evidence retrieval performance on two multi-hop question answering (QA) datasets: 30.5 Recall@2 on QASC, and 67.6% F1 on MultiRC. When the evidence text from our joint retrieval approach is fed to a RoBERTa-based answer selection classifier, we achieve new state-of-the-art QA performance on MultiRC and the second-best result on QASC.
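
As an illustration of the joint-retrieval idea, here is a minimal sketch (not the paper's implementation) of generating and scoring candidate evidence clusters, assuming sentences and the query are given as sets of non-stopword terms; a trained reranker would then reorder the top clusters.

```python
from itertools import combinations

def candidate_clusters(sentences, query, size=2):
    """Sketch of unsupervised joint candidate generation.

    sentences: list of term sets (one per sentence); query: term set.
    A group qualifies if every sentence pair shares a linking term that
    lies outside the query, and it is scored by how much of the query
    the group covers *jointly*, not sentence by sentence.
    """
    clusters = []
    for combo in combinations(range(len(sentences)), size):
        terms = [sentences[i] for i in combo]
        linked = all((a & b) - query for a, b in combinations(terms, 2))
        if linked:
            coverage = len(set().union(*terms) & query) / max(1, len(query))
            clusters.append((coverage, combo))
    return sorted(clusters, reverse=True)

query = {"plants", "make", "food", "sunlight"}
sents = [{"plants", "use", "photosynthesis"},
         {"photosynthesis", "turns", "sunlight", "into", "food"},
         {"animals", "eat", "food"}]
print(candidate_clusters(sents, query))  # [(0.75, (0, 1))]
```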

Students Who Study Together Learn Better: On the Importance of Collective Knowledge Distillation for Domain Transfer in Fact Verification
Mitch Paul Mithun | Sandeep Suntwal | Mihai Surdeanu
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

While neural networks produce state-of-the-art performance in several NLP tasks, they generally depend heavily on lexicalized information, which transfers poorly between domains. Previous work has proposed delexicalization as a form of knowledge distillation to reduce the dependency on such lexical artifacts. However, a critical unsolved issue remains: how much delexicalization to apply. A little helps reduce overfitting, but too much discards useful information. We propose Group Learning, a knowledge and model distillation approach for fact verification in which multiple student models have access to different delexicalized views of the data, but are encouraged to learn from each other through pairwise consistency losses. In several cross-domain experiments between the FEVER and FNC fact verification datasets, we show that our approach learns the best delexicalization strategy for the given training dataset, and outperforms state-of-the-art classifiers that rely on the original data.

Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder
Zheng Tang | Mihai Surdeanu
Proceedings of the First Workshop on Trustworthy Natural Language Processing

We introduce a method that transforms a rule-based relation extraction (RE) classifier into a neural one such that both interpretability and performance are achieved. Our approach jointly trains an RE classifier with a decoder that generates explanations for these extractions, using as sole supervision a set of rules that match these relations. Our evaluation on the TACRED dataset shows that our neural RE classifier outperforms the rule-based one we started from by 9 F1 points; our decoder generates explanations with a high BLEU score of over 90%; and the joint learning improves the performance of both the classifier and the decoder.

2020

An Analysis of Capsule Networks for Part of Speech Tagging in High- and Low-resource Scenarios
Andrew Zupon | Faiz Rafique | Mihai Surdeanu
Proceedings of the First Workshop on Insights from Negative Results in NLP

Neural networks are a common tool in NLP, but it is not always clear which architecture to use for a given task. Different tasks, different languages, and different training conditions can all affect how a neural network will perform. Capsule Networks (CapsNets) are a relatively new architecture in NLP. Due to their novelty, CapsNets are being used more and more in NLP tasks. However, their usefulness is still mostly untested. In this paper, we compare three neural network architectures—LSTM, CNN, and CapsNet—on a part-of-speech tagging task. We compare these architectures in both high- and low-resource training conditions and find that no architecture consistently performs the best. Our analysis shows that our CapsNet performs nearly as well as a more complex LSTM under certain training conditions, but not others, and that our CapsNet almost always outperforms our CNN. We also find that our CapsNet implementation shows faster prediction times than the LSTM for Scottish Gaelic but not for Spanish, highlighting the effect that the choice of languages can have on the models.

Do Transformers Dream of Inference, or Can Pretrained Generative Models Learn Implicit Inferential Rules?
Zhengzhong Liang | Mihai Surdeanu
Proceedings of the First Workshop on Insights from Negative Results in NLP

Large pretrained language models (LMs) have been used successfully for multi-hop question answering. However, most of these approaches are not interpretable, as they do not make explicit the inference hops necessary to explain a candidate answer. In this work, we investigate the capability of a state-of-the-art transformer LM to generate explicit inference hops, i.e., to infer a new statement necessary to answer a question given some premise input statements. Our analysis shows that such LMs can generate new statements for some simple inference types, but performance remains poor for complex, real-world inference types such as those that require monotonicity, composition, and commonsense knowledge.

An Unsupervised Method for Learning Representations of Multi-word Expressions for Semantic Classification
Robert Vacareanu | Marco A. Valenzuela-Escárcega | Rebecca Sharp | Mihai Surdeanu
Proceedings of the 28th International Conference on Computational Linguistics

This paper explores an unsupervised approach to learning a compositional representation function for multi-word expressions (MWEs), and evaluates it on the Tratz dataset, which associates two-word expressions with the semantic relation between the compound constituents (e.g. the label employer is associated with the noun compound government agency) (Tratz, 2011). The composition function is based on recurrent neural networks, and is trained using the Skip-Gram objective to predict the words in the context of MWEs. Thus our approach can naturally leverage large unlabeled text sources. Further, our method can make use of provided MWEs when available, but can also function as a completely unsupervised algorithm, using MWE boundaries predicted by a single, domain-agnostic part-of-speech pattern. With pre-defined MWE boundaries, our method outperforms the previous state-of-the-art performance on the coarse-grained evaluation of the Tratz dataset (Tratz, 2011), with an F1 score of 50.4%. The unsupervised version of our method approaches the performance of the supervised one, and even outperforms it in some configurations.

Parsing as Tagging
Robert Vacareanu | George Caique Gouveia Barbosa | Marco A. Valenzuela-Escárcega | Mihai Surdeanu
Proceedings of the 12th Language Resources and Evaluation Conference

We propose a simple yet accurate method for dependency parsing that treats parsing as tagging (PaT). That is, our approach addresses the parsing of dependency trees with a sequence model implemented with a bidirectional LSTM over BERT embeddings, where the “tag” to be predicted at each token position is the relative position of the corresponding head. For example, for the sentence John eats cake, the tag to be predicted for the token cake is -1 because its head (eats) occurs one token to the left. Despite its simplicity, our approach performs well. For example, our approach outperforms the state-of-the-art method of (Fernández-González and Gómez-Rodríguez, 2019) on Universal Dependencies (UD) by 1.76% unlabeled attachment score (UAS) for English, 1.98% UAS for French, and 1.16% UAS for German. On average, on 12 UD languages, our method with minimal tuning performs comparably with this state-of-the-art approach: better by 0.11% UAS, and worse by 0.58% LAS.
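
The head-offset encoding is simple enough to show in a few lines; this is an illustrative sketch of the tagging scheme only (the paper's model predicts these tags with a bidirectional LSTM over BERT embeddings).

```python
def heads_to_tags(heads):
    """Encode each token's head as a relative offset, as in PaT.

    heads[i] is the 0-based index of token i's head, or None for the root.
    The tag for token i is heads[i] - i (e.g., -1 means "the head is one
    token to the left"); the root gets a special tag.
    """
    return ["ROOT" if h is None else h - i for i, h in enumerate(heads)]

def tags_to_heads(tags):
    """Invert the encoding: recover absolute head positions from tags."""
    return [None if t == "ROOT" else i + t for i, t in enumerate(tags)]

# "John eats cake": eats is the root; John and cake attach to eats (index 1).
heads = [1, None, 1]
tags = heads_to_tags(heads)   # [1, 'ROOT', -1]  -- cake gets -1, as in the paper
assert tags_to_heads(tags) == heads
```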

Towards the Necessity for Debiasing Natural Language Inference Datasets
Mithun Paul Panenghat | Sandeep Suntwal | Faiz Rafique | Rebecca Sharp | Mihai Surdeanu
Proceedings of the 12th Language Resources and Evaluation Conference

Modeling natural language inference is a challenging task. With large annotated data sets available, it has now become feasible to train complex neural network based inference methods which achieve state-of-the-art performance. However, it has been shown that these models also learn from the subtle biases inherent in these datasets (CITATION). In this work we explore two techniques for delexicalization that modify the datasets in such a way that we can control the importance that neural-network based methods place on lexical entities. We demonstrate that the proposed methods not only maintain in-domain performance but also improve performance in some out-of-domain settings. For example, when using the delexicalized version of the FEVER dataset, the in-domain performance of a state-of-the-art neural network method dropped by only 1.12%, while its out-of-domain performance on the FNC dataset improved by 4.63%. We release the delexicalized versions of three common datasets used in natural language inference. These datasets are delexicalized using two methods: one which replaces the lexical entities in an overlap-aware manner, and a second which additionally incorporates semantic lifting of nouns and verbs to their WordNet hypernym synsets.
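
A minimal sketch of the overlap-aware variant, assuming named entities and their semantic tags are extracted upstream; the numbered placeholder scheme here is illustrative, not the exact identifiers used in the released datasets.

```python
def delexicalize(claim_ents, evidence_ents):
    """Overlap-aware delexicalization sketch (hypothetical tag scheme).

    Each input is a list of (entity_string, semantic_tag) pairs, e.g.
    ("Texas", "LOCATION"). Entities appearing in both sentences get the
    same numbered placeholder, so a model can still see lexical overlap
    between claim and evidence without seeing the words themselves.
    """
    table = {}
    def tag(ents):
        out = []
        for text, label in ents:
            key = (text.lower(), label)
            if key not in table:
                table[key] = f"{label}-C{len(table) + 1}"
            out.append(table[key])
        return out
    return tag(claim_ents), tag(evidence_ents)

claim = [("Texas", "LOCATION"), ("Mexico", "LOCATION")]
evidence = [("Texas", "LOCATION"), ("United States", "LOCATION")]
print(delexicalize(claim, evidence))
# (['LOCATION-C1', 'LOCATION-C2'], ['LOCATION-C1', 'LOCATION-C3'])
```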

Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering
Vikas Yadav | Steven Bethard | Mihai Surdeanu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Evidence retrieval is a critical stage of question answering (QA), necessary not only to improve performance, but also to explain the decisions of the QA method. We introduce a simple, fast, and unsupervised iterative evidence retrieval method, which relies on three ideas: (a) an unsupervised alignment approach to soft-align questions and answers with justification sentences using only GloVe embeddings, (b) an iterative process that reformulates queries focusing on terms that are not covered by existing justifications, which (c) stops when the terms in the given question and candidate answers are covered by the retrieved justifications. Despite its simplicity, our approach outperforms all the previous methods (including supervised methods) on the evidence selection task on two datasets: MultiRC and QASC. When these evidence sentences are fed into a RoBERTa answer classification component, we achieve state-of-the-art QA performance on these two datasets.
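
The loop below sketches ideas (a)-(c) under simplifying assumptions: `emb` stands in for GloVe vectors, sentences are token lists, and a term counts as covered once it soft-aligns with some word in the selected justifications.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def iterative_retrieval(query_terms, sentences, emb, thr=0.75, max_hops=4):
    """Sketch of unsupervised alignment-based iterative evidence retrieval.

    At each hop the query is reformulated to the terms not yet covered by
    the selected justifications; retrieval stops when every query term
    soft-aligns (cosine >= thr) with a word in the selected evidence.
    """
    selected = []
    uncovered = [t for t in query_terms if t in emb]
    for _ in range(max_hops):
        if not uncovered:
            break  # stopping criterion (c): all terms are covered
        remaining = [s for s in sentences if s not in selected]
        if not remaining:
            break
        # (a) soft-align each uncovered term to its best word per sentence
        def align(sent):
            return sum(max((cosine(emb[t], emb[w]) for w in sent if w in emb),
                           default=0.0) for t in uncovered)
        best = max(remaining, key=align)
        selected.append(best)
        # (b) reformulate the query: keep only still-uncovered terms
        uncovered = [t for t in uncovered
                     if max((cosine(emb[t], emb[w]) for w in best if w in emb),
                            default=0.0) < thr]
    return selected
```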

Exploring Interpretability in Event Extraction: Multitask Learning of a Neural Event Classifier and an Explanation Decoder
Zheng Tang | Gus Hahn-Powell | Mihai Surdeanu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop

We propose an interpretable approach for event extraction that mitigates the tension between generalization and interpretability by jointly training for the two goals. Our approach uses an encoder-decoder architecture, which jointly trains a classifier for event extraction, and a rule decoder that generates syntactico-semantic rules that explain the decisions of the event classifier. We evaluate the proposed approach on three biomedical events and show that the decoder generates interpretable rules that serve as accurate explanations for the event classifier’s decisions, and, importantly, that the joint training generally improves the performance of the event classifier. Lastly, we show that our approach can be used for semi-supervised learning, and that its performance improves when trained on automatically-labeled data generated by a rule-based system.

2019

Lightly-supervised Representation Learning with Global Interpretability
Andrew Zupon | Maria Alexeeva | Marco Valenzuela-Escárcega | Ajay Nagesh | Mihai Surdeanu
Proceedings of the Third Workshop on Structured Prediction for NLP

We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-the-art bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, proving that representation learning can be used to produce interpretable models with small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and in some cases outperform the original model.
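
A sketch of the global interpretability step under stated assumptions: given the learned custom embeddings for patterns and entities (here just vectors passed in), patterns are ranked by cosine proximity to each class's average entity embedding to form an editable decision list.

```python
import numpy as np

def decision_list(pattern_vecs, entity_vecs_by_class, top_k=10):
    """Rank patterns by cosine proximity to the average entity embedding
    of each class; the top patterns per class form a decision list that
    human experts can inspect and edit.

    pattern_vecs: {pattern: vector} custom pattern embeddings.
    entity_vecs_by_class: {label: [entity vectors in that class]}.
    """
    rules = []
    for label, vecs in entity_vecs_by_class.items():
        centroid = np.mean(vecs, axis=0)
        centroid = centroid / (np.linalg.norm(centroid) + 1e-8)
        scored = sorted(((float(v @ centroid) / (np.linalg.norm(v) + 1e-8), p)
                         for p, v in pattern_vecs.items()), reverse=True)
        rules.extend((label, p, round(s, 3)) for s, p in scored[:top_k])
    # e.g., [("PER", "@ENTITY was born in", 0.91), ...] (hypothetical pattern)
    return rules
```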

Semi-Supervised Teacher-Student Architecture for Relation Extraction
Fan Luo | Ajay Nagesh | Rebecca Sharp | Mihai Surdeanu
Proceedings of the Third Workshop on Structured Prediction for NLP

Generating a large amount of training data for information extraction (IE) is either costly (if annotations are created manually) or runs the risk of introducing noisy instances (if distant supervision is used). On the other hand, semi-supervised learning (SSL) is a cost-efficient solution to combat the lack of training data. In this paper, we adapt Mean Teacher (Tarvainen and Valpola, 2017), a denoising SSL framework, to extract semantic relations between pairs of entities. We explore the sweet spot of the amount of supervision required for good performance on this binary relation extraction task. Additionally, different syntax representations are incorporated into our models to enhance the learned representation of sentences. We evaluate our approach on the Google-IISc Distant Supervision (GDS) dataset, which removes the test data noise present in previous distant supervision datasets, making it a reliable evaluation benchmark (Jat et al., 2017). Our results show that the SSL Mean Teacher approach nears the performance of fully-supervised approaches even with only 10% of the labeled corpus. Further, the syntax-aware model outperforms syntax-free approaches across all levels of supervision.
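
The two updates that define Mean Teacher are compact enough to sketch. This is a schematic over flat parameter vectors, not the paper's relation extraction model, and the gradient arguments are assumed computed elsewhere; the consistency term is what lets unlabeled sentences contribute, by pushing the student toward the teacher's more stable predictions.

```python
import numpy as np

def mean_teacher_step(student_w, teacher_w, sup_grad, cons_grad,
                      lr=0.1, alpha=0.99, lam=1.0):
    """One Mean Teacher update, sketched over flat parameter vectors.

    sup_grad: gradient of the supervised loss on the labeled relations.
    cons_grad: gradient of the consistency loss that pulls the student's
    predictions on unlabeled sentences toward the teacher's predictions
    on the same sentences (under different noise).
    """
    student_w = student_w - lr * (sup_grad + lam * cons_grad)
    # The teacher is never trained directly: its weights are an
    # exponential moving average of successive student weights.
    teacher_w = alpha * teacher_w + (1.0 - alpha) * student_w
    return student_w, teacher_w

# Toy usage with stand-in gradients:
w_s, w_t = np.zeros(4), np.zeros(4)
w_s, w_t = mean_teacher_step(w_s, w_t, np.ones(4), 0.1 * np.ones(4))
```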

Understanding the Polarity of Events in the Biomedical Literature: Deep Learning vs. Linguistically-informed Methods
Enrique Noriega-Atala | Zhengzhong Liang | John Bachman | Clayton Morrison | Mihai Surdeanu
Proceedings of the Workshop on Extracting Structured Knowledge from Scientific Publications

An important task in the machine reading of biochemical events expressed in biomedical texts is correctly reading the polarity, i.e., attributing whether the biochemical event is a promotion or an inhibition. Here we present a novel dataset for studying polarity attribution accuracy. We use this dataset to train and evaluate several deep learning models for polarity identification, and compare these to a linguistically-informed model. The best performing deep learning architecture achieves 0.968 average F1 performance in a five-fold cross-validation study, a considerable improvement over the linguistically informed model average F1 of 0.862.

Alignment over Heterogeneous Embeddings for Question Answering
Vikas Yadav | Steven Bethard | Mihai Surdeanu
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

We propose a simple, fast, and mostly-unsupervised approach for non-factoid question answering (QA) called Alignment over Heterogeneous Embeddings (AHE). AHE simply aligns each word in the question and candidate answer with the most similar word in the retrieved supporting paragraph, and weighs each alignment score with the inverse document frequency of the corresponding question/answer term. AHE’s similarity function operates over embeddings that model the underlying text at different levels of abstraction: character (FLAIR), word (BERT and GloVe), and sentence (InferSent), where the latter is the only supervised component in the proposed approach. Despite its simplicity and lack of supervision, AHE obtains a new state-of-the-art performance on the “Easy” partition of the AI2 Reasoning Challenge (ARC) dataset (64.6% accuracy), top-two performance on the “Challenge” partition of ARC (34.1%), and top-three performance on the WikiQA dataset (74.08% MRR), outperforming many other complex, supervised approaches. Our error analysis indicates that alignments over character, word, and sentence embeddings capture substantially different semantic information. We exploit this with a simple meta-classifier that learns how much to trust the predictions over each representation, which further improves the performance of unsupervised AHE.
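
The core of AHE fits in a few lines. This sketch shows the alignment score for a single embedding level (word vectors here), with `emb` and `idf` assumed precomputed, and omits the combination across character/word/sentence representations and the meta-classifier.

```python
import numpy as np

def ahe_score(question, answer, passage, emb, idf):
    """Sketch of the AHE alignment score for one embedding level.

    Each question/answer term aligns to its most similar passage word
    under `emb`, and the alignment is weighted by the term's inverse
    document frequency, so rare terms dominate the score.
    """
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
    score = 0.0
    for t in question + answer:
        if t not in emb:
            continue
        best = max((cos(emb[t], emb[w]) for w in passage if w in emb),
                   default=0.0)
        score += idf.get(t, 0.0) * best
    return score
```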

Enabling Search and Collaborative Assembly of Causal Interactions Extracted from Multilingual and Multi-domain Free Text
George C. G. Barbosa | Zechy Wong | Gus Hahn-Powell | Dane Bell | Rebecca Sharp | Marco A. Valenzuela-Escárcega | Mihai Surdeanu
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)

Many of the most pressing current research problems (e.g., public health, food security, or climate change) require multi-disciplinary collaborations. In order to facilitate this process, we propose a system that incorporates multi-domain extractions of causal interactions into a single searchable knowledge graph. Our system enables users to search iteratively over direct and indirect connections in this knowledge graph, and collaboratively build causal models in real time. To enable the aggregation of causal information from multiple languages, we extend an open-domain machine reader to Portuguese. The new Portuguese reader extracts over 600 thousand causal statements from 120 thousand Portuguese publications with a precision of 62%, which demonstrates the value of mining multilingual scientific information.

Eidos, INDRA, & Delphi: From Free Text to Executable Causal Models
Rebecca Sharp | Adarsh Pyarelal | Benjamin Gyori | Keith Alcock | Egoitz Laparra | Marco A. Valenzuela-Escárcega | Ajay Nagesh | Vikas Yadav | John Bachman | Zheng Tang | Heather Lent | Fan Luo | Mithun Paul | Steven Bethard | Kobus Barnard | Clayton Morrison | Mihai Surdeanu
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)

Building causal models of complicated phenomena such as food insecurity is currently a slow and labor-intensive manual process. In this paper, we introduce an approach that builds executable probabilistic models from raw, free text. The proposed approach is implemented through three systems: Eidos, INDRA, and Delphi. Eidos is an open-domain machine reading system designed to extract causal relations from natural language. It is rule-based, allowing for rapid domain transfer, customizability, and interpretability. INDRA aggregates multiple sources of causal information and performs assembly to create a coherent knowledge base and assess its reliability. This assembled knowledge serves as the starting point for modeling. Delphi is a modeling framework that assembles quantified causal fragments and their contexts into executable probabilistic models that respect the semantics of the original text, and can be used to support decision making.

Exploration of Noise Strategies in Semi-supervised Named Entity Classification
Pooja Lakshmi Narayan | Ajay Nagesh | Mihai Surdeanu
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)

Noise is inherent in real world datasets and modeling noise is critical during training as it is effective in regularization. Recently, novel semi-supervised deep learning techniques have demonstrated tremendous potential when learning with very limited labeled training data in image processing tasks. A critical aspect of these semi-supervised learning techniques is augmenting the input or the network with noise to be able to learn robust models. While modeling noise is relatively straightforward in continuous domains such as image classification, it is not immediately apparent how noise can be modeled in discrete domains such as language. Our work aims to address this gap by exploring different noise strategies for the semi-supervised named entity classification task, including statistical methods such as adding Gaussian noise to input embeddings, and linguistically-inspired ones such as dropping words and replacing words with their synonyms. We compare their performance on two benchmark datasets (OntoNotes and CoNLL) for named entity classification. Our results indicate that noise strategies that are linguistically informed perform at least as well as statistical approaches, while being simpler and requiring minimal tuning.
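
The contrast between the statistical and linguistically-inspired noise strategies is easy to illustrate; here is a sketch under the assumption that inputs are either embedding matrices or token lists, with the synonym table (e.g., from WordNet) supplied by the caller.

```python
import random
import numpy as np

def gaussian_noise(embs, sigma=0.1):
    """Statistical noise: perturb input embeddings (a numpy array of
    shape [tokens, dims]) with Gaussian noise."""
    return embs + np.random.normal(0.0, sigma, embs.shape)

def word_dropout(tokens, p=0.1):
    """Linguistic noise: randomly drop words from the input sentence."""
    kept = [t for t in tokens if random.random() > p]
    return kept or tokens  # never return an empty sentence

def synonym_noise(tokens, synonyms, p=0.1):
    """Linguistic noise: replace words with a synonym. `synonyms` maps a
    word to a list of alternatives (assumed provided, e.g. from WordNet)."""
    return [random.choice(synonyms[t])
            if t in synonyms and random.random() < p else t
            for t in tokens]
```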

University of Arizona at SemEval-2019 Task 12: Deep-Affix Named Entity Recognition of Geolocation Entities
Vikas Yadav | Egoitz Laparra | Ti-Tai Wang | Mihai Surdeanu | Steven Bethard
Proceedings of the 13th International Workshop on Semantic Evaluation

We present the Named Entity Recognition (NER) and disambiguation model used by the University of Arizona team (UArizona) for SemEval 2019 task 12. We achieved fourth place on tasks 1 and 3. For task 1, we implemented a deep-affix based LSTM-CRF NER model, which uses only character, word, prefix, and suffix information to identify geolocation entities. Despite using just the training data provided by the task organizers and no lexicon features, we achieved a 78.85% strict micro F-score on task 1. For task 3, we used unsupervised population heuristics and achieved a 52.99% strict micro-F1 score.

Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering
Vikas Yadav | Steven Bethard | Mihai Surdeanu
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We propose an unsupervised strategy for the selection of justification sentences for multi-hop question answering (QA) that (a) maximizes the relevance of the selected sentences, (b) minimizes the overlap between the selected facts, and (c) maximizes the coverage of both question and answer. This unsupervised sentence selection can be coupled with any supervised QA model. We show that the sentences selected by our method improve the performance of a state-of-the-art supervised QA model on two multi-hop QA datasets: AI2’s Reasoning Challenge (ARC) and Multi-Sentence Reading Comprehension (MultiRC). We obtain new state-of-the-art performance on both datasets among systems that do not use external resources for training the QA system: 56.82% F1 on ARC (41.24% on Challenge and 64.49% on Easy) and 26.1% EM0 on MultiRC. Our justification sentences have higher quality than the justifications selected by a strong information retrieval baseline, e.g., by 5.4% F1 in MultiRC. We also show that our unsupervised selection of justification sentences is more stable across domains than a state-of-the-art supervised sentence selection method.
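
A sketch of the three criteria as one group score, under simplifying assumptions: each candidate justification is a set of non-stopword terms paired with a precomputed relevance score, and the best group is the argmax of this score over enumerated candidate sets (the paper's exact formulation differs in detail).

```python
def justification_score(group, question, answer):
    """Score a candidate set of justification sentences by (a) relevance,
    (b) low overlap among the selected facts, and (c) joint coverage of
    the question and answer terms.

    group: list of (terms, relevance) pairs; question/answer: term sets.
    """
    relevance = sum(r for _, r in group)
    # pairwise overlap between the selected facts (to be minimized)
    overlap = sum(len(a & b) for i, (a, _) in enumerate(group)
                  for (b, _) in group[i + 1:])
    qa = question | answer
    covered = set().union(*(t for t, _ in group)) if group else set()
    coverage = len(covered & qa) / max(1, len(qa))
    return relevance * coverage / (1.0 + overlap)
```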

On the Importance of Delexicalization for Fact Verification
Sandeep Suntwal | Mithun Paul | Rebecca Sharp | Mihai Surdeanu
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

While neural networks produce state-of-the-art performance in many NLP tasks, they generally learn from lexical information, which may transfer poorly between domains. Here, we investigate the importance that a model assigns to various aspects of data while learning and making predictions, specifically, in a recognizing textual entailment (RTE) task. By inspecting the attention weights assigned by the model, we confirm that most of the weights are assigned to noun phrases. To mitigate this dependence on lexicalized information, we experiment with two strategies of masking. First, we replace named entities with their corresponding semantic tags along with a unique identifier to indicate lexical overlap between claim and evidence. Second, we similarly replace other word classes in the sentence (nouns, verbs, adjectives, and adverbs) with their super sense tags (Ciaramita and Johnson, 2003). Our results show that, while performance on the in-domain dataset remains on par with that of the model trained on fully lexicalized data, it improves considerably when tested out of domain. For example, the performance of a state-of-the-art RTE model trained on the masked Fake News Challenge (Pomerleau and Rao, 2017) data and evaluated on Fact Extraction and Verification (Thorne et al., 2018) data improved by over 10% in accuracy score compared to the fully lexicalized model.

Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)
Dmitry Ustalov | Swapna Somasundaran | Peter Jansen | Goran Glavaš | Martin Riedl | Mihai Surdeanu | Michalis Vazirgiannis
Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)

What does the language of foods say about us?
Hoang Van | Ahmad Musa | Hang Chen | Stephen Kobourov | Mihai Surdeanu
Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)

In this work we investigate the signal contained in the language of food on social media. We experiment with a dataset of 24 million food-related tweets, and make several observations. First, the language of food has predictive power. We are able to predict if states in the United States (US) are above the median rates for type 2 diabetes mellitus (T2DM), income, poverty, and education, outperforming previous work by 4–18%. Second, we investigate the effect of socioeconomic factors (income, poverty, and education) on predicting state-level T2DM rates. Socioeconomic factors do improve T2DM prediction, with the greatest improvement coming from poverty information (6%), but, importantly, the language of food adds distinct information that is not captured by socioeconomics. Third, we analyze how the language of food has changed over a five-year period (2013–2017), which is indicative of the shift in eating habits in the US during that period. We find several food trends, and that the language of food is used differently by different groups such as different genders. Last, we provide an online visualization tool for real-time queries and semantic analysis.

2018

Visual Supervision in Bootstrapped Information Extraction
Matthew Berger | Ajay Nagesh | Joshua Levine | Mihai Surdeanu | Helen Zhang
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We challenge a common assumption in active learning, that a list-based interface populated by informative samples provides for efficient and effective data annotation. We show how a 2D scatterplot populated with diverse and representative samples can yield improved models given the same time budget. We consider this for bootstrapping-based information extraction, in particular named entity classification, where human and machine jointly label data. To enable effective data annotation in a scatterplot, we have developed an embedding-based bootstrapping model that learns the distributional similarity of entities through the patterns that match them in a large data corpus, while being discriminative with respect to human-labeled and machine-promoted entities. We conducted a user study to assess the effectiveness of these different interfaces, and analyze bootstrapping performance in terms of human labeling accuracy, label quantity, and labeling consensus across multiple users. Our results suggest that supervision acquired from the scatterplot interface, despite being noisier, yields improvements in classification performance compared with the list interface, due to a larger quantity of supervision acquired.

An Exploration of Three Lightly-supervised Representation Learning Approaches for Named Entity Classification
Ajay Nagesh | Mihai Surdeanu
Proceedings of the 27th International Conference on Computational Linguistics

Several semi-supervised representation learning methods have been proposed recently that mitigate the drawbacks of traditional bootstrapping: they reduce the amount of semantic drift introduced by iterative approaches through one-shot learning; others address the sparsity of data through the learning of custom, dense representation for the information modeled. In this work, we are the first to adapt three of these methods, most of which have been originally proposed for image processing, to an information extraction task, specifically, named entity classification. Further, we perform a rigorous comparative analysis on two distinct datasets. Our analysis yields several important observations. First, all representation learning methods outperform state-of-the-art semi-supervised methods that do not rely on representation learning. To the best of our knowledge, we report the latest state-of-the-art results on the semi-supervised named entity classification task. Second, one-shot learning methods clearly outperform iterative representation learning approaches. Lastly, one of the best performers relies on the mean teacher framework (Tarvainen and Valpola, 2017), a simple teacher/student approach that is independent of the underlying task-specific model.

Keep Your Bearings: Lightly-Supervised Information Extraction with Ladder Networks That Avoids Semantic Drift
Ajay Nagesh | Mihai Surdeanu
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

We propose a novel approach to semi-supervised learning for information extraction that uses ladder networks (Rasmus et al., 2015). In particular, we focus on the task of named entity classification, defined as identifying the correct label (e.g., person or organization name) of an entity mention in a given context. Our approach is simple, efficient, and has the benefit of being robust to semantic drift, a dominant problem in most semi-supervised learning systems. We empirically demonstrate the superior performance of our system compared to the state of the art on two standard datasets for named entity classification. We obtain between 62% and 200% improvement over the state-of-the-art baseline on these two datasets.

Scientific Discovery as Link Prediction in Influence and Citation Graphs
Fan Luo | Marco A. Valenzuela-Escárcega | Gus Hahn-Powell | Mihai Surdeanu
Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-12)

We introduce a machine learning approach for the identification of “white spaces” in scientific knowledge. Our approach addresses this task as link prediction over a graph that contains over 2M influence statements such as “CTCF activates FOXA1”, which were automatically extracted using open-domain machine reading. We model this prediction task using graph-based features extracted from the above influence graph, as well as from a citation graph that captures scientific communities. We evaluated the proposed approach through backtesting. Although the data is heavily unbalanced (50 times more negative examples than positives), our approach predicts which influence links will be discovered in the “near future” with an F1 score of 27 points, and a mean average precision of 68%.

A mostly unlexicalized model for recognizing textual entailment
Mithun Paul | Rebecca Sharp | Mihai Surdeanu
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)

Many approaches to automatically recognizing entailment relations have employed classifiers over hand-engineered lexicalized features, or deep learning models that implicitly capture lexicalization through word embeddings. This reliance on lexicalization may complicate the adaptation of these tools between domains. For example, such a system trained in the news domain may learn that a sentence like “Palestinians recognize Texas as part of Mexico” tends to be unsupported, but this fact (and its corresponding lexicalized cues) has no value in, say, a scientific domain. To mitigate this dependence on lexicalized information, in this paper we propose a model that reads two sentences, from any given domain, to determine entailment without using lexicalized features. Instead, our model relies on features that are either unlexicalized or domain independent, such as the proportion of negated verbs, antonyms, or noun overlap. In its current implementation, this model does not perform well on the FEVER dataset, for two reasons. First, for the information retrieval portion of the task we used the provided baseline system, since this was not the aim of our project. Second, this is work in progress and we are still in the process of identifying more features and gradually increasing the accuracy of our model. In the end, we hope to build a generic end-to-end classifier, which can be used in a domain outside the one in which it was trained, with no or minimal re-training.
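
A sketch of features in the spirit of those listed (negation proportion and noun overlap), assuming POS-tagged input as (word, tag) pairs; the antonym feature would additionally require a lexical resource such as WordNet.

```python
NEGATIONS = {"not", "no", "never", "n't", "without"}

def unlexicalized_features(claim, evidence):
    """claim/evidence: lists of (word, pos_tag) pairs, tagged upstream.
    Returns domain-independent features: noun overlap between the two
    sentences and the proportion of negation markers in each."""
    def nouns(s):
        return {w.lower() for w, p in s if p.startswith("NN")}
    c_nouns, e_nouns = nouns(claim), nouns(evidence)
    noun_overlap = len(c_nouns & e_nouns) / max(1, len(c_nouns | e_nouns))
    neg_claim = sum(w.lower() in NEGATIONS for w, _ in claim) / max(1, len(claim))
    neg_evid = sum(w.lower() in NEGATIONS for w, _ in evidence) / max(1, len(evidence))
    return [noun_overlap, neg_claim, neg_evid]
```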

Detecting Diabetes Risk from Social Media Activity
Dane Bell | Egoitz Laparra | Aditya Kousik | Terron Ishihara | Mihai Surdeanu | Stephen Kobourov
Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis

This work explores the detection of individuals’ risk of type 2 diabetes mellitus (T2DM) directly from their social media (Twitter) activity. Our approach extends a deep learning architecture with several contributions: following previous observations that language use differs by gender, it captures and uses gender information through domain adaptation; it captures recency of posts under the hypothesis that more recent posts are more representative of an individual’s current risk status; and, lastly, it demonstrates that in this scenario where activity factors are sparsely represented in the data, a bag-of-word neural network model using custom dictionaries of food and activity words performs better than other neural sequence models. Our best model, which incorporates all these contributions, achieves a risk-detection F1 of 41.9, considerably higher than the baseline rate (36.9).

Bootstrapping Polar-Opposite Emotion Dimensions from Online Reviews
Luwen Huangfu | Mihai Surdeanu
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Text Annotation Graphs: Annotating Complex Natural Language Phenomena
Angus Forbes | Kristine Lee | Gus Hahn-Powell | Marco A. Valenzuela-Escárcega | Mihai Surdeanu
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Grounding Gradable Adjectives through Crowdsourcing
Rebecca Sharp | Mithun Paul | Ajay Nagesh | Dane Bell | Mihai Surdeanu
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2017

Framing QA as Building and Ranking Intersentence Answer Justifications
Peter Jansen | Rebecca Sharp | Mihai Surdeanu | Peter Clark
Computational Linguistics, Volume 43, Issue 2 - June 2017

We propose a question answering (QA) approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct. Our method first identifies the actual information needed in a question using psycholinguistic concreteness norms, then uses this information need to construct answer justifications by aggregating multiple sentences from different knowledge bases using syntactic and lexical information. We then jointly rank answers and their justifications using a reranking perceptron that treats justification quality as a latent variable. We evaluate our method on 1,000 multiple-choice questions from elementary school science exams, and empirically demonstrate that it performs better than several strong baselines, including neural network approaches. Our best configuration answers 44% of the questions correctly, where the top justifications for 57% of these correct answers contain a compelling human-readable justification that explains the inference required to arrive at the correct answer. We include a detailed characterization of the justification quality for both our method and a strong baseline, and show that information aggregation is key to addressing the information need in complex questions.

Learning what to read: Focused machine reading
Enrique Noriega-Atala | Marco A. Valenzuela-Escárcega | Clayton Morrison | Mihai Surdeanu
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Recent efforts in bioinformatics have achieved tremendous progress in the machine reading of biomedical literature, and the assembly of the extracted biochemical interactions into large-scale models such as protein signaling pathways. However, batch machine reading of literature at today’s scale (PubMed alone indexes over 1 million papers per year) is infeasible due to both cost and processing overhead. In this work, we introduce a focused reading approach to guide the machine reading of biomedical literature towards what literature should be read to answer a biomedical query as efficiently as possible. We introduce a family of algorithms for focused reading, including an intuitive, strong baseline, and a second approach which uses a reinforcement learning (RL) framework that learns when to explore (widen the search) or exploit (narrow it). We demonstrate that the RL approach is capable of answering more queries than the baseline, while being more efficient, i.e., reading fewer documents.

Swanson linking revisited: Accelerating literature-based discovery across domains using a conceptual influence graph
Gus Hahn-Powell | Marco A. Valenzuela-Escárcega | Mihai Surdeanu
Proceedings of ACL 2017, System Demonstrations

Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification
Rebecca Sharp | Mihai Surdeanu | Peter Jansen | Marco A. Valenzuela-Escárcega | Peter Clark | Michael Hammond
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)

For many applications of question answering (QA), being able to explain why a given model chose an answer is critical. However, the lack of labeled data for answer justifications makes learning this difficult and expensive. Here we propose an approach that uses answer ranking as distant supervision for learning how to select informative justifications, where justifications serve as inferential connections between the question and the correct answer while often containing little lexical overlap with either. We propose a neural network architecture for QA that reranks answer justifications as an intermediate (and human-interpretable) step in answer selection. Our approach is informed by a set of features designed to combine both learned representations and explicit features to capture the connection between questions, answers, and answer justifications. We show that with this end-to-end approach we are able to significantly improve upon a strong IR baseline in both justification ranking (+9% rated highly relevant) and answer selection (+6% P@1).

2016

SnapToGrid: From Statistical to Interpretable Models for Biomedical Information Extraction
Marco A. Valenzuela-Escárcega | Gus Hahn-Powell | Dane Bell | Mihai Surdeanu
Proceedings of the 15th Workshop on Biomedical Natural Language Processing

This before That: Causal Precedence in the Biomedical Domain
Gus Hahn-Powell | Dane Bell | Marco A. Valenzuela-Escárcega | Mihai Surdeanu
Proceedings of the 15th Workshop on Biomedical Natural Language Processing

What’s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams
Peter Jansen | Niranjan Balasubramanian | Mihai Surdeanu | Peter Clark
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

QA systems have been making steady advances in the challenging elementary science exam domain. In this work, we develop an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges. In particular, we model the requirements based on appropriate sources of evidence to be used for the QA task. We create requirements by first identifying suitable sentences in a knowledge base that support the correct answer, then use these to build explanations, filling in any necessary missing information. These explanations are used to create a fine-grained categorization of the requirements. Using these requirements, we compare a retrieval and an inference solver on 212 questions. The analysis validates the gains of the inference solver, demonstrating that it answers more questions requiring complex inference, while also providing insights into the relative strengths of the solvers and knowledge sources. We release the annotated questions and explanations as a resource with broad utility for science exam QA, including determining knowledge base construction targets, as well as supporting information aggregation in automated inference.

Creating Causal Embeddings for Question Answering with Minimal Supervision
Rebecca Sharp | Mihai Surdeanu | Peter Jansen | Peter Clark | Michael Hammond
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

Sieve-based Coreference Resolution in the Biomedical Domain
Dane Bell | Gus Hahn-Powell | Marco A. Valenzuela-Escárcega | Mihai Surdeanu
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

We describe challenges and advantages unique to coreference resolution in the biomedical domain, and a sieve-based architecture that leverages domain knowledge for both entity and event coreference resolution. Domain-general coreference resolution algorithms perform poorly on biomedical documents, because the cues they rely on such as gender are largely absent in this domain, and because they do not encode domain-specific knowledge such as the number and type of participants required in chemical reactions. Moreover, it is difficult to directly encode this knowledge into most coreference resolution algorithms because they are not rule-based. Our rule-based architecture uses sequentially applied hand-designed “sieves”, with the output of each sieve informing and constraining subsequent sieves. This architecture provides a 3.2% increase in throughput to our Reach event extraction system with precision parallel to that of the stricter system that relies solely on syntactic patterns for extraction.
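
The sieve architecture itself is domain-general and easy to sketch: ordered, precision-ranked rules where earlier decisions constrain later ones. The two example sieves below are hypothetical stand-ins, not the system's actual rules.

```python
def run_sieves(mentions, sieves):
    """Sieves are ordered from most to least precise; each is a function
    (i, mentions, links) -> antecedent index j < i, or None. A mention
    linked by an earlier sieve is skipped by later ones, so high-precision
    decisions constrain lower-precision ones."""
    links = {}
    for sieve in sieves:
        for i in range(len(mentions)):
            if i not in links:
                j = sieve(i, mentions, links)
                if j is not None:
                    links[i] = j
    return links

# Two hypothetical sieves: exact string match, then shared head word.
def exact_match(i, ms, links):
    return next((j for j in range(i - 1, -1, -1) if ms[j] == ms[i]), None)

def head_match(i, ms, links):
    return next((j for j in range(i - 1, -1, -1)
                 if ms[j].split()[-1] == ms[i].split()[-1]), None)

mentions = ["the Raf protein", "this protein", "the Raf protein"]
print(run_sieves(mentions, [exact_match, head_match]))
# {2: 0, 1: 0} -- exact match links mention 2; head match then links 1
```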

Odin’s Runes: A Rule Language for Information Extraction
Marco A. Valenzuela-Escárcega | Gus Hahn-Powell | Mihai Surdeanu
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Odin is an information extraction framework that applies cascades of finite-state automata over both surface text and syntactic dependency graphs. Support for syntactic patterns allows us to concisely define relations that are otherwise difficult to express in languages such as the Common Pattern Specification Language (CPSL), which are currently limited to shallow linguistic features. The interaction of lexical and syntactic automata provides robustness and flexibility when writing extraction rules. This paper describes Odin’s declarative language for writing these cascaded automata.

Towards Using Social Media to Identify Individuals at Risk for Preventable Chronic Illness
Dane Bell | Daniel Fried | Luwen Huangfu | Mihai Surdeanu | Stephen Kobourov
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

We describe a strategy for the acquisition of training data necessary to build a social-media-driven early detection system for individuals at risk for (preventable) type 2 diabetes mellitus (T2DM). The strategy uses a game-like quiz with data and questions acquired semi-automatically from Twitter. The questions are designed to inspire participant engagement and collect relevant data to train a public-health model applied to individuals. Prior systems designed to use social media such as Twitter to predict obesity (a risk factor for T2DM) operate on entire communities such as states, counties, or cities, based on statistics gathered by government agencies. Because there is considerable variation among individuals within these groups, training data on the individual level would be more effective, but this data is difficult to acquire. The approach proposed here aims to address this issue. Our strategy has two steps. First, we trained a random forest classifier on data gathered from (public) Twitter statuses and state-level statistics with state-of-the-art accuracy. We then converted this classifier into a 20-questions-style quiz and made it available online. In doing so, we achieved high engagement with individuals that took the quiz, while also building a training set of voluntarily supplied individual-level data for future classification.

2015

Higher-order Lexical Semantic Models for Non-factoid Answer Reranking
Daniel Fried | Peter Jansen | Gustave Hahn-Powell | Mihai Surdeanu | Peter Clark
Transactions of the Association for Computational Linguistics, Volume 3

Lexical semantic models provide robust performance for question answering, but, in general, can only capitalize on direct evidence seen during training. For example, monolingual alignment models acquire term alignment probabilities from semi-structured data such as question-answer pairs; neural network language models learn term embeddings from unstructured text. All this knowledge is then used to estimate the semantic similarity between question and answer candidates. We introduce a higher-order formalism that allows all these lexical semantic models to chain direct evidence to construct indirect associations between question and answer texts, by casting the task as the traversal of graphs that encode direct term associations. Using a corpus of 10,000 questions from Yahoo! Answers, we experimentally demonstrate that higher-order methods are broadly applicable to alignment and language models, across both word and syntactic representations. We show that an important criterion for success is controlling for the semantic drift that accumulates during graph traversal. All in all, the proposed higher-order approach improves five out of the six lexical semantic models investigated, with relative gains of up to +13% over their first-order variants.
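
A sketch of the higher-order idea, assuming the direct evidence is packaged as a row-stochastic association matrix over terms; chaining is matrix multiplication, and geometric damping of longer walks is one simple stand-in for the paper's control of semantic drift.

```python
import numpy as np

def higher_order(A, max_hops=3, damping=0.5):
    """Chain direct term associations into indirect ones.

    A is a row-stochastic first-order association matrix of floats
    (e.g., alignment probabilities P(target | source)). A k-hop walk
    A^k links terms with no direct co-occurrence evidence; down-weighting
    longer walks limits the drift that accumulates during traversal.
    """
    M, Ak, w, total = np.zeros_like(A), A.copy(), 1.0, 0.0
    for _ in range(max_hops):
        M += w * Ak
        total += w
        Ak = Ak @ A          # one more hop in the association graph
        w *= damping
    return M / total
```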

A Domain-independent Rule-based Framework for Event Extraction
Marco A. Valenzuela-Escárcega | Gus Hahn-Powell | Mihai Surdeanu | Thomas Hicks
Proceedings of ACL-IJCNLP 2015 System Demonstrations

Spinning Straw into Gold: Using Free Text to Train Monolingual Alignment Models for Non-factoid Question Answering
Rebecca Sharp | Peter Jansen | Mihai Surdeanu | Peter Clark
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Diamonds in the Rough: Event Extraction from Imperfect Microblog Data
Ander Intxaurrondo | Eneko Agirre | Oier Lopez de Lacalle | Mihai Surdeanu
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Two Practical Rhetorical Structure Theory Parsers
Mihai Surdeanu | Tom Hicks | Marco Antonio Valenzuela-Escárcega
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations

2014

On the Importance of Text Analysis for Stock Price Prediction
Heeyoung Lee | Mihai Surdeanu | Bill MacCartney | Dan Jurafsky
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

We investigate the importance of text analysis for stock price prediction. In particular, we introduce a system that forecasts companies’ stock price changes (UP, DOWN, STAY) in response to financial events reported in 8-K documents. Our results indicate that using text boosts prediction accuracy by over 10% (relative) over a strong baseline that incorporates many financially-rooted features. This impact is most important in the short term (i.e., the next day after the financial event) but persists for up to five days.

Event Extraction Using Distant Supervision
Kevin Reschke | Martin Jankowiak | Mihai Surdeanu | Christopher Manning | Daniel Jurafsky
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Distant supervision is a successful paradigm that gathers training data for information extraction systems by automatically aligning vast databases of facts with text. Previous work has demonstrated its usefulness for the extraction of binary relations such as a person’s employer or a film’s director. Here, we extend the distant supervision approach to template-based event extraction, focusing on the extraction of passenger counts, aircraft types, and other facts concerning airplane crash events. We present a new publicly available dataset and event extraction task in the plane crash domain based on Wikipedia infoboxes and newswire text. Using this dataset, we conduct a preliminary evaluation of four distantly supervised extraction models which assign named entity mentions in text to entries in the event template. Our results indicate that joint inference over sequences of candidate entity mentions is beneficial. Furthermore, we demonstrate that the Searn algorithm outperforms a linear-chain CRF and strong baselines with local inference.

Discourse Complements Lexical Semantics for Non-factoid Answer Reranking
Peter Jansen | Mihai Surdeanu | Peter Clark
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The Stanford CoreNLP Natural Language Processing Toolkit
Christopher Manning | Mihai Surdeanu | John Bauer | Jenny Finkel | Steven Bethard | David McClosky
Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations

Extracting Latent Attributes from Video Scenes Using Text as Background Knowledge
Anh Tran | Mihai Surdeanu | Paul Cohen
Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014)

2013

Selectional Preferences for Semantic Role Classification
Beñat Zapirain | Eneko Agirre | Lluís Màrquez | Mihai Surdeanu
Computational Linguistics, Volume 39, Issue 3 - September 2013

Deterministic Coreference Resolution Based on Entity-Centric, Precision-Ranked Rules
Heeyoung Lee | Angel Chang | Yves Peirsman | Nathanael Chambers | Mihai Surdeanu | Dan Jurafsky
Computational Linguistics, Volume 39, Issue 4 - December 2013

2012

Multi-instance Multi-label Learning for Relation Extraction
Mihai Surdeanu | Julie Tibshirani | Ramesh Nallapati | Christopher D. Manning
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

Joint Entity and Event Coreference Resolution across Documents
Heeyoung Lee | Marta Recasens | Angel Chang | Mihai Surdeanu | Dan Jurafsky
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

Event Extraction as Dependency Parsing
David McClosky | Mihai Surdeanu | Christopher Manning
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Learning to Rank Answers to Non-Factoid Questions from Web Collections
Mihai Surdeanu | Massimiliano Ciaramita | Hugo Zaragoza
Computational Linguistics, Volume 37, Issue 2 - June 2011

Customizing an Information Extraction System to a New Domain
Mihai Surdeanu | David McClosky | Mason Smith | Andrey Gusev | Christopher Manning
Proceedings of the ACL 2011 Workshop on Relational Models of Semantics

Event Extraction as Dependency Parsing for BioNLP 2011
David McClosky | Mihai Surdeanu | Christopher Manning
Proceedings of BioNLP Shared Task 2011 Workshop

Model Combination for Event Extraction in BioNLP 2011
Sebastian Riedel | David McClosky | Mihai Surdeanu | Andrew McCallum | Christopher D. Manning
Proceedings of BioNLP Shared Task 2011 Workshop

Stanford’s Multi-Pass Sieve Coreference Resolution System at the CoNLL-2011 Shared Task
Heeyoung Lee | Yves Peirsman | Angel Chang | Nathanael Chambers | Mihai Surdeanu | Dan Jurafsky
Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task

2010

A Multi-Pass Sieve for Coreference Resolution
Karthik Raghunathan | Heeyoung Lee | Sudarshan Rangarajan | Nathanael Chambers | Mihai Surdeanu | Dan Jurafsky | Christopher Manning
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Improving Semantic Role Classification with Selectional Preferences
Beñat Zapirain | Eneko Agirre | Lluís Màrquez | Mihai Surdeanu
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Ensemble Models for Dependency Parsing: Cheap and Good?
Mihai Surdeanu | Christopher D. Manning
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

2009

Company-Oriented Extractive Summarization of Financial News
Katja Filippova | Mihai Surdeanu | Massimiliano Ciaramita | Hugo Zaragoza
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)

The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages
Jan Hajič | Massimiliano Ciaramita | Richard Johansson | Daisuke Kawahara | Maria Antònia Martí | Lluís Màrquez | Adam Meyers | Joakim Nivre | Sebastian Padó | Jan Štěpánek | Pavel Straňák | Mihai Surdeanu | Nianwen Xue | Yi Zhang
Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task

An Analysis of Bootstrapping for the Recognition of Temporal Expressions
Jordi Poveda | Mihai Surdeanu | Jordi Turmo
Proceedings of the NAACL HLT 2009 Workshop on Semi-supervised Learning for Natural Language Processing

2008

The CoNLL 2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies
Mihai Surdeanu | Richard Johansson | Adam Meyers | Lluís Màrquez | Joakim Nivre
CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning

DeSRL: A Linear-Time Semantic Role Labeling System
Massimiliano Ciaramita | Giuseppe Attardi | Felice Dell’Orletta | Mihai Surdeanu
CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning

Learning to Rank Answers on Large Online QA Collections
Mihai Surdeanu | Massimiliano Ciaramita | Hugo Zaragoza
Proceedings of ACL-08: HLT

2007

UPC: Experiments with Joint Learning within SemEval Task 9
Lluís Màrquez | Lluís Padró | Mihai Surdeanu | Luis Villarejo
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

2006

A Hybrid Approach for the Acquisition of Information Extraction Patterns
Mihai Surdeanu | Jordi Turmo | Alicia Ageno
Proceedings of the Workshop on Adaptive Text Extraction and Mining (ATEM 2006)

Projective Dependency Parsing with Perceptron
Xavier Carreras | Mihai Surdeanu | Lluís Màrquez
Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)

2005

Semantic Role Labeling Using Complete Syntactic Analysis
Mihai Surdeanu | Jordi Turmo
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)

A Robust Combination Strategy for Semantic Role Labeling
Lluís Màrquez | Mihai Surdeanu | Pere Comas | Jordi Turmo
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2003

Using Predicate-Argument Structures for Information Extraction
Mihai Surdeanu | Sanda Harabagiu | John Williams | Paul Aarseth
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

2002

Performance Issues and Error Analysis in an Open-Domain Question Answering System
Dan Moldovan | Marius Pasca | Sanda Harabagiu | Mihai Surdeanu
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

2001

The Role of Lexico-Semantic Feedback in Open-Domain Textual Question-Answering
Sanda Harabagiu | Dan Moldovan | Marius Pasca | Rada Mihalcea | Mihai Surdeanu | Razvan Bunsecu | Roxana Girju | Vasile Rus | Paul Morarescu
Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics
