Shumin Wu


Can Selectional Preferences Help Automatic Semantic Role Labeling?
Shumin Wu | Martha Palmer
Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics

Improving Chinese-English PropBank Alignment
Shumin Wu | Martha Palmer
Proceedings of the Ninth Workshop on Syntax, Semantics and Structure in Statistical Translation


Focusing Annotation for Semantic Role Labeling
Daniel Peterson | Martha Palmer | Shumin Wu
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Annotation of data is a time-consuming process, but necessary for many state-of-the-art solutions to NLP tasks, including semantic role labeling (SRL). In this paper, we show that language models may be used to select sentences that are more useful to annotate. We simulate a situation where only a portion of the available data can be annotated, and compare language-model-based selection against a more typical baseline of randomly selected data. The data is ordered using an off-the-shelf language modeling toolkit. We show that the least probable sentences provide dramatically improved system performance over the baseline, especially when only a small portion of the data is annotated. In fact, the lion’s share of the performance can be attained by annotating only 10-20% of the data. This result holds for training a model based on new annotation, as well as when adding domain-specific annotation to a general corpus for domain adaptation.
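The selection strategy the abstract describes can be sketched briefly. The paper uses an off-the-shelf language modeling toolkit; as a stand-in, this sketch scores sentences with a simple add-one-smoothed unigram model (an assumption, not the paper's actual model) and returns the least probable fraction for annotation. The function names are illustrative.

```python
import math
from collections import Counter

def unigram_logprob(sentence, counts, total, vocab):
    # Length-normalized, add-one-smoothed unigram log-probability.
    # A stand-in for scores from an off-the-shelf LM toolkit.
    words = sentence.split()
    lp = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return lp / max(len(words), 1)

def select_for_annotation(candidates, training_text, fraction=0.2):
    """Rank candidate sentences by ascending LM probability and
    return the least probable `fraction` of them for annotation."""
    tokens = training_text.split()
    counts = Counter(tokens)
    total, vocab = len(tokens), len(counts)
    ranked = sorted(candidates,
                    key=lambda s: unigram_logprob(s, counts, total, vocab))
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]
```

Sentences full of tokens the model has rarely or never seen receive low probability and are selected first, which is the intuition behind annotating the least probable 10-20% of the data.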


Semantic Role Labeling
Martha Palmer | Ivan Titov | Shumin Wu
NAACL HLT 2013 Tutorial Abstracts


Semantic Mapping Using Automatic Word Alignment and Semantic Role Labeling
Shumin Wu | Martha Palmer
Proceedings of Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation


Detecting Cross-lingual Semantic Similarity Using Parallel PropBanks
Shumin Wu | Jinho Choi | Martha Palmer
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers

This paper proposes a method for detecting cross-lingual semantic similarity using parallel PropBanks. We begin by improving word alignments for verb predicates generated by GIZA++, using information available in the parallel PropBanks. We apply the Kuhn-Munkres method to measure predicate-argument matching, improving verb predicate alignment F-score by 12.6%. Using the enhanced word alignments, we check the set of target verbs aligned to a specific source verb for semantic consistency. For a set of English verbs aligned to a Chinese verb, we check whether the English verbs belong to the same semantic class using an existing lexical database, WordNet. For a set of Chinese verbs aligned to an English verb, we manually check semantic similarity between the Chinese verbs within a set. Our results show that the verb sets we generated have a high correlation with semantic classes. This could potentially lead to an automatic technique for generating semantic classes for verbs.
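Predicate-argument matching as described above is an optimal bipartite assignment problem: pair each source argument with a distinct target argument so that total alignment score is maximized. This sketch brute-forces the optimum over permutations for small argument sets; the Kuhn-Munkres (Hungarian) algorithm the paper uses solves the same problem in polynomial time. The similarity matrix and function name here are illustrative, not the paper's actual scoring.

```python
from itertools import permutations

def best_argument_matching(sim):
    """Find the source-to-target argument pairing maximizing total
    similarity. `sim[i][j]` scores source argument i against target
    argument j. Brute force over permutations for illustration; the
    Kuhn-Munkres algorithm finds the same optimum in O(n^3)."""
    n, m = len(sim), len(sim[0])
    best_score, best_pairs = float("-inf"), None
    for perm in permutations(range(m), min(n, m)):
        # Pair source argument i with target argument perm[i].
        score = sum(sim[i][j] for i, j in enumerate(perm))
        if score > best_score:
            best_score, best_pairs = score, list(enumerate(perm))
    return best_pairs, best_score
```

The resulting matching score can then serve as a measure of how well a candidate verb alignment preserves predicate-argument structure across the two PropBanks.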