William Schuler


2022

Entropy- and Distance-Based Predictors From GPT-2 Attention Patterns Predict Reading Times Over and Above GPT-2 Surprisal
Byung-Doh Oh | William Schuler
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Transformer-based large language models are trained to make predictions about the next word by aggregating representations of previous tokens through their self-attention mechanism. In the field of cognitive modeling, such attention patterns have recently been interpreted as embodying the process of cue-based retrieval, in which attention over multiple targets is taken to generate interference and latency during retrieval. Under this framework, this work first defines an entropy-based predictor that quantifies the diffuseness of self-attention, as well as distance-based predictors that capture the incremental change in attention patterns across timesteps. Moreover, following recent studies that question the informativeness of attention weights, we also experiment with alternative methods for incorporating vector norms into attention weights. Regression experiments using predictors calculated from the GPT-2 language model show that these predictors deliver a substantially better fit to held-out self-paced reading and eye-tracking data over a rigorous baseline including GPT-2 surprisal.
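
As a concrete illustration of the two kinds of predictors, the sketch below (our own minimal numpy example; the variable names and the L1 distance are assumptions, not necessarily the paper's exact formulation) computes the entropy of one token's attention distribution and the change in attention between consecutive timesteps:

    import numpy as np

    def attention_entropy(attn):
        # Shannon entropy (bits) of one token's attention distribution over
        # its preceding tokens; higher values mean more diffuse attention.
        p = attn[attn > 0.0]                  # treat 0 * log(0) as 0
        return float(-np.sum(p * np.log2(p)))

    def attention_distance(attn_prev, attn_curr):
        # L1 distance between consecutive timesteps' attention distributions,
        # zero-padding the earlier (shorter) one so the supports line up.
        padded = np.pad(attn_prev, (0, attn_curr.size - attn_prev.size))
        return float(np.sum(np.abs(attn_curr - padded)))

    # Toy weights over 3, then 4, preceding tokens:
    a1 = np.array([0.7, 0.2, 0.1])
    a2 = np.array([0.25, 0.25, 0.25, 0.25])
    print(attention_entropy(a2))       # 2.0 bits: maximally diffuse
    print(attention_distance(a1, a2))  # 0.9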

2021

Depth-Bounded Statistical PCFG Induction as a Model of Human Grammar Acquisition
Lifeng Jin | Lane Schwartz | Finale Doshi-Velez | Timothy Miller | William Schuler
Computational Linguistics, Volume 47, Issue 1 - March 2021

This article describes a simple PCFG induction model with a fixed category domain that predicts a large majority of attested constituent boundaries, and predicts labels consistent with nearly half of attested constituent labels on a standard evaluation data set of child-directed speech. The article then explores the idea that the difference between simple grammars exhibited by child learners and fully recursive grammars exhibited by adult learners may be an effect of increasing working memory capacity, where the shallow grammars are constrained images of the recursive grammars. An implementation of these memory bounds as limits on center embedding in a depth-specific transform of a recursive grammar yields a significant improvement over an equivalent but unbounded baseline, suggesting that this arrangement may indeed confer a learning advantage.
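
The notion of a center-embedding limit can be made concrete with a short sketch (ours; the article's actual bound is stated over a depth-specific grammar transform, whereas this uses the simpler textbook definition on trees, with constituents as nested tuples and words as strings):

    def leaves(t):
        # Flatten a nested-tuple tree into its list of word leaves.
        return [t] if isinstance(t, str) else [w for c in t for w in leaves(c)]

    def max_center_embedding(t, left=0, right=0):
        # Depth of nested center embeddings: constituents with sentence
        # material on both sides. left/right count leaves outside the subtree.
        if isinstance(t, str):
            return 0
        here = 1 if (left > 0 and right > 0) else 0
        sizes = [len(leaves(c)) for c in t]
        best = 0
        for i, c in enumerate(t):
            l = left + sum(sizes[:i])
            r = right + sum(sizes[i + 1:])
            best = max(best, max_center_embedding(c, l, r))
        return here + best

    print(max_center_embedding(('a', ('b', ('c', 'd')))))    # 0: right-branching
    print(max_center_embedding(('a', (('b', 'c'), 'd'))))    # 1: one center embedding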

Coreference-aware Surprisal Predicts Brain Response
Evan Jaffe | Byung-Doh Oh | William Schuler
Findings of the Association for Computational Linguistics: EMNLP 2021

Recent evidence supports a role for coreference processing in guiding human expectations about upcoming words during reading, based on covariation between reading times and word surprisal estimated by a coreference-aware semantic processing model (Jaffe et al., 2020). The present study reproduces and elaborates on this finding by (1) enabling the parser to process subword information that might better approximate human morphological knowledge, and (2) extending evaluation of coreference effects from self-paced reading to human brain imaging data. Results show that an expectation-based processing effect of coreference is still evident even in the presence of the stronger psycholinguistic baseline provided by the subword model, and that the coreference effect is observed in both self-paced reading and fMRI data, providing evidence of the effect’s robustness.

Character-based PCFG Induction for Modeling the Syntactic Acquisition of Morphologically Rich Languages
Lifeng Jin | Byung-Doh Oh | William Schuler
Findings of the Association for Computational Linguistics: EMNLP 2021

Unsupervised PCFG induction models, which build syntactic structures from raw text, can be used to evaluate the extent to which syntactic knowledge can be acquired from distributional information alone. However, many state-of-the-art PCFG induction models are word-based, meaning that they cannot directly inspect functional affixes, which may provide crucial information for syntactic acquisition in child learners. This work first introduces a neural PCFG induction model that allows a clean ablation of the influence of subword information in grammar induction. Experiments on child-directed speech demonstrate first that the incorporation of subword information results in more accurate grammars with categories that word-based induction models have difficulty finding, and second that this effect is amplified in morphologically richer languages that rely on functional affixes to express grammatical relations. A subsequent evaluation on multilingual treebanks shows that the model with subword information achieves state-of-the-art results on many languages, further supporting a distributional model of syntactic acquisition.

Surprisal Estimators for Human Reading Times Need Character Models
Byung-Doh Oh | Christian Clark | William Schuler
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

While the use of character models has been popular in NLP applications, it has not been explored much in the context of psycholinguistic modeling. This paper presents a character model that can be applied to a structural parser-based processing model to calculate word generation probabilities. Experimental results show that surprisal estimates from a structural processing model using this character model deliver substantially better fits to self-paced reading, eye-tracking, and fMRI data than those from large-scale language models trained on much more data. This may suggest that the proposed processing model provides a more humanlike account of sentence processing, which assumes a larger role of morphology, phonotactics, and orthographic complexity than was previously thought.
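
The role of the character model is to assemble word generation probabilities from character probabilities, so a word's surprisal is the sum of its characters' surprisals. A minimal sketch (the char_prob interface and the "#" end-of-word marker are our assumptions, not the paper's implementation):

    import math

    def word_surprisal(word, char_prob, context):
        # -log2 P(word | context) under a character model: the word's
        # probability is the product of its characters' conditional
        # probabilities, so its surprisal is the sum of theirs.
        total, prefix = 0.0, ""
        for ch in word + "#":              # "#" marks the end of the word
            total += -math.log2(char_prob(ch, context, prefix))
            prefix += ch
        return total

    # Toy usage with a uniform model over 27 symbols:
    toy = lambda ch, ctx, prefix: 1.0 / 27.0
    print(word_surprisal("cat", toy, context=None))   # 4 * log2(27), about 19.02 bits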

Contributions of Propositional Content and Syntactic Category Information in Sentence Processing
Byung-Doh Oh | William Schuler
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

Expectation-based theories of sentence processing posit that processing difficulty is determined by predictability in context. While predictability quantified via surprisal has gained empirical support, this representation-agnostic measure leaves open the question of how to best approximate the human comprehender’s latent probability model. This work presents an incremental left-corner parser that incorporates information about both propositional content and syntactic categories into a single probability model. This parser can be trained to make parsing decisions conditioning on only one source of information, thus allowing a clean ablation of the relative contribution of propositional content and syntactic category information. Regression analyses show that surprisal estimates calculated from the full parser make a significant contribution to predicting self-paced reading times over those from the parser without syntactic category information, as well as a significant contribution to predicting eye-gaze durations over those from the parser without propositional content information. Taken together, these results suggest a role for propositional content and syntactic category information in incremental sentence processing.
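
For reference, surprisal itself has a one-line definition, which the sketch below states in terms of an incremental parser's prefix probabilities (the variable names are ours):

    # S(w_t) = log2 P(w_1..t-1) - log2 P(w_1..t) = -log2 P(w_t | w_1..t-1).
    # prefix_logprob[t] is assumed to hold the parser's log2 probability of
    # the first t words, with prefix_logprob[0] = 0 for the empty prefix.
    def surprisal(prefix_logprob, t):
        return prefix_logprob[t - 1] - prefix_logprob[t]

    # e.g. surprisal([0.0, -2.0, -5.5], 2) == 3.5 bits for the second word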

2020

Grounded PCFG Induction with Images
Lifeng Jin | William Schuler
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

Recent work in unsupervised parsing has tried to incorporate visual information into learning, but results suggest that these models need linguistic bias to compete against models that only rely on text. This work proposes grammar induction models which use visual information from images for labeled parsing, and achieve state-of-the-art results on grounded grammar induction on several languages. Results indicate that visual information is especially helpful in languages where high frequency words are more broadly distributed. Comparison between models with and without visual information shows that the grounded models are able to use visual information for proposing noun phrases, gathering useful information from images for unknown words, and achieving better performance at prepositional phrase attachment prediction.

Memory-bounded Neural Incremental Parsing for Psycholinguistic Prediction
Lifeng Jin | William Schuler
Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies

Syntactic surprisal has been shown to have an effect on human sentence processing, and can be predicted from prefix probabilities of generative incremental parsers. Recent state-of-the-art incremental generative neural parsers are able to produce accurate parses and surprisal values but have unbounded stack memory, which may be used by the neural parser to maintain explicit in-order representations of all previously parsed words, inconsistent with results of human memory experiments. In contrast, humans seem to have a bounded working memory, demonstrated by inhibited performance on word recall in multi-clause sentences (Bransford and Franks, 1971) and on center-embedded sentences (Miller and Isard, 1964). Bounded statistical parsers exist, but are less accurate than neural parsers in predicting reading times. This paper describes a neural incremental generative parser that is able to provide accurate surprisal estimates and can be constrained to use a bounded stack. Results show that the accuracy gains of neural parsers can be reliably extended to psycholinguistic modeling without risk of distortion due to unbounded working memory.
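
A bounded stack is simple to state in code; the sketch below (ours) enforces a hard depth limit, though an actual parser would prune over-deep analyses from its beam rather than fail outright:

    class BoundedStack:
        # A stack with a hard depth bound, standing in for the bounded
        # working-memory store the parser is constrained to use.
        def __init__(self, max_depth=4):   # human-like bounds are small
            self.max_depth = max_depth
            self.items = []

        def push(self, item):
            if len(self.items) >= self.max_depth:
                raise MemoryError("analysis exceeds the working-memory bound")
            self.items.append(item)

        def pop(self):
            return self.items.pop()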

The Importance of Category Labels in Grammar Induction with Child-directed Utterances
Lifeng Jin | William Schuler
Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies

Recent progress has shown that grammar induction is possible without explicit assumptions of language-specific knowledge. However, evaluation of induced grammars has usually ignored phrasal labels, an essential part of a grammar. Experiments in this work using a labeled evaluation metric, RH, show that linguistically motivated predictions about grammar sparsity and use of categories can only be revealed through labeled evaluation. Furthermore, depth-bounding as an implementation of human memory constraints in grammar inducers remains effective under labeled evaluation on multilingual transcribed child-directed utterances.

A Corpus of Encyclopedia Articles with Logical Forms
Nathan Rasmussen | William Schuler
Proceedings of the Twelfth Language Resources and Evaluation Conference

People can extract precise, complex logical meanings from text in documents such as tax forms and game rules, but language processing systems lack adequate training and evaluation resources to do these kinds of tasks reliably. This paper describes a corpus of annotated typed lambda calculus translations for approximately 2,000 sentences in Simple English Wikipedia, which is assumed to constitute a broad-coverage domain for precise, complex descriptions. The corpus described in this paper contains a large number of quantifiers and interesting scoping configurations, and is presented specifically as a resource for quantifier scope disambiguation systems, but also more generally as an object of linguistic study.
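
For a sense of the annotation target (a generic illustration in ASCII logical notation, not the corpus's own conventions), a sentence like "every city has a mayor" receives a typed lambda calculus logical form whose quantifiers can scope in either order, which is what makes such a corpus useful for scope disambiguation:

    forall x. city(x) -> (exists y. mayor(y) & has(x, y))    (surface scope)
    exists y. mayor(y) & (forall x. city(x) -> has(x, y))    (inverse scope)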

Coreference information guides human expectations during natural reading
Evan Jaffe | Cory Shain | William Schuler
Proceedings of the 28th International Conference on Computational Linguistics

Models of human sentence processing effort tend to focus on costs associated with retrieving structures and discourse referents from memory (memory-based) and/or on costs associated with anticipating upcoming words and structures based on contextual cues (expectation-based) (Levy, 2008). Although evidence suggests that expectation and memory may play separable roles in language comprehension (Levy et al., 2013), theories of coreference processing have largely focused on memory: how comprehenders identify likely referents of linguistic expressions. In this study, we hypothesize that coreference tracking also informs human expectations about upcoming words, and we test this hypothesis by evaluating the degree to which incremental surprisal measures generated by a novel coreference-aware semantic parser explain human response times in a naturalistic self-paced reading experiment. Results indicate (1) that coreference information indeed guides human expectations and (2) that coreference effects on memory retrieval may exist independently of coreference effects on expectations. Together, these findings suggest that the language processing system exploits coreference information both to retrieve referents from memory and to anticipate upcoming material.

2019

Unsupervised Learning of PCFGs with Normalizing Flow
Lifeng Jin | Finale Doshi-Velez | Timothy Miller | Lane Schwartz | William Schuler
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Unsupervised PCFG inducers hypothesize sets of compact context-free rules as explanations for sentences. PCFG induction not only provides tools for low-resource languages, but also plays an important role in modeling language acquisition (Bannard et al., 2009; Abend et al., 2017). However, current PCFG induction models, using word tokens as input, are unable to incorporate semantics and morphology into induction, and may encounter issues of sparse vocabulary when facing morphologically rich languages. This paper describes a neural PCFG inducer which employs context embeddings (Peters et al., 2018) in a normalizing flow model (Dinh et al., 2015) to extend PCFG induction to use semantic and morphological information. Linguistically motivated sparsity and categorical distance constraints are imposed on the inducer as regularization. Experiments show that the PCFG induction model with normalizing flow produces grammars with state-of-the-art accuracy on a variety of different languages. Ablation further shows a positive effect of normalizing flow, context embeddings and proposed regularizers.

Variance of Average Surprisal: A Better Predictor for Quality of Grammar from Unsupervised PCFG Induction
Lifeng Jin | William Schuler
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

In unsupervised grammar induction, data likelihood is known to be only weakly correlated with parsing accuracy, especially at convergence after multiple runs. In order to find a better indicator for quality of induced grammars, this paper correlates several linguistically- and psycholinguistically-motivated predictors to parsing accuracy on a large multilingual grammar induction evaluation data set. Results show that variance of average surprisal (VAS) better correlates with parsing accuracy than data likelihood and that using VAS instead of data likelihood for model selection provides a significant accuracy boost. Further evidence shows VAS to be a better candidate than data likelihood for predicting word order typology classification. Analyses show that VAS seems to separate content words from function words in natural language grammars, and to better arrange words with different frequencies into separate classes that are more consistent with linguistic theory.
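
One plausible reading of the VAS computation (an assumption on our part, not necessarily the paper's exact recipe): average each word type's surprisal over its occurrences, then take the variance of those per-type averages, so that grammars assigning consistently different surprisals to function and content words score higher:

    import numpy as np
    from collections import defaultdict

    def variance_of_average_surprisal(tokens, surprisals):
        # Group token surprisals by word type, average within each type,
        # and return the variance of the per-type averages.
        by_type = defaultdict(list)
        for w, s in zip(tokens, surprisals):
            by_type[w].append(s)
        return float(np.var([np.mean(v) for v in by_type.values()]))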

2018

Unsupervised Grammar Induction with Depth-bounded PCFG
Lifeng Jin | Finale Doshi-Velez | Timothy Miller | William Schuler | Lane Schwartz
Transactions of the Association for Computational Linguistics, Volume 6

There has been recent interest in applying cognitively- or empirically-motivated bounds on recursion depth to limit the search space of grammar induction models (Ponvert et al., 2011; Noji and Johnson, 2016; Shain et al., 2016). This work extends this depth-bounding approach to probabilistic context-free grammar induction (DB-PCFG), which has a smaller parameter space than hierarchical sequence models, and therefore more fully exploits the space reductions of depth-bounding. Results for this model on grammar acquisition from transcribed child-directed speech and newswire text exceed or are competitive with those of other models when evaluated on parse accuracy. Moreover, grammars acquired from this model demonstrate a consistent use of category labels, something which has not been demonstrated by other acquisition models.

Test Sets for Chinese Nonlocal Dependency Parsing
Manjuan Duan | William Schuler
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Deconvolutional Time Series Regression: A Technique for Modeling Temporally Diffuse Effects
Cory Shain | William Schuler
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Researchers in computational psycholinguistics frequently use linear models to study time series data generated by human subjects. However, time series may violate the assumptions of these models through temporal diffusion, where stimulus presentation has a lingering influence on the response as the rest of the experiment unfolds. This paper proposes a new statistical model that borrows from digital signal processing by recasting the predictors and response as convolutionally-related signals, using recent advances in machine learning to fit latent impulse response functions (IRFs) of arbitrary shape. A synthetic experiment shows successful recovery of true latent IRFs, and psycholinguistic experiments reveal plausible, replicable, and fine-grained estimates of latent temporal dynamics, with comparable or improved prediction quality to widely-used alternatives.
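
The convolutional framing can be sketched directly (our example; in the model itself the IRF shape is a fitted latent function, whereas here it is fixed for illustration):

    import numpy as np

    def convolve_predictor(event_times, event_values, irf, t_grid):
        # Predicted response: each stimulus event contributes its value
        # scaled by the IRF evaluated at the time elapsed since the event.
        resp = np.zeros_like(t_grid)
        for t0, x in zip(event_times, event_values):
            dt = t_grid - t0
            resp += x * np.where(dt >= 0.0, irf(dt), 0.0)
        return resp

    # A hypothetical gamma-shaped IRF peaking shortly after each event:
    irf = lambda dt: dt * np.exp(-2.0 * dt)
    t = np.linspace(0.0, 10.0, 201)
    y = convolve_predictor([1.0, 1.5, 4.0], [1.0, 0.5, 2.0], irf, t)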

Depth-bounding is effective: Improvements and evaluation of unsupervised PCFG induction
Lifeng Jin | Finale Doshi-Velez | Timothy Miller | William Schuler | Lane Schwartz
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

There have been several recent attempts to improve the accuracy of grammar induction systems by bounding the recursive complexity of the induction model. Modern depth-bounded grammar inducers have been shown to be more accurate than early unbounded PCFG inducers, but this technique has never been compared against unbounded induction within the same system, in part because most previous depth-bounding models are built around sequence models, the complexity of which grows exponentially with the maximum allowed depth. The present work instead applies depth bounds within a chart-based Bayesian PCFG inducer, where bounding can be switched on and off, and then samples trees with or without bounding. Results show that depth-bounding is indeed significantly effective in limiting the search space of the inducer and thereby increasing the accuracy of the resulting parsing model, independent of the contribution of modern Bayesian induction techniques. Moreover, parsing results on English, Chinese and German show that this bounded model is able to produce parse trees more accurately than or competitively with state-of-the-art constituency grammar induction models.

Coreference and Focus in Reading Times
Evan Jaffe | Cory Shain | William Schuler
Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)

2017

Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)
Ted Gibson | Tal Linzen | Asad Sayeed | Marten van Schijndel | William Schuler
Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)

2016

Addressing surprisal deficiencies in reading time models
Marten van Schijndel | William Schuler
Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC)

This study demonstrates a weakness in how n-gram and PCFG surprisal are used to predict reading times in eye-tracking data. In particular, the information conveyed by words skipped during saccades is not usually included in the surprisal measures. This study shows that correcting the surprisal calculation improves n-gram surprisal and that upcoming n-grams affect reading times, replicating previous findings of how lexical frequencies affect reading times. In contrast, the predictivity of PCFG surprisal does not benefit from the surprisal correction despite the fact that lexical sequences skipped by saccades are processed by readers, as demonstrated by the corrected n-gram measure. These results raise questions about the formulation of information-theoretic measures of syntactic processing such as PCFG surprisal and entropy reduction when applied to reading times.
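
The proposed correction can be sketched as follows (our example; index conventions are assumptions): the surprisal charged at a fixated word also includes the surprisal of any words skipped since the previous fixation:

    def corrected_surprisal(surprisals, fixated):
        # Map each fixated word index to its own surprisal plus that of
        # the words skipped since the previous fixation.
        out, prev = {}, -1
        for i in sorted(fixated):
            out[i] = sum(surprisals[prev + 1 : i + 1])
            prev = i
        return out

    # Words 0..4 fixated on 0, 2, 4: word 2 is also charged for skipped word 1.
    print(corrected_surprisal([3.0, 5.0, 2.0, 4.0, 1.0], [0, 2, 4]))
    # {0: 3.0, 2: 7.0, 4: 5.0}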

Memory access during incremental sentence processing causes reading time latency
Cory Shain | Marten van Schijndel | Richard Futrell | Edward Gibson | William Schuler
Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC)

Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur in processing and (2) have had mixed results, with strong positive effects emerging from isolated constructed stimuli and weak or even negative effects emerging from naturally-occurring stimuli. Our study addresses these concerns by comparing several implementations of prominent sentence processing theories on an exploratory corpus and evaluating the most successful of these on a confirmatory corpus, using a new self-paced reading corpus of seemingly natural narratives constructed to contain an unusually high proportion of memory-intensive constructions. We show highly significant and complementary broad-coverage latency effects both for predictors based on the Dependency Locality Theory and for predictors based on a left-corner parsing model of sentence processing. Our results indicate that memory access during sentence processing does take time, but suggest that stimuli requiring many memory access events may be necessary in order to observe the effect.
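
Of the memory predictors compared, the Dependency Locality Theory cost has the most compact statement; one common operationalization (whether the endpoints themselves are counted varies across implementations, so treat this as an assumption) counts the new discourse referents intervening between a dependent and its head:

    def dlt_integration_cost(dep_index, head_index, is_new_referent):
        # Number of words introducing new discourse referents strictly
        # between the dependent and its head (is_new_referent[i] marks
        # whether word i introduces a referent).
        lo, hi = sorted((dep_index, head_index))
        return sum(1 for i in range(lo + 1, hi) if is_new_referent[i])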

OCLSP at SemEval-2016 Task 9: Multilayered LSTM as a Neural Semantic Dependency Parser
Lifeng Jin | Manjuan Duan | William Schuler
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

OSU_CHGCG at SemEval-2016 Task 9: Chinese Semantic Dependency Parsing with Generalized Categorial Grammar
Manjuan Duan | Lifeng Jin | William Schuler
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

Memory-Bounded Left-Corner Unsupervised Grammar Induction on Child-Directed Input
Cory Shain | William Bryce | Lifeng Jin | Victoria Krakovna | Finale Doshi-Velez | Timothy Miller | William Schuler | Lane Schwartz
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

This paper presents a new memory-bounded left-corner parsing model for unsupervised raw-text syntax induction, using unsupervised hierarchical hidden Markov models (UHHMM). We deploy this algorithm to shed light on the extent to which human language learners can discover hierarchical syntax through distributional statistics alone, by modeling two widely-accepted features of human language acquisition and sentence processing that have not been simultaneously modeled by any existing grammar induction algorithm: (1) a left-corner parsing strategy and (2) limited working memory capacity. To model realistic input to human language learners, we evaluate our system on a corpus of child-directed speech rather than typical newswire corpora. Results beat or closely match those of three competing systems.

2015

A Comparison of Word Similarity Performance Using Explanatory and Non-explanatory Texts
Lifeng Jin | William Schuler
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Hierarchic syntax improves reading time prediction
Marten van Schijndel | William Schuler
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Interpreting Questions with a Log-Linear Ranking Model in a Virtual Patient Dialogue System
Evan Jaffe | Michael White | William Schuler | Eric Fosler-Lussier | Alex Rosenfeld | Douglas Danforth
Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications

Evidence of syntactic working memory usage in MEG data
Marten van Schijndel | Brian Murphy | William Schuler
Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics

Parsing Chinese with a Generalized Categorial Grammar
Manjuan Duan | William Schuler
Proceedings of the Grammar Engineering Across Frameworks (GEAF) 2015 Workshop

2014

Cognitive Compositional Semantics using Continuation Dependencies
William Schuler | Adam Wheeler
Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014)

Sentence Processing in a Vectorial Model of Working Memory
William Schuler
Proceedings of the Fifth Workshop on Cognitive Modeling and Computational Linguistics

2013

An Analysis of Memory-based Processing Costs using Incremental Deep Syntactic Dependency Parsing
Marten van Schijndel | Luan Nguyen | William Schuler
Proceedings of the Fourth Annual Workshop on Cognitive Modeling and Computational Linguistics (CMCL)

An Analysis of Frequency- and Memory-Based Processing Costs
Marten van Schijndel | William Schuler
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2012

Connectionist-Inspired Incremental PCFG Parsing
Marten van Schijndel | Andy Exley | William Schuler
Proceedings of the 3rd Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2012)

Accurate Unbounded Dependency Recovery using Generalized Categorial Grammars
Luan Nguyen | Marten van Schijndel | William Schuler
Proceedings of COLING 2012

2011

Structured Composition of Semantic Vectors
Stephen Wu | William Schuler
Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011)

Tree-Rewriting Models of Multi-Word Expressions
William Schuler | Aravind Joshi
Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World

Incremental Syntactic Language Models for Phrase-based Translation
Lane Schwartz | Chris Callison-Burch | William Schuler | Stephen Wu
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

A Pronoun Anaphora Resolution System based on Factorial Hidden Markov Models
Dingcheng Li | Tim Miller | William Schuler
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

Complexity Metrics in an Incremental Right-Corner Parser
Stephen Wu | Asaf Bachrach | Carlos Cardenas | William Schuler
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

Broad-Coverage Parsing Using Human-Like Memory Constraints
William Schuler | Samir AbdelRahman | Tim Miller | Lane Schwartz
Computational Linguistics, Volume 36, Number 1, March 2010

HHMM Parsing with Limited Parallelism
Tim Miller | William Schuler
Proceedings of the 2010 Workshop on Cognitive Modeling and Computational Linguistics

Incremental Parsing in Bounded Memory
William Schuler
Proceedings of the 10th International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+10)

2009

Articles: A Framework for Fast Incremental Interpretation during Speech Decoding
William Schuler | Stephen Wu | Lane Schwartz
Computational Linguistics, Volume 35, Number 3, September 2009

Positive Results for Parsing with a Bounded Stack using a Model-Based Right-Corner Transform
William Schuler
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Parsing Speech Repair without Specialized Grammar Symbols
Tim Miller | Luan Nguyen | William Schuler
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

2008

A Unified Syntactic Model for Parsing Fluent and Disfluent Speech
Tim Miller | William Schuler
Proceedings of ACL-08: HLT, Short Papers

A Syntactic Time-Series Model for Parsing Fluent and Disfluent Speech
Tim Miller | William Schuler
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

Toward a Psycholinguistically-Motivated Model of Language Processing
William Schuler | Samir AbdelRahman | Tim Miller | Lane Schwartz
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2003

Using Model-Theoretic Semantic Interpretation to Guide Statistical Parsing and Word Recognition in a Spoken Language Interface
William Schuler
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

2002

Interleaved Semantic Interpretation in Environment-based Parsing
William Schuler
COLING 2002: The 19th International Conference on Computational Linguistics

2001

Computational Properties of Environment-based Disambiguation
William Schuler
Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics

2000

Multi-Component TAG and Notions of Formal Power
William Schuler | David Chiang | Mark Dras
Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics

Some remarks on an extension of synchronous TAG
David Chiang | William Schuler | Mark Dras
Proceedings of the Fifth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+5)

Building a class-based verb lexicon using TAGs
Karin Kipper | Hoa Trang Dang | William Schuler | Martha Palmer
Proceedings of the Fifth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+5)

A machine translation system from English to American Sign Language
Liwei Zhao | Karin Kipper | William Schuler | Christian Vogler | Norman Badler | Martha Palmer
Proceedings of the Fourth Conference of the Association for Machine Translation in the Americas: Technical Papers

Research in computational linguistics, computer graphics, and autonomous agents has led to the development of increasingly sophisticated communicative agents over the past few years, bringing a new perspective to machine translation research. The engineering of language-based, smooth, expressive, natural-looking human gestures can give us useful insights into the design principles that have evolved in natural communication between people. In this paper we prototype a machine translation system from English to American Sign Language (ASL), taking into account not only linguistic but also visual and spatial information associated with ASL signs.

1999

Preserving Semantic Dependencies in Synchronous Tree Adjoining Grammar
William Schuler
Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics

1998

Restrictions on Tree Adjoining Languages
Giorgio Satta | William Schuler
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

Exploiting semantic dependencies in parsing
William Schuler
Proceedings of the Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4)