Kentaro Torisawa


2021

pdf
BERTAC: Enhancing Transformer-based Language Models with Adversarially Pretrained Convolutional Neural Networks
Jong-Hoon Oh | Ryu Iida | Julien Kloetzer | Kentaro Torisawa
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Transformer-based language models (TLMs), such as BERT, ALBERT, and GPT-3, have shown strong performance in a wide range of NLP tasks and currently dominate the field of NLP. However, many researchers wonder whether these models can maintain their dominance forever. We do not have an answer yet, but, as an attempt to find better neural architectures and training schemes, we pretrain a simple CNN using a GAN-style learning scheme and Wikipedia data, and then integrate it with standard TLMs. We show that on the GLUE tasks, the combination of our pretrained CNN with ALBERT outperforms the original ALBERT and achieves performance similar to that of SOTA. Furthermore, on open-domain QA (Quasar-T and SearchQA), the combination of the CNN with ALBERT or RoBERTa achieves stronger performance than SOTA and the original TLMs. We hope that this work provides a hint for developing a novel, strong network architecture along with its training scheme. Our source code and models are available at https://github.com/nict-wisdom/bertac.
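
The authors' released code is linked above; as a rough illustration of the kind of integration the abstract describes, the following is a minimal PyTorch sketch in which a CNN encoder's features are concatenated token-wise with a TLM's hidden states before classification. All dimensions, module names, and the concatenation-based fusion are our assumptions, not the paper's actual BERTAC architecture.

```python
# Minimal sketch only: not the BERTAC implementation from the linked repository.
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """A simple 1-D convolutional sentence encoder over word embeddings."""
    def __init__(self, vocab_size=30000, emb_dim=300, hidden=256, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=kernel, padding=kernel // 2)

    def forward(self, token_ids):                         # (batch, seq)
        x = self.emb(token_ids).transpose(1, 2)           # (batch, emb_dim, seq)
        return torch.relu(self.conv(x)).transpose(1, 2)   # (batch, seq, hidden)

class FusionClassifier(nn.Module):
    """Concatenates TLM and CNN features token-wise and classifies the first token."""
    def __init__(self, tlm_dim=768, cnn_dim=256, num_labels=2):
        super().__init__()
        self.cnn = CNNEncoder(hidden=cnn_dim)
        self.head = nn.Linear(tlm_dim + cnn_dim, num_labels)

    def forward(self, tlm_hidden, token_ids):
        # Assumes both encoders see the same token sequence of equal length.
        fused = torch.cat([tlm_hidden, self.cnn(token_ids)], dim=-1)
        return self.head(fused[:, 0])                     # (batch, num_labels)
```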

2020

pdf
Understanding User Utterances in a Dialog System for Caregiving
Yoshihiko Asao | Julien Kloetzer | Junta Mizuno | Dai Saiki | Kazuma Kadowaki | Kentaro Torisawa
Proceedings of the Twelfth Language Resources and Evaluation Conference

A dialog system that can monitor the health status of seniors has great potential for alleviating the labor force shortage in the caregiving industry in aging societies. As part of our efforts to create such a system, we are developing two modules aimed at correctly interpreting user utterances: (i) a yes/no response classifier, which categorizes responses to health-related yes/no questions that the system asks; and (ii) an entailment recognizer, which detects users’ voluntary mentions of their health status. To apply machine learning approaches to the development of the modules, we created large annotated datasets of 280,467 question-response pairs and 38,868 voluntary utterances. For question-response pairs, we asked annotators to avoid direct “yes” or “no” answers, so that our data could cover a wide range of possible natural language responses. The two modules were implemented by fine-tuning BERT, a recent and highly successful neural network model. For the yes/no response classifier, the macro-average of the average precisions (APs) over all four of our categories (Yes/No/Unknown/Other) was 82.6% (96.3% for “yes” responses and 91.8% for “no” responses), while for the entailment recognizer it was 89.9%.
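
A minimal sketch of the setup the abstract describes, assuming a standard Hugging Face fine-tuning recipe: the checkpoint name, the sequence-pair encoding of question and response, and the metric code below are our assumptions, not the authors' implementation.

```python
# Sketch only: four-way yes/no response classification and macro-averaged AP.
import numpy as np
from sklearn.metrics import average_precision_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["Yes", "No", "Unknown", "Other"]          # categories from the paper

# Assumed Japanese BERT checkpoint; the paper does not specify this one.
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
model = AutoModelForSequenceClassification.from_pretrained(
    "cl-tohoku/bert-base-japanese", num_labels=len(LABELS))

def encode(question, response):
    # Each question-response pair is fed to BERT as a single sequence pair.
    return tokenizer(question, response, truncation=True, return_tensors="pt")

def macro_average_ap(y_true, y_score):
    """Mean of the per-class APs; y_true is (n, 4) one-hot, y_score is (n, 4) scores."""
    aps = [average_precision_score(y_true[:, k], y_score[:, k]) for k in range(len(LABELS))]
    return float(np.mean(aps))
```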

2019

pdf
Event Causality Recognition Exploiting Multiple Annotators’ Judgments and Background Knowledge
Kazuma Kadowaki | Ryu Iida | Kentaro Torisawa | Jong-Hoon Oh | Julien Kloetzer
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We propose new BERT-based methods for recognizing event causality, such as “smoke cigarettes” → “die of lung cancer”, written in web texts. In our methods, we capture each annotator’s policy by training multiple classifiers, each of which predicts the labels given by a single annotator, and combine the resulting classifiers’ outputs to predict the final labels determined by majority vote. Furthermore, we investigate the effect of supplying background knowledge to our classifiers. Since BERT models are pre-trained on a large corpus, some sort of background knowledge about event causality may be learned during pre-training. Our experiments with a Japanese dataset suggest that this is indeed the case: performance improved when we pre-trained the BERT models on web texts containing a large number of event causalities rather than on Wikipedia articles or randomly sampled web texts. However, this effect was limited. Therefore, we further improved performance by simply adding texts related to an input causality candidate as background knowledge to the input of the BERT models. We believe these findings indicate a promising future research direction.
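
A minimal sketch of the two ideas above, under our own assumptions: PyTorch/Hugging Face-style classifiers with a `.logits` output, probability averaging as a simple stand-in for the paper's combination scheme, and plain concatenation of background texts to the input.

```python
# Sketch only: combining per-annotator classifiers and appending background knowledge.
import torch

def combined_causality_score(models, encoded_input, positive_index=1):
    """Average the per-annotator probabilities of the 'causal' label.

    Each model in `models` is a classifier fine-tuned on one annotator's labels;
    averaging their positive-class probabilities is one simple stand-in for
    predicting the majority-vote label.
    """
    with torch.no_grad():
        probs = [torch.softmax(m(**encoded_input).logits, dim=-1)[:, positive_index]
                 for m in models]
    return torch.stack(probs).mean(dim=0)

def add_background_knowledge(cause, effect, background_texts, tokenizer, max_length=512):
    # Texts related to the causality candidate are appended to the BERT input;
    # the separator usage here is an assumption.
    pair = f"{cause} {tokenizer.sep_token} {effect}"
    context = " ".join(background_texts)
    return tokenizer(pair, context, truncation=True, max_length=max_length,
                     return_tensors="pt")
```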

pdf
Open-Domain Why-Question Answering with Adversarial Learning to Encode Answer Texts
Jong-Hoon Oh | Kazuma Kadowaki | Julien Kloetzer | Ryu Iida | Kentaro Torisawa
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

In this paper, we propose a method for why-question answering (why-QA) that uses an adversarial learning framework. Existing why-QA methods retrieve “answer passages” that usually consist of several sentences. These multi-sentence passages contain not only the reason sought by a why-question and its connection to the why-question, but also redundant and/or unrelated parts. We use our proposed “Adversarial networks for Generating compact-answer Representation” (AGR) to generate from a passage a vector representation of the non-redundant reason sought by a why-question, and we exploit this representation to judge whether the passage actually answers the why-question. Through a series of experiments using Japanese why-QA datasets, we show that these representations improve the performance of our why-QA neural model as well as that of a BERT-based why-QA model. We show that they also improve a state-of-the-art distantly supervised open-domain QA (DS-QA) method on publicly available English datasets, even though the target task is not why-QA.
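
A generic GAN-style sketch of adversarial representation learning of the kind the abstract describes; the actual AGR architecture, its losses, and the way reference "compact answer" vectors are obtained differ from this, and every dimension and module below is our assumption.

```python
# Sketch only: a generator produces a compact representation from a passage vector,
# and a discriminator tries to tell it apart from a reference answer representation.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(768, 512), nn.ReLU(), nn.Linear(512, 256))
discriminator = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def adversarial_step(passage_vec, reference_vec):
    """One GAN update; passage_vec is (batch, 768), reference_vec is (batch, 256)."""
    fake = generator(passage_vec)

    # Discriminator: separate reference representations from generated ones.
    d_opt.zero_grad()
    d_loss = (bce(discriminator(reference_vec), torch.ones(reference_vec.size(0), 1))
              + bce(discriminator(fake.detach()), torch.zeros(fake.size(0), 1)))
    d_loss.backward()
    d_opt.step()

    # Generator: make its compact representation indistinguishable from references.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(fake.size(0), 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```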

2018

pdf
Annotating Zero Anaphora for Question Answering
Yoshihiko Asao | Ryu Iida | Kentaro Torisawa
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

pdf
Intra-Sentential Subject Zero Anaphora Resolution using Multi-Column Convolutional Neural Network
Ryu Iida | Kentaro Torisawa | Jong-Hoon Oh | Canasai Kruengkrai | Julien Kloetzer
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
WISDOM X, DISAANA and D-SUMM: Large-scale NLP Systems for Analyzing Textual Big Data
Junta Mizuno | Masahiro Tanaka | Kiyonori Ohtake | Jong-Hoon Oh | Julien Kloetzer | Chikara Hashimoto | Kentaro Torisawa
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations

We demonstrate our large-scale NLP systems: WISDOM X, DISAANA, and D-SUMM. WISDOM X returns numerous possible answers, including unexpected ones, to a wide variety of natural language questions, offering deep insights into a broad range of issues. DISAANA and D-SUMM enable us to assess the damage caused by large-scale disasters in real time, using Twitter as an information source.

pdf
DISAANA and D-SUMM: Large-scale Real Time NLP Systems for Analyzing Disaster Related Reports in Tweets
Kentaro Torisawa
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)

This talk presents two NLP systems that were developed to help disaster victims and rescue workers in the aftermath of large-scale disasters. DISAANA provides answers to questions such as “What is in short supply in Tokyo?” and displays the locations related to each answer on a map. D-SUMM automatically summarizes a large number of disaster-related reports concerning a specified area and helps rescue workers understand disaster situations from a macro perspective. Both systems are publicly available as Web services. In the aftermath of the 2016 Kumamoto Earthquake (M7.0), the Japanese government actually used DISAANA to analyze the situation.

2015

pdf
Recognizing Complex Negation on Twitter
Junta Mizuno | Canasai Kruengkrai | Kiyonori Ohtake | Chikara Hashimoto | Kentaro Torisawa | Julien Kloetzer | Kentaro Inui
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation

pdf
Large-Scale Acquisition of Entailment Pattern Pairs by Exploiting Transitivity
Julien Kloetzer | Kentaro Torisawa | Chikara Hashimoto | Jong-Hoon Oh
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf
Intra-sentential Zero Anaphora Resolution using Subject Sharing Recognition
Ryu Iida | Kentaro Torisawa | Chikara Hashimoto | Jong-Hoon Oh | Julien Kloetzer
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2014

pdf
Toward Future Scenario Generation: Extracting Event Causality Exploiting Semantic Relation, Context, and Association Features
Chikara Hashimoto | Kentaro Torisawa | Julien Kloetzer | Motoki Sano | István Varga | Jong-Hoon Oh | Yutaka Kidawara
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Million-scale Derivation of Semantic Relations from a Manually Constructed Predicate Taxonomy
Motoki Sano | Kentaro Torisawa | Julien Kloetzer | Chikara Hashimoto | István Varga | Jong-Hoon Oh
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

pdf
Two-Stage Method for Large-Scale Acquisition of Contradiction Pattern Pairs using Entailment
Julien Kloetzer | Stijn De Saeger | Kentaro Torisawa | Chikara Hashimoto | Jong-Hoon Oh | Motoki Sano | Kiyonori Ohtake
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf
Minimally Supervised Method for Multilingual Paraphrase Extraction from Definition Sentences on the Web
Yulan Yan | Chikara Hashimoto | Kentaro Torisawa | Takao Kawai | Jun’ichi Kazama | Stijn De Saeger
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf bib
The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations
Kentaro Torisawa | Hang Li
The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations

pdf
NICT Disaster Information Analysis System
Kiyonori Ohtake | Jun Goto | Stijn De Saeger | Kentaro Torisawa | Junta Mizuno | Kentaro Inui
The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations

pdf
WISDOM2013: A Large-scale Web Information Analysis System
Masahiro Tanaka | Stijn De Saeger | Kiyonori Ohtake | Chikara Hashimoto | Makoto Hijiya | Hideaki Fujii | Kentaro Torisawa
The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations

pdf
Aid is Out There: Looking for Help from Tweets during a Large Scale Disaster
István Varga | Motoki Sano | Kentaro Torisawa | Chikara Hashimoto | Kiyonori Ohtake | Takao Kawai | Jong-Hoon Oh | Stijn De Saeger
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Why-Question Answering using Intra- and Inter-Sentential Causal Relations
Jong-Hoon Oh | Kentaro Torisawa | Chikara Hashimoto | Motoki Sano | Stijn De Saeger | Kiyonori Ohtake
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

pdf
Why Question Answering using Sentiment Analysis and Word Classes
Jong-Hoon Oh | Kentaro Torisawa | Chikara Hashimoto | Takuya Kawada | Stijn De Saeger | Jun’ichi Kazama | Yiou Wang
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

pdf
Excitatory or Inhibitory: A New Semantic Orientation Extracts Contradiction and Causality from the Web
Chikara Hashimoto | Kentaro Torisawa | Stijn De Saeger | Jong-Hoon Oh | Jun’ichi Kazama
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

pdf
Chinese Evaluative Information Analysis
Yiou Wang | Jun’ichi Kazama | Takuya Kawada | Kentaro Torisawa
Proceedings of COLING 2012

2011

pdf
Improving Chinese Word Segmentation and POS Tagging with Semi-supervised Methods Using Large Auto-Analyzed Data
Yiou Wang | Jun’ichi Kazama | Yoshimasa Tsuruoka | Wenliang Chen | Yujie Zhang | Kentaro Torisawa
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf
Similarity Based Language Model Construction for Voice Activated Open-Domain Question Answering
István Varga | Kiyonori Ohtake | Kentaro Torisawa | Stijn De Saeger | Teruhisa Misu | Shigeki Matsuda | Jun’ichi Kazama
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf
Extending WordNet with Hypernyms and Siblings Acquired from Wikipedia
Ichiro Yamada | Jong-Hoon Oh | Chikara Hashimoto | Kentaro Torisawa | Jun’ichi Kazama | Stijn De Saeger | Takuya Kawada
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf
Toward Finding Semantic Relations not Written in a Single Sentence: An Inference Method using Auto-Discovered Rules
Masaaki Tsuchida | Kentaro Torisawa | Stijn De Saeger | Jong-Hoon Oh | Jun’ichi Kazama | Chikara Hashimoto | Hayato Ohwada
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf
SMT Helps Bitext Dependency Parsing
Wenliang Chen | Jun’ichi Kazama | Min Zhang | Yoshimasa Tsuruoka | Yujie Zhang | Yiou Wang | Kentaro Torisawa | Haizhou Li
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf
Relation Acquisition using Word Classes and Partial Patterns
Stijn De Saeger | Kentaro Torisawa | Masaaki Tsuchida | Jun’ichi Kazama | Chikara Hashimoto | Ichiro Yamada | Jong-Hoon Oh | István Varga | Yulan Yan
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf
Extracting Paraphrases from Definition Sentences on the Web
Chikara Hashimoto | Kentaro Torisawa | Stijn De Saeger | Jun’ichi Kazama | Sadao Kurohashi
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

pdf
Co-STAR: A Co-training Style Algorithm for Hyponymy Relation Acquisition from Structured and Unstructured Text
Jong-Hoon Oh | Ichiro Yamada | Kentaro Torisawa | Stijn De Saeger
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

pdf
Improving Graph-based Dependency Parsing with Decision History
Wenliang Chen | Jun’ichi Kazama | Yoshimasa Tsuruoka | Kentaro Torisawa
Coling 2010: Posters

pdf
Bitext Dependency Parsing with Bilingual Subtree Constraints
Wenliang Chen | Jun’ichi Kazama | Kentaro Torisawa
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf
A Bayesian Method for Robust Estimation of Distributional Similarities
Jun’ichi Kazama | Stijn De Saeger | Kow Kuroda | Masaki Murata | Kentaro Torisawa
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf
A Look inside the Distributionally Similar Terms
Kow Kuroda | Jun’ichi Kazama | Kentaro Torisawa
Proceedings of the Second Workshop on NLP Challenges in the Information Explosion Era (NLPIX 2010)

pdf
Using Various Features in Machine Learning to Obtain High Levels of Performance for Recognition of Japanese Notational Variants
Masahiro Kojima | Masaki Murata | Jun’ichi Kazama | Kow Kuroda | Atsushi Fujita | Eiji Aramaki | Masaaki Tsuchida | Yasuhiko Watanabe | Kentaro Torisawa
Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation

pdf
Generation of Summaries that Appropriately and Adequately Express the Contents of Original Documents Using Word-Association Knowledge
Kazuki Takigawa | Masaki Murata | Masaaki Tsuchida | Stijn De Saeger | Kazuhide Yamamoto | Kentaro Torisawa
Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation

pdf
Adapting Chinese Word Segmentation for Machine Translation Based on Short Units
Yiou Wang | Kiyotaka Uchimoto | Jun’ichi Kazama | Canasai Kruengkrai | Kentaro Torisawa
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In Chinese texts, words composed of single or multiple characters are not separated by spaces, unlike in most Western languages. Therefore, Chinese word segmentation is considered an important first step in machine translation (MT), and its performance impacts MT results. Many factors affect Chinese word segmentation, including the segmentation standard and the segmentation strategy. The performance of a corpus-based word segmentation model depends heavily on the quality and the segmentation standard of the training corpora. However, we observed that existing manually annotated Chinese corpora tend to have low segmentation granularity and provide poor morphological information under the present segmentation standards. In this paper, we introduce a short-unit standard of Chinese word segmentation, which is particularly suitable for machine translation, and propose a semi-automatic method for transforming existing corpora into ones that satisfy our standard. We evaluate the usefulness of our approach on translation tasks from the technology newswire domain and the scientific paper domain, and demonstrate that it significantly improves the performance of Chinese-Japanese machine translation (an increase of over 1.0 BLEU point).
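
The reported BLEU comparison could be reproduced, in spirit, with any standard BLEU tool; the following sketch uses sacrebleu, with the file names and the character-level tokenization choice being our assumptions rather than the authors' evaluation pipeline.

```python
# Sketch only: corpus BLEU for MT outputs under two segmentation settings.
import sacrebleu

def corpus_bleu_score(hyp_path, ref_path):
    hyps = open(hyp_path, encoding="utf-8").read().splitlines()
    refs = open(ref_path, encoding="utf-8").read().splitlines()
    # Character tokenization avoids depending on a Japanese word segmenter at evaluation time.
    return sacrebleu.corpus_bleu(hyps, [refs], tokenize="char").score

baseline = corpus_bleu_score("mt_output.long_unit.ja", "reference.ja")     # hypothetical files
short_unit = corpus_bleu_score("mt_output.short_unit.ja", "reference.ja")  # hypothetical files
print(f"BLEU gain from short-unit segmentation: {short_unit - baseline:+.2f}")
```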

2009

pdf bib
Monolingual knowledge acquisition and a multilingual information environment
Kentaro Torisawa
Proceedings of the 6th International Workshop on Spoken Language Translation: Plenaries

pdf
Bilingual Co-Training for Monolingual Hyponymy-Relation Acquisition
Jong-Hoon Oh | Kiyotaka Uchimoto | Kentaro Torisawa
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

pdf
An Error-Driven Word-Character Hybrid Model for Joint Chinese Word Segmentation and POS Tagging
Canasai Kruengkrai | Kiyotaka Uchimoto | Jun’ichi Kazama | Yiou Wang | Kentaro Torisawa | Hitoshi Isahara
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

pdf
Improving Dependency Parsing with Subtrees from Auto-Parsed Data
Wenliang Chen | Jun’ichi Kazama | Kiyotaka Uchimoto | Kentaro Torisawa
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf
Can Chinese Phonemes Improve Machine Transliteration?: A Comparative Study of English-to-Chinese Transliteration Models
Jong-Hoon Oh | Kiyotaka Uchimoto | Kentaro Torisawa
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf
Hypernym Discovery Based on Distributional Similarity and Hierarchical Structures
Ichiro Yamada | Kentaro Torisawa | Jun’ichi Kazama | Kow Kuroda | Masaki Murata | Stijn De Saeger | Francis Bond | Asuka Sumida
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf
Large-Scale Verb Entailment Acquisition from the Web
Chikara Hashimoto | Kentaro Torisawa | Kow Kuroda | Stijn De Saeger | Masaki Murata | Jun’ichi Kazama
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf
Multilingual Dependency Learning: Exploiting Rich Features for Tagging Syntactic and Semantic Dependencies
Hai Zhao | Wenliang Chen | Jun’ichi Kazama | Kiyotaka Uchimoto | Kentaro Torisawa
Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task

pdf
Machine Transliteration using Target-Language Grapheme and Phoneme: Multi-engine Transliteration Approach
Jong-Hoon Oh | Kiyotaka Uchimoto | Kentaro Torisawa
Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009)

2008

pdf
Looking for Trouble
Stijn De Saeger | Kentaro Torisawa | Jun’ichi Kazama
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

pdf
Hacking Wikipedia for Hyponymy Relation Acquisition
Asuka Sumida | Kentaro Torisawa
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-II

pdf
Boosting Precision and Recall of Hyponymy Relation Acquisition from Hierarchical Layouts in Wikipedia
Asuka Sumida | Naoki Yoshinaga | Kentaro Torisawa
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper proposes an extension of Sumida and Torisawa’s method of acquiring hyponymy relations from hierarchical layouts in Wikipedia (Sumida and Torisawa, 2008). We extract hyponymy relation candidates (HRCs) from the hierarchical layouts in Wikipedia by regarding all subordinate items of an item x in the hierarchical layouts as x’s hyponym candidates, whereas Sumida and Torisawa (2008) extracted only the direct subordinate items of an item x as x’s hyponym candidates. We then select plausible hyponymy relations from the acquired HRCs by running a filter based on machine learning with novel features, which further improves the precision of the resulting hyponymy relations. Experimental results show that we acquired more than 1.34 million hyponymy relations with a precision of 90.1%.
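
The candidate-extraction idea above (every item subordinate to x, not only its direct children, becomes a hyponym candidate of x) can be sketched as follows; the tree representation, function names, and example layout are illustrative assumptions, not the authors' code.

```python
# Sketch only: extracting hyponymy relation candidates from a hierarchical layout.
def hyponymy_candidates(node):
    """node = (item_string, [child_nodes]); returns (hypernym, hyponym) candidate pairs."""
    label, children = node
    pairs = []
    for child in children:
        # All descendants of `label`, not just its direct children, become candidates.
        pairs.extend((label, descendant) for descendant in descendants(child))
        pairs.extend(hyponymy_candidates(child))
    return pairs

def descendants(node):
    label, children = node
    yield label
    for child in children:
        yield from descendants(child)

# A toy fragment of a Wikipedia-style itemization:
layout = ("String instruments", [("Violin", []), ("Viol", [("Viola da gamba", [])])])
print(hyponymy_candidates(layout))
# [('String instruments', 'Violin'), ('String instruments', 'Viol'),
#  ('String instruments', 'Viola da gamba'), ('Viol', 'Viola da gamba')]
```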

pdf
Inducing Gazetteers for Named Entity Recognition by Large-Scale Clustering of Dependency Relations
Jun’ichi Kazama | Kentaro Torisawa
Proceedings of ACL-08: HLT

2007

pdf
A New Perceptron Algorithm for Sequence Labeling with Non-Local Features
Jun’ichi Kazama | Kentaro Torisawa
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

pdf
Exploiting Wikipedia as External Knowledge for Named Entity Recognition
Jun’ichi Kazama | Kentaro Torisawa
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

pdf
Acquiring Inference Rules with Temporal Constraints by Using Japanese Coordinated Sentences and Noun-Verb Co-occurrences
Kentaro Torisawa
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

pdf
Semantic Role Recognition Using Kernels on Weighted Marked Ordered Labeled Trees
Jun’ichi Kazama | Kentaro Torisawa
Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)

2005

pdf
Speeding up Training with Tree Kernels for Node Relation Labeling
Jun’ichi Kazama | Kentaro Torisawa
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

pdf
Automatic Discovery of Attribute Words from Web Documents
Kosuke Tokunaga | Jun’ichi Kazama | Kentaro Torisawa
Second International Joint Conference on Natural Language Processing: Full Papers

2004

pdf
Extracting Hyponyms of Prespecified Hypernyms from Itemizations and Headings in Web Documents
Keiji Shinzato | Kentaro Torisawa
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

pdf
Acquiring Hyponymy Relations from Web Documents
Keiji Shinzato | Kentaro Torisawa
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

pdf
Improving the Identification of Non-Anaphoric it using Support Vector Machines
José Carlos Clemente Litrán | Kenji Satou | Kentaro Torisawa
Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP)

2003

pdf
Comparison between CFG Filtering Techniques for LTAG and HPSG
Naoki Yoshinaga | Kentaro Torisawa | Jun’ichi Tsujii
The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics

2002

pdf
An Unsupervised Learning Method for Associative Relationships between Verb Phrases
Kentaro Torisawa
COLING 2002: The 19th International Conference on Computational Linguistics

2001

pdf
Resource Sharing Amongst HPSG and LTAG Communities by a Method of Grammar Conversion between FB-LTAG and HPSG
Naoki Yoshinaga | Yusuke Miyao | Kentaro Torisawa | Jun’ichi Tsujii
Proceedings of the ACL 2001 Workshop on Sharing Tools and Resources

2000

pdf
A Hybrid Japanese Parser with Hand-crafted Grammar and Statistics
Hiroshi Kanayama | Kentaro Torisawa | Yutaka Mitsuishi | Jun’ichi Tsujii
COLING 2000 Volume 1: The 18th International Conference on Computational Linguistics

1998

pdf
LiLFeS - Towards a Practical HPSG Parser
Takaki Makino | Minoru Yoshida | Kentaro Torisawa | Jun’ichi Tsujii
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

pdf
HPSG-Style Underspecified Japanese Grammar with Wide Coverage
Yutaka Mitsuishi | Kentaro Torisawa | Jun’ichi Tsujii
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

pdf
An Efficient Parallel Substrate for Typed Feature Structures on Shared Memory Parallel Machines
Takashi Ninomiya | Kentaro Torisawa | Jun’ichi Tsujii
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

pdf
Packing of feature structures for optimizing the HPSG-style grammar translated from TAG
Yusuke Miyao | Kentaro Torisawa | Yuka Tateisi | Jun’ichi Tsujii
Proceedings of the Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4)

pdf
Translating the XTAG English grammar to HPSG
Yuka Tateisi | Kentaro Torisawa | Yusuke Miyao | Jun’ichi Tsujii
Proceedings of the Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4)

pdf
LiLFeS - Towards a Practical HPSG Parser
Takaki Makino | Minoru Yoshida | Kentaro Torisawa | Jun’ichi Tsujii
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

pdf
HPSG-Style Underspecified Japanese Grammar with Wide Coverage
Yutaka Mitsuishi | Kentaro Torisawa | Jun’ichi Tsujii
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

pdf
An Efficient Parallel Substrate for Typed Feature Structures on Shared Memory Parallel Machines
Takashi Ninomiya | Kentaro Torisawa | Jun’ichi Tsujii
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

1996

pdf
Computing Phrasal-signs in HPSG prior to Parsing
Kentaro Torisawa | Jun’ichi Tsujii
COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics

1995

pdf
An HPSG-based Parser for Automatic Knowledge Acquisition
Kentaro Torisawa | Jun’ichi Tsujii
Proceedings of the Fourth International Workshop on Parsing Technologies