Takashi Ninomiya


2024

pdf
Transfer Fine-tuning for Quality Estimation of Text Simplification
Yuki Hironaka | Tomoyuki Kajiwara | Takashi Ninomiya
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

To efficiently train quality estimation of text simplification on a small-scale labeled corpus, we train sentence difficulty estimation prior to fine-tuning pre-trained language models. Our proposed method improves quality estimation of text simplification in the framework of transfer fine-tuning, in which pre-trained language models can improve performance on the target task through additional training on a relevant task prior to fine-tuning. Since the labeled corpus for quality estimation of text simplification is small (600 sentence pairs), an efficient training method is desired. Therefore, we propose a training method for pseudo quality estimation that does not require labels for quality estimation. As a task relevant to quality estimation of text simplification, we train sentence difficulty estimation: a binary classification task that identifies which of two sentences is simpler, using an existing parallel corpus for text simplification. Experimental results on quality estimation of English text simplification showed that the proposed method improves not only the quality estimation performance for simplicity, on which the model was trained, but in some cases also the performance for fluency and meaning preservation.
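The difficulty-estimation pre-task can be simulated without quality labels. Below is a minimal sketch, assuming a list of (complex, simple) sentence pairs from an existing simplification parallel corpus; the function and variable names are illustrative, not taken from the paper.

```python
import random

def build_difficulty_pairs(parallel_corpus, seed=0):
    """Create pseudo-labeled examples for sentence difficulty estimation.

    parallel_corpus: list of (complex_sentence, simple_sentence) tuples.
    Returns a list of ((sent_a, sent_b), label), where label = 0 if sent_a is
    the simpler sentence and 1 if sent_b is the simpler sentence.
    """
    rng = random.Random(seed)
    examples = []
    for complex_sent, simple_sent in parallel_corpus:
        if rng.random() < 0.5:
            examples.append(((simple_sent, complex_sent), 0))
        else:
            examples.append(((complex_sent, simple_sent), 1))
    return examples

corpus = [("The committee deliberated at considerable length.",
           "The committee talked for a long time.")]
print(build_difficulty_pairs(corpus))
```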

pdf
Utilizing Longer Context than Speech Bubbles in Automated Manga Translation
Hiroto Kaino | Soichiro Sugihara | Tomoyuki Kajiwara | Takashi Ninomiya | Joshua B. Tanner | Shonosuke Ishiwatari
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

This paper focuses on improving the performance of machine translation for manga (Japanese-style comics). In manga machine translation, text consists of a sequence of speech bubbles and each speech bubble is translated individually. However, each speech bubble itself does not contain sufficient information for translation. Therefore, previous work has proposed methods to use contextual information, such as the previous speech bubble, speech bubbles within the same scene, and corresponding scene images. In this research, we propose two new approaches to capture broader contextual information. Our first approach involves scene-based translation that considers the previous scene. The second approach considers broader context information, including details about the work, author, and manga genre. Through our experiments, we confirm that each of our methods improves translation quality, with the combination of both methods achieving the highest quality. Additionally, detailed analysis reveals the effect of zero-anaphora resolution in translation, such as supplying missing subjects not mentioned within a scene, highlighting the usefulness of longer contextual information in manga machine translation.

2023

pdf
Multimodal Neural Machine Translation Using Synthetic Images Transformed by Latent Diffusion Model
Ryoya Yuasa | Akihiro Tamura | Tomoyuki Kajiwara | Takashi Ninomiya | Tsuneo Kato
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)

This study proposes a new multimodal neural machine translation (MNMT) model using synthetic images transformed by a latent diffusion model. MNMT translates a source language sentence based on its related image, but the image usually contains noisy information that is not relevant to the source language sentence. Our proposed method first generates a synthetic image corresponding to the content of the source language sentence by using a latent diffusion model and then performs translation based on the synthetic image. Experiments on the English-German translation tasks using the Multi30k dataset demonstrate the effectiveness of the proposed method.
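As a rough illustration of this pipeline, the sketch below synthesizes an image from the source sentence with an off-the-shelf latent diffusion model and hands it to a placeholder MNMT function; the model identifier and translate_with_image are assumptions for the example, not the paper's implementation.

```python
from diffusers import StableDiffusionPipeline  # any latent diffusion model

def translate_with_image(src_sentence, image):
    """Placeholder for an MNMT model that consumes a sentence and an image."""
    return "<translation conditioned on the synthetic image>"  # stand-in output

# Load a publicly available latent diffusion model (identifier is illustrative).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

src = "A man in an orange hat is staring at something."
# Step 1: synthesize an image that depicts the source sentence.
synthetic_image = pipe(src).images[0]
# Step 2: translate the source sentence conditioned on the synthetic image.
print(translate_with_image(src, synthetic_image))
```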

pdf
Distractor Generation for Fill-in-the-Blank Exercises by Question Type
Nana Yoshimi | Tomoyuki Kajiwara | Satoru Uchida | Yuki Arase | Takashi Ninomiya
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)

This study addresses the automatic generation of distractors for English fill-in-the-blank exercises in the entrance examinations for Japanese universities. While previous studies applied the same method to all questions, actual entrance examinations have multiple question types that reflect the purpose of the questions. Therefore, we define three types of questions (grammar, function word, and context) and propose a method to generate distractors according to the characteristics of each question type. Experimental results on 500 actual questions show the effectiveness of the proposed method for both automatic and manual evaluation.

pdf
Automated Orthodontic Diagnosis from a Summary of Medical Findings
Takumi Ohtsuka | Tomoyuki Kajiwara | Chihiro Tanikawa | Yuujin Shimizu | Hajime Nagahara | Takashi Ninomiya
Proceedings of the 5th Clinical Natural Language Processing Workshop

We propose a method to automate orthodontic diagnosis with natural language processing. It is worthwhile to assist dentists with such technology to prevent errors by inexperienced dentists and to reduce the workload of experienced ones. However, inconsistencies in text length and style in medical findings make automated orthodontic diagnosis with deep-learning models difficult. In this study, we improve the performance of automatic diagnosis by utilizing short summaries of medical findings written in a consistent style by experienced dentists. Experimental results on 970 Japanese medical findings show that summarization consistently improves the performance of various machine learning models for automated orthodontic diagnosis. Although BERT benefits the most from the proposed method, the convolutional neural network achieves the best overall performance.

pdf bib
Mitigating Domain Mismatch in Machine Translation via Paraphrasing
Hyuga Koretaka | Tomoyuki Kajiwara | Atsushi Fujita | Takashi Ninomiya
Proceedings of the 10th Workshop on Asian Translation

Quality of machine translation (MT) deteriorates significantly when translating texts whose characteristics differ from the training data, such as their content domain. Although previous studies have focused on adapting MT models with a bilingual parallel corpus in the target domain, this approach is not applicable when no parallel data are available for the target domain or when utilizing black-box MT systems. To mitigate problems caused by such domain mismatch without relying on any corpus in the target domain, this study proposes a method to search for better translations by paraphrasing the input texts of MT. To obtain better translations even for input texts from unforeseen domains, we generate multiple paraphrases of each input, translate each paraphrase, and rerank the resulting translations to select the most likely one. Experimental results on Japanese-to-English translation reveal that the proposed method improves translation quality in terms of BLEU score for input texts from specific domains.
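A minimal sketch of the paraphrase-translate-rerank loop follows, with the paraphraser, black-box MT system, and reranking scorer left as hypothetical stubs; the paper's actual components are not reproduced here.

```python
def generate_paraphrases(text, n=5):
    """Hypothetical paraphraser: return n paraphrases of the input text."""
    return [text] * n  # stand-in: a real system would return diverse rewrites

def black_box_mt(text):
    """Hypothetical black-box MT system (e.g., a web API)."""
    return text  # stand-in translation

def score_translation(translation):
    """Hypothetical reranker, e.g., a QE model or a target-side LM score."""
    return -len(translation)  # stand-in: prefer shorter outputs

def translate_via_paraphrasing(src, n=5):
    candidates = [src] + generate_paraphrases(src, n)
    translations = [black_box_mt(p) for p in candidates]
    # Rerank the candidate translations and return the most likely one.
    return max(translations, key=score_translation)

print(translate_via_paraphrasing("The patient presented with acute dyspnea."))
```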

2022

pdf
Controllable Text Simplification with Deep Reinforcement Learning
Daiki Yanamoto | Tomoki Ikawa | Tomoyuki Kajiwara | Takashi Ninomiya | Satoru Uchida | Yuki Arase
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

We propose a method for controlling the difficulty of a sentence based on deep reinforcement learning. Although existing models are trained based on word-level difficulty, sentence-level difficulty has not been taken into account in the loss function. Our proposed method generates sentences of appropriate difficulty for the target audience through reinforcement learning, using a reward calculated from the difference between the difficulty of the output sentence and the target difficulty. Experimental results on English text simplification show that the proposed method achieves higher performance than existing approaches. Compared to previous studies, the proposed method can generate sentences whose grade levels are closer to those of human references, as estimated using a fine-tuned pre-trained model.
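The reward described above can be written compactly. The sketch below assumes a sentence-level grade estimator (here a stub) and defines the reward as the negative absolute difference between the estimated and target grade levels; all names are illustrative.

```python
def estimate_grade_level(sentence):
    """Hypothetical sentence-level difficulty estimator (e.g., a fine-tuned
    pre-trained model returning a U.S. school grade level)."""
    return 8.0  # stand-in value

def difficulty_reward(output_sentence, target_grade):
    """Reward is higher when the output's difficulty is closer to the target."""
    predicted = estimate_grade_level(output_sentence)
    return -abs(predicted - target_grade)

print(difficulty_reward("The cat sat on the mat.", target_grade=3))
```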

pdf bib
Emotional Intensity Estimation based on Writer’s Personality
Haruya Suzuki | Sora Tarumoto | Tomoyuki Kajiwara | Takashi Ninomiya | Yuta Nakashima | Hajime Nagahara
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop

We propose a method for personalized emotional intensity estimation based on a writer’s personality test for Japanese SNS posts. It is difficult for existing emotion analysis models to accurately estimate the writer’s subjective emotions behind the text. We therefore personalize the emotion analysis using not only the text but also the writer’s personality information. Experimental results show that personality information improves the performance of emotional intensity estimation. Furthermore, a hybrid model combining the existing personalized method with ours achieved state-of-the-art performance.

pdf bib
Parallel Corpus Filtering for Japanese Text Simplification
Koki Hatagaki | Tomoyuki Kajiwara | Takashi Ninomiya
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)

We propose a method of parallel corpus filtering for Japanese text simplification. The parallel corpus for this task contains some redundant wording. In this study, we first identify the type and size of noisy sentence pairs in the Japanese text simplification corpus. We then propose a method of parallel corpus filtering to remove each type of noisy sentence pair. Experimental results show that filtering the training parallel corpus with the proposed method improves simplification performance.

pdf
A Benchmark Dataset for Multi-Level Complexity-Controllable Machine Translation
Kazuki Tani | Ryoya Yuasa | Kazuki Takikawa | Akihiro Tamura | Tomoyuki Kajiwara | Takashi Ninomiya | Tsuneo Kato
Proceedings of the Thirteenth Language Resources and Evaluation Conference

This paper presents a new benchmark test dataset for multi-level complexity-controllable machine translation (MLCC-MT), i.e., MT that controls the complexity of the output at more than two levels. In previous research, MLCC-MT models have been evaluated on a test dataset automatically constructed from the Newsela corpus, which is a document-level comparable corpus with document-level complexity. The existing test dataset has the following three problems: (i) A source language sentence and its target language sentence are not necessarily an exact translation pair because they are automatically detected. (ii) A target language sentence and its simplified target language sentence are not necessarily exactly parallel because they are automatically aligned. (iii) Sentence-level complexity is not necessarily appropriate because it is transferred from the article-level complexity attached to the Newsela corpus. Therefore, we create a benchmark test dataset for Japanese-to-English MLCC-MT from the Newsela corpus by introducing automatic filtering of data with inappropriate sentence-level complexity, manual checking of parallel target language sentences with different complexity levels, and manual translation. Moreover, we implement two MLCC-NMT frameworks with a Transformer architecture and report their performance on our test dataset as baselines for future research. Our test dataset and code are released.

pdf
A Japanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain
Haruya Suzuki | Yuto Miyauchi | Kazuki Akiyama | Tomoyuki Kajiwara | Takashi Ninomiya | Noriko Takemura | Yuta Nakashima | Hajime Nagahara
Proceedings of the Thirteenth Language Resources and Evaluation Conference

We annotate 35,000 SNS posts with both the writer’s subjective sentiment polarity labels and the reader’s objective ones to construct a Japanese sentiment analysis dataset. Our dataset includes intensity labels (none, weak, medium, and strong) for each of the eight basic emotions by Plutchik (joy, sadness, anticipation, surprise, anger, fear, disgust, and trust) as well as sentiment polarity labels (strong positive, positive, neutral, negative, and strong negative). Previous studies on emotion analysis have studied the analysis of basic emotions and sentiment polarity independently. In other words, there are few corpora that are annotated with both basic emotions and sentiment polarity. Our dataset is the first large-scale corpus to annotate both of these emotion labels, and from both the writer’s and reader’s perspectives. In this paper, we analyze the relationship between basic emotion intensity and sentiment polarity on our dataset and report the results of benchmarking sentiment polarity classification.

pdf
A Japanese Masked Language Model for Academic Domain
Hiroki Yamauchi | Tomoyuki Kajiwara | Marie Katsurai | Ikki Ohmukai | Takashi Ninomiya
Proceedings of the Third Workshop on Scholarly Document Processing

We release a pretrained Japanese masked language model for an academic domain. Pretrained masked language models have recently improved the performance of various natural language processing applications. In domains such as medical and academic, which include a lot of technical terms, domain-specific pretraining is effective. While domain-specific masked language models for medical and SNS domains are widely used in Japanese, along with domain-independent ones, pretrained models specific to the academic domain are not publicly available. In this study, we pretrained a RoBERTa-based Japanese masked language model on paper abstracts from the academic database CiNii Articles. Experimental results on Japanese text classification in the academic domain revealed the effectiveness of the proposed model over existing pretrained models.

pdf bib
Comparing BERT-based Reward Functions for Deep Reinforcement Learning in Machine Translation
Yuki Nakatani | Tomoyuki Kajiwara | Takashi Ninomiya
Proceedings of the 9th Workshop on Asian Translation

In text generation tasks such as machine translation, models are generally trained using cross-entropy loss. However, mismatches between the loss function and the evaluation metric are often problematic. It is known that this problem can be addressed by directly optimizing the evaluation metric with reinforcement learning. In machine translation, previous studies have used BLEU to calculate rewards for reinforcement learning, but BLEU is not well correlated with human evaluation. In this study, we investigate the impact on machine translation quality of reinforcement learning based on evaluation metrics that are more highly correlated with human evaluation. Experimental results show that reinforcement learning with BERT-based rewards can improve various evaluation metrics.
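As an illustration of a BERT-based reward, the sketch below scores a sampled translation against the reference with BERTScore and uses the F1 value as the reinforcement-learning reward; treating BERTScore as the reward metric is an assumption for this example, not necessarily the exact metric used in the paper.

```python
from bert_score import score  # pip install bert-score

def bert_reward(hypothesis, reference, lang="en"):
    """Return a BERTScore-F1 reward for a single hypothesis/reference pair."""
    P, R, F1 = score([hypothesis], [reference], lang=lang, verbose=False)
    return F1.item()

reward = bert_reward("The cat sits on the mat.", "A cat is sitting on the mat.")
print(f"reward = {reward:.3f}")
```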

pdf
Adversarial Training on Disentangling Meaning and Language Representations for Unsupervised Quality Estimation
Yuto Kuroda | Tomoyuki Kajiwara | Yuki Arase | Takashi Ninomiya
Proceedings of the 29th International Conference on Computational Linguistics

We propose a method to distill language-agnostic meaning embeddings from multilingual sentence encoders for unsupervised quality estimation of machine translation. Our method encourages the meaning embeddings to focus on semantics through adversarial training that attempts to eliminate language-specific information. Experimental results on unsupervised quality estimation reveal that our method achieved higher correlations with human evaluations.
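A common way to realize such an adversarial objective is a gradient reversal layer in front of a language classifier. The PyTorch sketch below shows this pattern with toy dimensions; the paper's exact architecture and loss weighting are not reproduced.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

meaning_proj = nn.Linear(768, 256)        # projects encoder output to a meaning space
language_clf = nn.Linear(256, 2)          # tries to predict the language (e.g., en/de)

sent_emb = torch.randn(4, 768)            # multilingual sentence embeddings (toy)
lang_labels = torch.tensor([0, 1, 0, 1])  # language ids

meaning = meaning_proj(sent_emb)
# The classifier is trained to detect the language, but the reversed gradient
# pushes the meaning embeddings to discard language-specific information.
logits = language_clf(GradReverse.apply(meaning, 1.0))
adv_loss = nn.functional.cross_entropy(logits, lang_labels)
adv_loss.backward()
```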

2021

pdf
Hie-BART: Document Summarization with Hierarchical BART
Kazuki Akiyama | Akihiro Tamura | Takashi Ninomiya
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop

This paper proposes a new abstractive document summarization model, hierarchical BART (Hie-BART), which captures the hierarchical structure of a document (i.e., its sentence-word structure) in the BART model. Although the existing BART model has achieved state-of-the-art performance on document summarization tasks, it does not model interactions between sentence-level and word-level information. In machine translation tasks, the performance of neural machine translation models has been improved by incorporating multi-granularity self-attention (MG-SA), which captures the relationships between words and phrases. Inspired by this work, the proposed Hie-BART model incorporates MG-SA into the encoder of the BART model to capture sentence-word structures. Evaluations on the CNN/Daily Mail dataset show that the proposed Hie-BART model outperforms some strong baselines and improves the performance of a non-hierarchical BART model (+0.23 ROUGE-L).

pdf
Utterance Position-Aware Dialogue Act Recognition
Yuki Yano | Akihiro Tamura | Takashi Ninomiya | Hiroaki Obayashi
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)

This study proposes an utterance position-aware approach for a neural network-based dialogue act recognition (DAR) model, which incorporates positional encoding of an utterance’s absolute or relative position. The proposed approach is inspired by the observation that some dialogue acts tend to occur at particular positions in a dialogue. Evaluations on the Switchboard corpus show that the proposed positional encoding of utterances statistically significantly improves the performance of DAR.
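The sketch below adds a sinusoidal encoding of each utterance's absolute position in the dialogue to its utterance vector before dialogue act classification; the dimensions and the choice of a Transformer-style sinusoidal encoding are illustrative assumptions.

```python
import math
import torch

def utterance_positional_encoding(num_utterances, dim):
    """Sinusoidal encoding of absolute utterance positions (Transformer-style)."""
    pe = torch.zeros(num_utterances, dim)
    position = torch.arange(num_utterances, dtype=torch.float).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

utterance_vectors = torch.randn(12, 128)  # 12 utterances in a dialogue (toy values)
utterance_vectors = utterance_vectors + utterance_positional_encoding(12, 128)
```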

pdf
Grammatical Error Correction via Supervised Attention in the Vicinity of Errors
Hiromichi Ishii | Akihiro Tamura | Takashi Ninomiya
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation

pdf
Synchronous Syntactic Attention for Transformer Neural Machine Translation
Hiroyuki Deguchi | Akihiro Tamura | Takashi Ninomiya
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop

This paper proposes a novel attention mechanism for Transformer Neural Machine Translation, “Synchronous Syntactic Attention,” inspired by synchronous dependency grammars. The mechanism synchronizes source-side and target-side syntactic self-attentions by minimizing the difference between target-side self-attentions and the source-side self-attentions mapped by the encoder-decoder attention matrix. The experiments show that the proposed method improves the translation performance on WMT14 En-De, WMT16 En-Ro, and ASPEC Ja-En (up to +0.38 points in BLEU).
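One way to read the synchronization objective is as a penalty between the target-side self-attention and the source-side self-attention mapped through the encoder-decoder attention. The toy sketch below computes such a penalty with a mean-squared error; this mapping and loss are an illustrative interpretation, not the paper's exact formulation.

```python
import torch

tgt_len, src_len = 5, 7
tgt_self_attn = torch.softmax(torch.randn(tgt_len, tgt_len), dim=-1)  # target self-attention
src_self_attn = torch.softmax(torch.randn(src_len, src_len), dim=-1)  # source self-attention
cross_attn = torch.softmax(torch.randn(tgt_len, src_len), dim=-1)     # encoder-decoder attention

# Map the source-side self-attention into the target space via the cross-attention,
# then penalize its difference from the target-side self-attention.
mapped = cross_attn @ src_self_attn @ cross_attn.transpose(0, 1)
sync_loss = torch.mean((tgt_self_attn - mapped) ** 2)
print(sync_loss.item())
```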

2020

pdf
A Visually-Grounded Parallel Corpus with Phrase-to-Region Linking
Hideki Nakayama | Akihiro Tamura | Takashi Ninomiya
Proceedings of the Twelfth Language Resources and Evaluation Conference

Visually-grounded natural language processing has become an important research direction in the past few years. However, the majority of available cross-modal resources (e.g., image-caption datasets) are built in English and cannot be directly utilized in multilingual or non-English scenarios. In this study, we present a novel multilingual multimodal corpus by extending the Flickr30k Entities image-caption dataset with Japanese translations, which we name Flickr30k Entities JP (F30kEnt-JP). To the best of our knowledge, this is the first multilingual image-caption dataset in which the captions in the two languages are parallel and share annotations of many-to-many phrase-to-region linking. We believe that phrase-to-region as well as phrase-to-phrase supervision can play a vital role in fine-grained grounding of language and vision, and will promote many tasks such as multilingual image captioning and multimodal machine translation. To verify our dataset, we performed phrase localization experiments in both languages and investigated the effectiveness of our Japanese annotations as well as the multilingual learning realized by our dataset.

pdf
Transformer-based Approach for Predicting Chemical Compound Structures
Yutaro Omote | Kyoumoto Matsushita | Tomoya Iwakura | Akihiro Tamura | Takashi Ninomiya
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

By predicting chemical compound structures from their names, we can better comprehend chemical compounds written in text and identify the same chemical compound given different notations for database creation. Previous methods have predicted chemical compound structures from their names and represented them by Simplified Molecular Input Line Entry System (SMILES) strings. However, these methods mainly apply handcrafted rules and cannot predict the structures of chemical compound names not covered by the rules. Instead of handcrafted rules, we propose Transformer-based models that predict SMILES strings from chemical compound names. We improve the conventional Transformer-based model by introducing two features: (1) a loss function that constrains the number of atoms of each element in the structure, and (2) a multi-task learning approach that predicts both SMILES strings and InChI strings (another string representation of chemical compound structures). In evaluation experiments, our methods achieved higher F-measures than previous rule-based approaches (Open Parser for Systematic IUPAC Nomenclature and two commercially used products) and the conventional Transformer-based model. We release the dataset used in this paper as a benchmark for future research.
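The atom-count constraint can be illustrated by counting element symbols in the predicted and reference SMILES strings and penalizing the mismatch. The regular expression and penalty below are a simplification (SMILES parsing is more involved; bracket atoms and aromatic lowercase symbols are only roughly handled) and are not the paper's loss implementation.

```python
import re
from collections import Counter

ATOM_PATTERN = re.compile(r"\[[^\]]+\]|Cl|Br|[BCNOSPFI]|[bcnops]")

def count_atoms(smiles):
    """Rough per-element atom counts from a SMILES string (simplified)."""
    counts = Counter()
    for token in ATOM_PATTERN.findall(smiles):
        element = token.strip("[]").rstrip("+-0123456789@H")  # crude normalization
        counts[element.capitalize()] += 1
    return counts

def atom_count_penalty(predicted, reference):
    """Sum of absolute per-element count differences between two SMILES strings."""
    pred, ref = count_atoms(predicted), count_atoms(reference)
    return sum(abs(pred[e] - ref[e]) for e in set(pred) | set(ref))

print(atom_count_penalty("CC(=O)O", "CCO"))  # acetic acid vs. ethanol -> penalty 1
```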

pdf
Bilingual Subword Segmentation for Neural Machine Translation
Hiroyuki Deguchi | Masao Utiyama | Akihiro Tamura | Takashi Ninomiya | Eiichiro Sumita
Proceedings of the 28th International Conference on Computational Linguistics

This paper proposed a new subword segmentation method for neural machine translation, “Bilingual Subword Segmentation,” which tokenizes sentences to minimize the difference between the number of subword units in a sentence and that of its translation. While existing subword segmentation methods tokenize a sentence without considering its translation, the proposed method tokenizes a sentence by using subword units induced from bilingual sentences; this method could be more favorable to machine translation. Evaluations on WAT Asian Scientific Paper Excerpt Corpus (ASPEC) English-to-Japanese and Japanese-to-English translation tasks and WMT14 English-to-German and German-to-English translation tasks show that our bilingual subword segmentation improves the performance of Transformer neural machine translation (up to +0.81 BLEU).
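The segmentation criterion can be illustrated as choosing, among candidate segmentations of a sentence and of its translation, the pair whose token counts differ the least. The candidate generator below is a stub (a real system would, for example, draw n-best segmentations from a unigram subword model); it is a sketch of the selection idea, not the paper's algorithm.

```python
def candidate_segmentations(sentence):
    """Hypothetical n-best subword segmentations of a sentence."""
    words = sentence.split()
    coarse = words                             # word-level segmentation
    fine = [c for w in words for c in w]       # character-level segmentation
    return [coarse, fine]

def bilingual_segment(src_sentence, tgt_sentence):
    """Pick the (source, target) segmentation pair with the closest token counts."""
    best = None
    for src_seg in candidate_segmentations(src_sentence):
        for tgt_seg in candidate_segmentations(tgt_sentence):
            diff = abs(len(src_seg) - len(tgt_seg))
            if best is None or diff < best[0]:
                best = (diff, src_seg, tgt_seg)
    return best[1], best[2]

print(bilingual_segment("machine translation", "maschinelle Übersetzung"))
```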

pdf
Supervised Visual Attention for Multimodal Neural Machine Translation
Tetsuro Nishihara | Akihiro Tamura | Takashi Ninomiya | Yutaro Omote | Hideki Nakayama
Proceedings of the 28th International Conference on Computational Linguistics

This paper proposed a supervised visual attention mechanism for multimodal neural machine translation (MNMT), trained with constraints based on manual alignments between words in a sentence and their corresponding regions of an image. The proposed visual attention mechanism captures the relationship between a word and an image region more precisely than a conventional visual attention mechanism trained through MNMT in an unsupervised manner. Our experiments on English-German and German-English translation tasks using the Multi30k dataset and on English-Japanese and Japanese-English translation tasks using the Flickr30k Entities JP dataset show that a Transformer-based MNMT model can be improved by incorporating our proposed supervised visual attention mechanism and that further improvements can be achieved by combining it with a supervised cross-lingual attention mechanism (up to +1.61 BLEU, +1.7 METEOR).

2019

pdf
Dependency-Based Self-Attention for Transformer NMT
Hiroyuki Deguchi | Akihiro Tamura | Takashi Ninomiya
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)

In this paper, we propose a new Transformer neural machine translation (NMT) model that incorporates dependency relations into self-attention on both the source and target sides: dependency-based self-attention. The dependency-based self-attention is trained to attend to the modifiee of each token under constraints based on the dependency relations, inspired by Linguistically-Informed Self-Attention (LISA). While LISA was originally proposed for the Transformer encoder in semantic role labeling, this paper extends LISA to Transformer NMT by masking future information on words in the decoder-side dependency-based self-attention. Additionally, our dependency-based self-attention operates on sub-word units created by byte pair encoding. The experiments show that our model improves BLEU by 1.0 point over the baseline model on the WAT’18 Asian Scientific Paper Excerpt Corpus Japanese-to-English translation task.
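Training one attention head to point at each token's modifiee can be sketched as a cross-entropy loss between that head's attention distribution and the dependency heads. The toy tensors below illustrate this; the paper additionally applies future masking on the decoder side and works on BPE sub-word units, which this sketch omits.

```python
import torch
import torch.nn.functional as F

seq_len = 5
# Attention logits of the dependency-supervised head (toy values).
attn_logits = torch.randn(seq_len, seq_len, requires_grad=True)
# heads[i] = index of the modifiee (syntactic head) of token i; the root points to itself.
heads = torch.tensor([1, 1, 4, 4, 4])

# Cross-entropy encourages row i of the attention to put its mass on heads[i].
dep_attn_loss = F.cross_entropy(attn_logits, heads)
dep_attn_loss.backward()
print(dep_attn_loss.item())
```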

pdf
Dependency-Based Relative Positional Encoding for Transformer NMT
Yutaro Omote | Akihiro Tamura | Takashi Ninomiya
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)

This paper proposes a new Transformer neural machine translation model that incorporates syntactic distances between two source words into the relative position representations of the self-attention mechanism. In particular, the proposed model encodes pair-wise relative depths on a source dependency tree, which are differences between the depths of the two source words, in the encoder’s self-attention. The experiments show that our proposed model achieves a 0.5-point gain in BLEU on the Asian Scientific Paper Excerpt Corpus Japanese-to-English translation task.
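The pair-wise relative depths can be computed directly from a head-indexed dependency tree. The small helper below derives each token's depth and then the matrix of depth differences used as relative position information; it is an illustrative sketch, not the paper's code.

```python
def token_depths(heads):
    """heads[i] is the parent index of token i; the root has itself as parent."""
    def depth(i):
        return 0 if heads[i] == i else 1 + depth(heads[i])
    return [depth(i) for i in range(len(heads))]

def relative_depth_matrix(heads):
    """matrix[i][j] = depth(i) - depth(j) on the source dependency tree."""
    d = token_depths(heads)
    return [[d[i] - d[j] for j in range(len(d))] for i in range(len(d))]

# Example: "she reads books" with "reads" as root.
heads = [1, 1, 1]
print(relative_depth_matrix(heads))
```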

pdf
Multi-Task Learning for Chemical Named Entity Recognition with Chemical Compound Paraphrasing
Taiki Watanabe | Akihiro Tamura | Takashi Ninomiya | Takuya Makino | Tomoya Iwakura
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We propose a method to improve named entity recognition (NER) for chemical compounds using multi-task learning by jointly training a chemical NER model and a chemical compound paraphrase model. Our method enables the long short-term memory (LSTM) of the NER model to capture chemical compound paraphrases by sharing the parameters of the LSTM and character embeddings between the two models. The experimental results on the BioCreative IV’s CHEMDNER task show that our method improves chemical NER and achieves state-of-the-art performance.

2018

pdf
Neural Machine Translation Incorporating Named Entity
Arata Ugawa | Akihiro Tamura | Takashi Ninomiya | Hiroya Takamura | Manabu Okumura
Proceedings of the 27th International Conference on Computational Linguistics

This study proposes a new neural machine translation (NMT) model based on the encoder-decoder model that incorporates named entity (NE) tags of source-language sentences. Conventional NMT models have the following two problems: (i) they tend to have difficulty translating words with multiple meanings because of their high ambiguity, and (ii) they struggle to translate compound words because the encoder receives only a word, i.e., a part of the compound word, at each time step. To alleviate these problems, the encoder of the proposed model encodes the input word on the basis of its NE tag at each time step, which could reduce the ambiguity of the input word. Furthermore, the encoder introduces a chunk-level LSTM layer over a word-level LSTM layer and hierarchically encodes a source-language sentence to capture a compound NE as a chunk on the basis of the NE tags. We evaluate the proposed model on an English-to-Japanese translation task with the ASPEC, and on English-to-Bulgarian and English-to-Romanian translation tasks with the Europarl corpus. The evaluation results show that the proposed model achieves up to a 3.11-point improvement in BLEU.

2017

pdf bib
CKY-based Convolutional Attention for Neural Machine Translation
Taiki Watanabe | Akihiro Tamura | Takashi Ninomiya
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

This paper proposes a new attention mechanism for neural machine translation (NMT) based on convolutional neural networks (CNNs), which is inspired by the CKY algorithm. The proposed attention represents every possible combination of source words (e.g., phrases and structures) through CNNs, which imitates the CKY table in the algorithm. NMT, incorporating the proposed attention, decodes a target sentence on the basis of the attention scores of the hidden states of CNNs. The proposed attention enables NMT to capture alignments from underlying structures of a source sentence without sentence parsing. The evaluations on the Asian Scientific Paper Excerpt Corpus (ASPEC) English-Japanese translation task show that the proposed attention gains 0.66 points in BLEU.

2016

pdf
Domain Specific Named Entity Recognition Referring to the Real World by Deep Neural Networks
Suzushi Tomori | Takashi Ninomiya | Shinsuke Mori
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2015

pdf
Acquiring distributed representations for verb-object pairs by using word2vec
Miki Iwai | Takashi Ninomiya | Kyo Kageura
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation: Posters

pdf
Resampling approach for instance-based domain adaptation from patent domain to newspaper domain in statistical machine translation
Keisuke Noguchi | Takashi Ninomiya
Proceedings of the 6th Workshop on Patent and Scientific Literature Translation

2009

pdf
Deterministic Shift-Reduce Parsing for Unification-Based Grammars by Using Default Unification
Takashi Ninomiya | Takuya Matsuzaki | Nobuyuki Shimizu | Hiroshi Nakagawa
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)

2007

pdf
A log-linear model with an n-gram reference distribution for accurate HPSG parsing
Takashi Ninomiya | Takuya Matsuzaki | Yusuke Miyao | Jun’ichi Tsujii
Proceedings of the Tenth International Conference on Parsing Technologies

2006

pdf
Semantic Retrieval for the Accurate Identification of Relational Concepts in Massive Textbases
Yusuke Miyao | Tomoko Ohta | Katsuya Masuda | Yoshimasa Tsuruoka | Kazuhiro Yoshida | Takashi Ninomiya | Jun’ichi Tsujii
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

pdf
Trimming CFG Parse Trees for Sentence Compression Using Machine Learning Approaches
Yuya Unno | Takashi Ninomiya | Yusuke Miyao | Jun’ichi Tsujii
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

pdf
An Intelligent Search Engine and GUI-based Efficient MEDLINE Search Tool Based on Deep Syntactic Parsing
Tomoko Ohta | Yusuke Miyao | Takashi Ninomiya | Yoshimasa Tsuruoka | Akane Yakushiji | Katsuya Masuda | Jumpei Takeuchi | Kazuhiro Yoshida | Tadayoshi Hara | Jin-Dong Kim | Yuka Tateisi | Jun’ichi Tsujii
Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions

pdf
Extremely Lexicalized Models for Accurate and Fast HPSG Parsing
Takashi Ninomiya | Takuya Matsuzaki | Yoshimasa Tsuruoka | Yusuke Miyao | Jun’ichi Tsujii
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

2005

pdf
Efficacy of Beam Thresholding, Unification Filtering and Hybrid Parsing in Probabilistic HPSG Parsing
Takashi Ninomiya | Yoshimasa Tsuruoka | Yusuke Miyao | Jun’ichi Tsujii
Proceedings of the Ninth International Workshop on Parsing Technology

2003

pdf
A Robust Retrieval Engine for Proximal and Structural Search
Katsuya Masuda | Takashi Ninomiya | Yusuke Miyao | Tomoko Ohta | Jun’ichi Tsujii
Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers

pdf
Lexicalized Grammar Acquisition
Yusuke Miyao | Takashi Ninomiya | Jun’ichi Tsujii
10th Conference of the European Chapter of the Association for Computational Linguistics

2002

pdf
Lenient Default Unification for Robust Processing within Unification Based Grammar Formalisms
Takashi Ninomiya | Yusuke Miyao | Jun-Ichi Tsujii
COLING 2002: The 19th International Conference on Computational Linguistics

pdf
An Indexing Scheme for Typed Feature Structures
Takashi Ninomiya | Takaki Makino | Jun-Ichi Tsujii
COLING 2002: The 19th International Conference on Computational Linguistics: Project Notes

1998

pdf
An Efficient Parallel Substrate for Typed Feature Structures on Shared Memory Parallel Machines
Takashi Ninomiya | Kentaro Torisawa | Jun’ichi Tsujii
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

pdf
An Efficient Parallel Substrate for Typed Feature Structures on Shared Memory Parallel Machines
Takashi Ninomiya | Kentaro Torisawa | Jun’ichi Tsujii
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2