Yutaka Sasaki


2024

Enhancing Syllabic Component Classification in Japanese Sign Language by Pre-training on Non-Japanese Sign Language Data
Jundai Inoue | Makoto Miwa | Yutaka Sasaki | Daisuke Hara
Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources

2023

Distantly Supervised Document-Level Biomedical Relation Extraction with Neighborhood Knowledge Graphs
Takuma Matsubara | Makoto Miwa | Yutaka Sasaki
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks

We propose a novel distantly supervised document-level biomedical relation extraction model that uses partial knowledge graphs covering the graph neighborhood of the entities appearing in each input document. Most conventional distantly supervised relation extraction methods use only the entity relations automatically annotated from knowledge base entries. They do not fully utilize the rich information in the knowledge base, such as entities other than the target entities and the network of heterogeneous entities defined in the knowledge base. To address this issue, our model integrates the representations of the entities acquired from the neighborhood knowledge graphs with the representations of the input document. We conducted experiments on the ChemDisGene dataset, built from the Comparative Toxicogenomics Database (CTD), for document-level extraction of interactions between drugs, diseases, and genes. Experimental results confirmed that integrating entities and their neighborhood biochemical information from the knowledge base improves performance.
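
The core idea can be pictured with the following minimal sketch: the averaged embeddings of each target entity's knowledge-graph neighbors are concatenated with a document representation before relation classification. All names, dimensions, and the lookup tables are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

# Hypothetical pre-computed embeddings for knowledge-base entities.
kg_embeddings = {e: rng.normal(size=DIM) for e in ["aspirin", "ptgs2", "pain"]}
# Hypothetical neighborhood graph: entity -> entities adjacent in the KG.
kg_neighbors = {"aspirin": ["ptgs2"], "ptgs2": ["aspirin", "pain"], "pain": ["ptgs2"]}

def neighborhood_vector(entity):
    """Average the embeddings of an entity and its KG neighbors."""
    nodes = [entity] + kg_neighbors.get(entity, [])
    return np.mean([kg_embeddings[n] for n in nodes], axis=0)

def pair_representation(doc_vec, head, tail):
    """Concatenate the document encoding with both entities' neighborhood vectors."""
    return np.concatenate([doc_vec, neighborhood_vector(head), neighborhood_vector(tail)])

doc_vec = rng.normal(size=DIM)          # stand-in for a document encoder output
W = rng.normal(size=(3, 3 * DIM))       # toy 3-way relation classifier
logits = W @ pair_representation(doc_vec, "aspirin", "ptgs2")
print("relation scores:", logits)
```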

Biomedical Relation Extraction with Entity Type Markers and Relation-specific Question Answering
Koshi Yamada | Makoto Miwa | Yutaka Sasaki
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks

Recently, several methods have tackled the relation extraction task with QA and have shown successful results. However, the effectiveness of existing methods in specific domains, such as the biomedical domain, has yet to be verified. When multiple entity pairs in a sentence share an entity, a QA-based relation extraction model that outputs only a single answer to a given question may not extract all the desired relations. In addition, these methods employ QA models that are not tuned for relation extraction. To address these issues, we first extend and apply a span QA-based relation extraction method to drug-protein relation extraction by creating question templates and incorporating entity type markers. We further propose a binary QA-based method that directly uses the entity information available in the relation extraction task. The experimental results on the DrugProt dataset show that our QA-based methods, especially the proposed binary QA method, are effective for drug-protein relation extraction.
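
The two ingredients described above can be illustrated by the toy sketch below, using hypothetical marker tokens and question templates rather than the paper's exact ones: entity type markers wrapped around the candidate drug and protein mentions, and a relation-specific yes/no question for a binary QA model.

```python
def add_type_markers(tokens, head_span, tail_span):
    """Wrap the drug and protein mentions in entity type markers (illustrative strings)."""
    out = []
    for i, tok in enumerate(tokens):
        if i == head_span[0]:
            out.append("[DRUG]")
        if i == tail_span[0]:
            out.append("[PROT]")
        out.append(tok)
        if i == head_span[1]:
            out.append("[/DRUG]")
        if i == tail_span[1]:
            out.append("[/PROT]")
    return out

def binary_question(relation, drug, protein):
    """Build a relation-specific yes/no question (hypothetical template)."""
    return f"Is {protein} a {relation} target of {drug}?"

tokens = "Aspirin inhibits PTGS2 activity in vitro .".split()
marked = add_type_markers(tokens, head_span=(0, 0), tail_span=(2, 2))
print(" ".join(marked))
print(binary_question("INHIBITOR", "Aspirin", "PTGS2"))
# A binary QA model would then score the (question, marked sentence) pair as yes/no.
```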

Biomedical Document Classification with Literature Graph Representations of Bibliographies and Entities
Ryuki Ida | Makoto Miwa | Yutaka Sasaki
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks

This paper proposes a new document classification method that incorporates the representations of a literature graph created from bibliographic and entity information. Recently, document classification performance has been significantly improved with large pre-trained language models; however, there still remain documents that are difficult to classify. External information, such as bibliographic information, citation links, descriptions of entities, and medical taxonomies, has been considered one of the keys to dealing with such documents. Although several document classification methods using external information have been proposed, they consider only limited types of relationships, e.g., word co-occurrence and citation relationships, even though many more types of external information are available. To overcome this limitation, we propose a document classification model that simultaneously considers bibliographic and entity information to deeply model the relationships among documents using the representations of the literature graph. The experimental results show that our proposed method outperforms existing methods on two document classification datasets in the biomedical domain with the help of the literature graph.
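
As a rough illustration of the idea (not the paper's architecture), one can build a heterogeneous literature graph over documents, cited references, and entities, and enrich each document representation by mixing in its neighbors' features. Node names, features, and the single propagation step below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 32

# Hypothetical literature graph: document nodes linked to bibliography and entity nodes.
edges = [
    ("doc1", "ref:smith2019"), ("doc1", "entity:BRCA1"),
    ("doc2", "ref:smith2019"), ("doc2", "entity:TP53"),
]
nodes = sorted({n for e in edges for n in e})
feat = {n: rng.normal(size=DIM) for n in nodes}    # placeholder node features

adj = {n: [] for n in nodes}
for u, v in edges:                                 # undirected literature graph
    adj[u].append(v)
    adj[v].append(u)

def enriched(doc):
    """One message-passing step: concatenate a document's features with its neighbors' mean."""
    neigh = np.mean([feat[n] for n in adj[doc]], axis=0)
    return np.concatenate([feat[doc], neigh])

# The enriched vector would be combined with a pre-trained LM encoding for classification.
print(enriched("doc1").shape)
```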

2022

Improving Supervised Drug-Protein Relation Extraction with Distantly Supervised Models
Naoki Iinuma | Makoto Miwa | Yutaka Sasaki
Proceedings of the 21st Workshop on Biomedical Language Processing

This paper proposes novel drug-protein relation extraction models that indirectly utilize distant supervision data. Concretely, instead of adding distant supervision data to the manually annotated training data, our models incorporate distantly supervised models, i.e., relation extraction models trained on distant supervision data. Distant supervision has been proposed to generate a large amount of pseudo-training data at low cost; however, prediction performance remains low because of the mislabeled data it introduces. Several methods have therefore been proposed to suppress the effects of noisy cases by utilizing some manually annotated training data, but their performance is still lower than that of supervised learning on manually annotated data, because mislabeled data that cannot be fully suppressed becomes noise when training the model. To overcome this issue, our methods indirectly utilize distant supervision data together with manually annotated training data. The experimental results on the DrugProt corpus of BioCreative VII Track 1 show that our proposed models consistently improve the supervised models in different settings.
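
One simple way to use a distantly supervised model "indirectly", sketched under assumed interfaces: feed its predicted relation probabilities to the supervised classifier as extra features rather than adding the noisy pseudo-labels to the training data. This is an illustrative stand-in, not the paper's exact integration method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, D, R = 200, 50, 3                         # toy sizes: examples, text features, relation classes

X_text = rng.normal(size=(N, D))             # stand-in for sentence encodings
y = rng.integers(0, R, size=N)               # gold labels from the manually annotated corpus

def distant_model_probs(x):
    """Placeholder for a relation model trained on distant-supervision data."""
    logits = x[:, :R]                         # hypothetical: any fixed scoring function works here
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# The supervised model sees its own features plus the distant model's probability outputs.
X = np.hstack([X_text, distant_model_probs(X_text)])
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```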

2021

A Neural Edge-Editing Approach for Document-Level Relation Graph Extraction
Kohei Makino | Makoto Miwa | Yutaka Sasaki
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Ontology-Style Relation Annotation: A Case Study
Savong Bou | Naoki Suzuki | Makoto Miwa | Yutaka Sasaki
Proceedings of the Twelfth Language Resources and Evaluation Conference

This paper proposes an Ontology-Style Relation (OSR) annotation approach. In conventional Relation Extraction (RE) datasets, relations are annotated as links between entity mentions. In contrast, in our OSR annotation, a relation is annotated as a relation mention (i.e., a node rather than a link), and domain and range links are annotated from the relation mention to its argument entity mentions. We expect the following benefits: (1) the relation annotations can be easily converted to Resource Description Framework (RDF) triples to populate an Ontology, (2) some conventional RE tasks can be tackled as Named Entity Recognition (NER) tasks, since the relation classes are limited to several RDF properties such as domain, range, and subClassOf, and (3) OSR annotations can serve as clear documentation of Ontology contents. As a case study, we converted an in-house corpus of Japanese traffic rules from conventional annotations into OSR annotations and built a novel OSR-RoR (Rules of the Road) corpus. The inter-annotator agreements of the conversion were 85-87%. We evaluated the performance of neural NER and RE tools on the conventional and OSR annotations. The experimental results showed that the OSR annotations make the RE task easier while introducing slight complexity into the NER task.
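
A small sketch of benefit (1): an OSR-style annotation, where the relation is a mention node with domain and range links, maps mechanically onto RDF-like triples. The annotation layout and property names below are illustrative, not the corpus schema.

```python
# Hypothetical OSR annotation: entity mentions, a relation mention, and its links.
entities = {
    "T1": ("Vehicle", "cars"),
    "T2": ("Location", "the sidewalk"),
}
relation_mentions = {
    "T3": ("must_not_drive_on", "must not drive on"),
}
links = [
    ("T3", "domain", "T1"),   # relation mention -> its subject entity
    ("T3", "range", "T2"),    # relation mention -> its object entity
]

def osr_to_triples(entities, relation_mentions, links):
    """Turn domain/range links around a relation mention into (subject, predicate, object)."""
    args = {}
    for rel_id, role, ent_id in links:
        args.setdefault(rel_id, {})[role] = entities[ent_id][1]
    return [
        (arg["domain"], relation_mentions[rel_id][0], arg["range"])
        for rel_id, arg in args.items()
    ]

print(osr_to_triples(entities, relation_mentions, links))
# -> [('cars', 'must_not_drive_on', 'the sidewalk')]
```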

SC-CoMIcs: A Superconductivity Corpus for Materials Informatics
Kyosuke Yamaguchi | Ryoji Asahi | Yutaka Sasaki
Proceedings of the Twelfth Language Resources and Evaluation Conference

This paper describes a novel corpus tailored for the text mining of superconducting materials in Materials Informatics (MI), named the Superconductivity Corpus for Materials Informatics (SC-CoMIcs). Unlike biomedical informatics, Materials Science and Engineering (MSE) has very few corpora; in particular, there is no sizable corpus that can be used to assist the search for superconducting materials. A team of materials scientists and natural language processing experts jointly designed the annotation scheme and constructed a corpus of 1,000 manually annotated MSE abstracts related to superconductivity. We conducted experiments on the corpus with a neural Named Entity Recognition (NER) tool. The experimental results show that NER performance over the corpus is around 77% micro-F1, which is comparable to human annotator agreement rates. Using the trained NER model, we automatically annotated 9,000 abstracts and created a term retrieval tool based on term similarity. This tool finds superconductivity terms relevant to a query term within a specified Named Entity category, demonstrating that SC-CoMIcs can efficiently provide knowledge for Materials Informatics applications from rapidly expanding publications.
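
The retrieval step can be pictured as cosine similarity over term vectors, restricted to the requested entity category; the terms, categories, and vectors below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical term vectors and NER categories from the automatically annotated abstracts.
terms = {
    "YBCO": ("MATERIAL", rng.normal(size=16)),
    "MgB2": ("MATERIAL", rng.normal(size=16)),
    "Tc":   ("PROPERTY", rng.normal(size=16)),
}

def retrieve(query, category, top_k=2):
    """Rank terms of one NE category by cosine similarity to the query term."""
    q = terms[query][1]
    scored = []
    for term, (cat, vec) in terms.items():
        if term == query or cat != category:
            continue
        sim = float(q @ vec / (np.linalg.norm(q) * np.linalg.norm(vec)))
        scored.append((sim, term))
    return sorted(scored, reverse=True)[:top_k]

print(retrieve("YBCO", "MATERIAL"))
```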

2018

Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
Masaki Asada | Makoto Miwa | Yutaka Sasaki
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs with high accuracy and that the molecular information can enhance text-based DDI extraction by 2.39 percentage points in F-score on the DDIExtraction 2013 shared task data set.
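
A toy numpy sketch of the fusion idea: a text vector for the drug pair (here just mean-pooled word vectors standing in for the CNN encoder) is concatenated with graph-convolution outputs over each drug's molecular graph. The shapes and the single GCN layer are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: normalized neighbor averaging + linear map + ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])              # add self-loops
    deg_inv = np.diag(1.0 / adj_hat.sum(axis=1))
    return np.maximum(deg_inv @ adj_hat @ feats @ weight, 0.0)

def molecule_vector(adj, atom_feats, weight):
    """Encode a molecular graph and mean-pool the atom representations."""
    return gcn_layer(adj, atom_feats, weight).mean(axis=0)

# Toy 3-atom molecule and a toy sentence encoding (stand-in for the text CNN output).
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
atom_feats = rng.normal(size=(3, D))
W_gcn = rng.normal(size=(D, D))
text_vec = rng.normal(size=(7, D)).mean(axis=0)

pair_vec = np.concatenate([text_vec,
                           molecule_vector(adj, atom_feats, W_gcn),
                           molecule_vector(adj, atom_feats, W_gcn)])
W_out = rng.normal(size=(5, pair_vec.size))           # toy classifier over 5 DDI classes
print("DDI scores:", W_out @ pair_vec)
```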

2017

Analyzing Well-Formedness of Syllables in Japanese Sign Language
Satoshi Yawata | Makoto Miwa | Yutaka Sasaki | Daisuke Hara
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

This paper tackles the problem of analyzing the well-formedness of syllables in Japanese Sign Language (JSL). We formulate the problem as a classification task that labels syllables as well-formed or ill-formed. We build a data set that contains hand-coded syllables and their well-formedness. We define a fine-grained feature set based on the hand-coded syllables and train a logistic regression classifier on labeled syllables, expecting to identify discriminative features from the trained classifier. We also perform pseudo active learning to investigate the applicability of active learning to analyzing syllables. In the experiments, the best classifier with our combinatorial features achieved an accuracy of 87.0%. Pseudo active learning also proved effective: compared to the model without active learning, it reduced the number of training instances by about 84% while still achieving an accuracy of 82.0%.
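
The classification and the active-learning loop can be sketched roughly as below with scikit-learn: a logistic regression over (synthetic stand-in) syllable feature vectors, repeatedly adding the most uncertain unlabeled syllables to the training pool. The feature construction and the sampling schedule are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for binary feature vectors of hand-coded syllables.
X = rng.integers(0, 2, size=(500, 40)).astype(float)
y = (X[:, :5].sum(axis=1) > 2).astype(int)       # toy "well-formed" label

labeled = list(range(20))                        # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):                               # uncertainty-sampling rounds
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    uncertainty = -np.abs(proba - 0.5)           # closest to 0.5 = most uncertain
    pick = [pool[i] for i in np.argsort(uncertainty)[-10:]]
    labeled += pick
    pool = [i for i in pool if i not in pick]

clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
print("accuracy with", len(labeled), "labeled syllables:", clf.score(X, y))
```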

Utilizing Visual Forms of Japanese Characters for Neural Review Classification
Yota Toyama | Makoto Miwa | Yutaka Sasaki
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

We propose a novel method that exploits visual information of ideograms and logograms in analyzing Japanese review documents. Our method first converts font images of Japanese characters into character embeddings using convolutional neural networks. It then constructs document embeddings from the character embeddings based on Hierarchical Attention Networks, which represent documents with attention mechanisms applied from the character level up to the sentence level. The document embeddings are finally used to predict the labels of documents. Our method provides a way to exploit visual features of characters in languages with ideograms and logograms. In the experiments, our method achieved an accuracy comparable to a character embedding-based model while using far fewer parameters, since it does not need to keep embeddings of thousands of characters.
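
A compact numpy sketch of the pipeline's shape, using a random projection of flattened font images as a stand-in for the character CNN and one attention-pooling function applied first over characters and then over sentences; all dimensions and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG, DIM = 32 * 32, 24

W_char = rng.normal(size=(IMG, DIM)) * 0.01       # stand-in for the character-image CNN
W_att, u_att = rng.normal(size=(DIM, DIM)), rng.normal(size=DIM)

def attention_pool(vectors):
    """Attention pooling: score each vector, softmax the scores, return the weighted sum."""
    scores = np.tanh(vectors @ W_att) @ u_att
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ vectors

def char_embedding(font_image):
    """Map a rendered character image (flattened pixels) to an embedding."""
    return font_image.reshape(-1) @ W_char

# Toy document: 2 sentences x 5 characters, each a random 32x32 "font image".
doc = rng.random(size=(2, 5, 32, 32))
sentence_vecs = np.stack([attention_pool(np.stack([char_embedding(c) for c in sent]))
                          for sent in doc])
doc_vec = attention_pool(sentence_vecs)            # document embedding for classification
print(doc_vec.shape)
```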

TTI-COIN at SemEval-2017 Task 10: Investigating Embeddings for End-to-End Relation Extraction from Scientific Papers
Tomoki Tsujimura | Makoto Miwa | Yutaka Sasaki
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

This paper describes our TTI-COIN system that participated in SemEval-2017 Task 10. We investigated appropriate embeddings to adapt LSTM-ER, a neural end-to-end entity and relation extraction system, to this task. We participated in the full setting of entity segmentation, entity classification, and relation classification (scenario 1) and in the relation-classification-only setting (scenario 3). Thanks to its generality and flexibility, the system was applied directly to scenario 1 without code modification. Our evaluation results show that the choice of pre-trained embeddings affected the performance significantly. With the best embeddings, our system ranked third in scenario 1 with a micro-F1 score of 0.38. We also confirm that our system achieves a micro-F1 score of 0.48 for scenario 3 on the test data, which is close to the score of the third-ranked system in the task.

Bib2vec: Embedding-based Search System for Bibliographic Information
Takuma Yoneda | Koki Mori | Makoto Miwa | Yutaka Sasaki
Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics

We propose a novel embedding model that represents relationships among several elements in bibliographic information with high representation ability and flexibility. Based on this model, we present a novel search system that shows the relationships among the elements in the ACL Anthology Reference Corpus. The evaluation results show that our model can achieve a high prediction ability and produce reasonable search results.
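
One way to picture the idea, with made-up attributes and a plain dot-product scorer rather than the system's actual model: each bibliographic element (author, venue, title word) gets an embedding in a shared space, and search becomes nearest-neighbor lookup among those elements.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Hypothetical bibliographic elements drawn from a paper database.
elements = ["author:Miwa", "author:Sasaki", "venue:ACL", "word:relation", "word:extraction"]
emb = {e: rng.normal(size=DIM) for e in elements}  # would be trained on element co-occurrences

def related(query, top_k=3):
    """Rank other bibliographic elements by dot-product similarity to the query element."""
    q = emb[query]
    scores = [(float(q @ emb[e]), e) for e in elements if e != query]
    return sorted(scores, reverse=True)[:top_k]

print(related("author:Sasaki"))
```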

Extracting Drug-Drug Interactions with Attention CNNs
Masaki Asada | Makoto Miwa | Yutaka Sasaki
BioNLP 2017

We propose a novel attention mechanism for a Convolutional Neural Network (CNN)-based Drug-Drug Interaction (DDI) extraction model. CNNs have been shown to have great potential for DDI extraction; however, attention mechanisms, which emphasize the important words in the sentence containing a target entity pair, have not been investigated with CNNs, even though they have proven effective for general-domain relation classification. We evaluated our model on Task 9.2 of the DDIExtraction 2013 shared task. Our attention mechanism improved the performance of our base CNN-based DDI model, and the model achieved an F-score of 69.12%, which is competitive with state-of-the-art models.
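
The attention idea can be sketched as below: each word of the input sentence is weighted by its (softmaxed) similarity to the two drug mentions before the weighted embeddings go into the convolution layer. This is a generic input-attention sketch under assumed shapes, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
L, D, F, K = 9, 20, 8, 3           # sentence length, embedding dim, filters, window size

words = rng.normal(size=(L, D))    # toy word embeddings for one candidate sentence
drug1, drug2 = words[1], words[6]  # embeddings of the two target drug mentions

# Input attention: weight each word by its similarity to the target entity pair.
scores = words @ drug1 + words @ drug2
weights = np.exp(scores - scores.max())
weights /= weights.sum()
attended = words * weights[:, None]

# A single toy convolution + max-pooling layer over the attended embeddings.
filters = rng.normal(size=(F, K * D))
windows = np.stack([attended[i:i + K].reshape(-1) for i in range(L - K + 1)])
feature_map = np.maximum(windows @ filters.T, 0.0)    # ReLU
pooled = feature_map.max(axis=0)                      # one feature per filter
print("pooled feature vector:", pooled.shape)
```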

2016

Distributional Hypernym Generation by Jointly Learning Clusters and Projections
Josuke Yamane | Tomoya Takatani | Hitoshi Yamada | Makoto Miwa | Yutaka Sasaki
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

We propose a novel word embedding-based hypernym generation model that jointly learns clusters of hyponym-hypernym relations, i.e., hypernymy, and projections from hyponym to hypernym embeddings. Most recent hypernym detection models focus on a hypernymy classification problem that determines whether a pair of words is in a hypernymy relation or not. These models do not directly address the hypernym generation problem, in which a model generates hypernyms for a given word. Unlike previous studies, our model jointly learns the clusters and projections while adjusting the number of clusters, so that the number of clusters can be determined depending on the learned projections and vice versa. Our model also boosts performance by incorporating inner product-based similarity measures and negative examples, i.e., sampled non-hypernyms, into its learning objectives. We evaluated our joint learning models on Japanese and English hypernym generation and showed a significant improvement over an existing pipeline model. Our model also compared favorably to existing distributed hypernym detection models on the English hypernym classification task.
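
A stripped-down alternating version of the joint idea (without the cluster-number adjustment or negative sampling described above): assign each hyponym-hypernym embedding pair to the cluster whose projection matrix reconstructs the hypernym best, then refit each cluster's projection by least squares. Data and dimensions are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 300, 10, 3

# Synthetic hyponym/hypernym embedding pairs generated from K hidden projections.
true_P = rng.normal(size=(K, D, D))
hypo = rng.normal(size=(N, D))
hyper = np.stack([hypo[i] @ true_P[i % K] for i in range(N)])

proj = rng.normal(size=(K, D, D))                  # projections to be learned
for _ in range(10):                                # alternate assignment and refitting
    # Assignment step: pick the cluster with the smallest projection error per pair.
    errs = np.stack([np.linalg.norm(hypo @ proj[k] - hyper, axis=1) for k in range(K)])
    assign = errs.argmin(axis=0)
    # Refit step: least-squares projection per cluster.
    for k in range(K):
        idx = assign == k
        if idx.sum() >= D:
            proj[k], *_ = np.linalg.lstsq(hypo[idx], hyper[idx], rcond=None)

pred_err = np.min(np.stack([np.linalg.norm(hypo @ proj[k] - hyper, axis=1)
                            for k in range(K)]), axis=0).mean()
print("mean projection error:", round(float(pred_err), 4))
```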

2015

Word Embedding-based Antonym Detection using Thesauri and Distributional Information
Masataka Ono | Makoto Miwa | Yutaka Sasaki
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

Modeling Joint Entity and Relation Extraction with Table Representation
Makoto Miwa | Yutaka Sasaki
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2009

Three BioNLP Tools Powered by a Biological Lexicon
Yutaka Sasaki | Paul Thompson | John McNaught | Sophia Ananiadou
Proceedings of the Demonstrations Session at EACL 2009

2008

How to Make the Most of NE Dictionaries in Statistical NER
Yutaka Sasaki | Yoshimasa Tsuruoka | John McNaught | Sophia Ananiadou
Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing

Event Frame Extraction Based on a Gene Regulation Corpus
Yutaka Sasaki | Paul Thompson | Philip Cotter | John McNaught | Sophia Ananiadou
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2005

Empirical Study of Utilizing Morph-Syntactic Information in SMT
Young-Sook Hwang | Taro Watanabe | Yutaka Sasaki
Second International Joint Conference on Natural Language Processing: Full Papers

Question Answering as Question-Biased Term Extraction: A New Approach toward Multilingual QA
Yutaka Sasaki
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

Context-Dependent SMT Model using Bilingual Verb-Noun Collocation
Young-Sook Hwang | Yutaka Sasaki
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

2004

Bilingual Knowledge Extraction Using Chunk Alignment
Young-Sook Hwang | Kyonghee Paik | Yutaka Sasaki
Proceedings of the 18th Pacific Asia Conference on Language, Information and Computation

2003

Hierarchical Directed Acyclic Graph Kernel: Methods for Structured Natural Language Data
Jun Suzuki | Tsutomu Hirao | Yutaka Sasaki | Eisaku Maeda
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

Spoken Interactive ODQA System: SPIQA
Chiori Hori | Takaaki Hori | Hajime Tsukada | Hideki Isozaki | Yutaka Sasaki | Eisaku Maeda
The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics

Question Classification using HDAG Kernel
Jun Suzuki | Hirotoshi Taira | Yutaka Sasaki | Eisaku Maeda
Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering

2002

SVM Answer Selection for Open-Domain Question Answering
Jun Suzuki | Yutaka Sasaki | Eisaku Maeda
COLING 2002: The 19th International Conference on Computational Linguistics

2000

Learning Semantic-Level Information Extraction Rules by Type-Oriented ILP
Yutaka Sasaki | Yoshihiro Matsuo
COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics