2021
Relation Extraction Using Multiple Pre-Training Models in Biomedical Domain
Satoshi Hiai | Kazutaka Shimada | Taiki Watanabe | Akiva Miura | Tomoya Iwakura
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)
The number of biomedical documents is increasing rapidly. Accordingly, the demand for extracting knowledge from large-scale biomedical texts is also increasing. BERT-based models are known for their high performance on various tasks. However, they are often computationally expensive, and a high-end GPU environment is not available in many situations. To attain both high accuracy and fast extraction speed, we propose combinations of simpler pre-trained models. Our method outperforms the latest state-of-the-art model and BERT-based models on the GAD corpus. In addition, our method extracts approximately three times faster than the BERT-based models on the ChemProt corpus and reduces the memory footprint to one-sixth that of the BERT-based models.
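The abstract does not spell out how the simpler pre-trained models are combined, so the following is only a minimal sketch under one plausible reading: fixed-size features from each pre-trained encoder are concatenated and fed to a linear relation classifier. Every name here (CombinedRelationClassifier, the EmbeddingBag stand-in encoders, the dimensions) is hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CombinedRelationClassifier(nn.Module):
    """Concatenate sentence features from several lightweight pre-trained
    encoders and classify the relation with a small linear head."""

    def __init__(self, encoders, feature_dims, num_relations: int):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        self.head = nn.Linear(sum(feature_dims), num_relations)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Each encoder maps (batch, seq_len) token ids to a (batch, dim) vector.
        feats = [enc(token_ids) for enc in self.encoders]
        return self.head(torch.cat(feats, dim=-1))

# Stand-in encoders: mean-pooled embedding bags playing the role of two
# simpler pre-trained models (hypothetical vocabulary and sizes).
vocab, dim_a, dim_b = 10_000, 128, 256
enc_a = nn.EmbeddingBag(vocab, dim_a, mode="mean")
enc_b = nn.EmbeddingBag(vocab, dim_b, mode="mean")
model = CombinedRelationClassifier([enc_a, enc_b], [dim_a, dim_b],
                                   num_relations=2)
logits = model(torch.randint(0, vocab, (4, 32)))  # (batch=4, seq_len=32)
```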
2017
Tree as a Pivot: Syntactic Matching Methods in Pivot Translation
Akiva Miura | Graham Neubig | Katsuhito Sudoh | Satoshi Nakamura
Proceedings of the Second Conference on Machine Translation
2016
Residual Stacking of RNNs for Neural Machine Translation
Raphael Shu | Akiva Miura
Proceedings of the 3rd Workshop on Asian Translation (WAT2016)
To enhance Neural Machine Translation models, several obvious approaches can be considered, such as enlarging the hidden size of the recurrent layers or stacking multiple RNN layers. Surprisingly, we observe that naively stacking RNNs in the decoder slows down training and degrades performance. In this paper, we demonstrate that applying residual connections across the depth of stacked RNNs helps optimization; we refer to this as residual stacking. In empirical evaluation, residual stacking of decoder RNNs gives superior results compared to other methods of enhancing the model under a fixed parameter budget. Our submitted systems in WAT2016 are based on an ensemble of NMT models with residual stacking in the decoder. To further improve performance, we also attempt various methods of system combination in our experiments.
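The residual-stacking idea is easy to illustrate. Below is a minimal PyTorch sketch, not the authors' implementation: the class name ResidualStackedGRU, the GRU cells (standing in for whichever RNN variant the submitted systems used), and all sizes are assumptions. Each stacked layer's output is added back to its input, so gradients have a short path through the deep decoder stack.

```python
import torch
import torch.nn as nn

class ResidualStackedGRU(nn.Module):
    """Stacked RNN layers where each layer's output is added back to its
    input (a residual connection), easing optimization of deep stacks."""

    def __init__(self, hidden_size: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.GRU(hidden_size, hidden_size, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, hidden_size); shape is preserved at every layer,
        # so the residual addition is always well defined.
        for gru in self.layers:
            out, _ = gru(x)
            x = x + out  # residual connection across the stacked layer
        return x

# Hypothetical usage: a 4-layer residual decoder stack over 256-d states.
decoder = ResidualStackedGRU(hidden_size=256, num_layers=4)
states = decoder(torch.randn(8, 20, 256))  # -> (8, 20, 256)
```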
Selecting Syntactic, Non-redundant Segments in Active Learning for Machine Translation
Akiva Miura | Graham Neubig | Michael Paul | Satoshi Nakamura
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
2015
Improving Pivot Translation by Remembering the Pivot
Akiva Miura | Graham Neubig | Sakriani Sakti | Tomoki Toda | Satoshi Nakamura
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)