Natsuda Laokulrat


2018

Incorporating Semantic Attention in Video Description Generation
Natsuda Laokulrat | Naoaki Okazaki | Hideki Nakayama
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

Generating Video Description using Sequence-to-sequence Model with Temporal Attention
Natsuda Laokulrat | Sang Phan | Noriki Nishida | Raphael Shu | Yo Ehara | Naoaki Okazaki | Yusuke Miyao | Hideki Nakayama
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Automatic video description generation has recently been attracting attention following rapid advances in image caption generation. Automatically generating a description for a video is more challenging than for an image because of the temporal dynamics of its frames. Most prior work has relied on Recurrent Neural Networks (RNNs), and attention mechanisms have recently been applied so that the model learns to focus on particular frames of the video while generating each word of the describing sentence. In this paper, we focus on a sequence-to-sequence approach with a temporal attention mechanism. We analyze and compare the results of different attention model configurations. By applying the temporal attention mechanism, our system achieves a METEOR score of 0.310 on the Microsoft Video Description dataset, outperforming the previous state-of-the-art system.
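The core idea of temporal attention described in the abstract — scoring each video frame against the decoder state and taking a weighted average of frame features as the context for the next word — can be sketched minimally as follows. This is an illustrative sketch using simple dot-product scoring with NumPy, not the paper's exact formulation (which may use a learned alignment model); all names here are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(frame_feats, decoder_state):
    """Attend over T frame feature vectors given the decoder state.

    frame_feats:   (T, D) array of per-frame features
    decoder_state: (D,) current decoder hidden state
    Returns the (D,) context vector and the (T,) attention weights.
    """
    scores = frame_feats @ decoder_state   # one relevance score per frame
    weights = softmax(scores)              # normalized attention over frames
    context = weights @ frame_feats        # weighted average of frame features
    return context, weights

# Toy usage: 8 frames with 16-dimensional features.
rng = np.random.default_rng(0)
frames = rng.standard_normal((8, 16))
state = rng.standard_normal(16)
context, weights = temporal_attention(frames, state)
```

At each decoding step the weights change with the decoder state, so the model can focus on different frames while generating each word of the sentence.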

2014

Exploiting Timegraphs in Temporal Relation Classification
Natsuda Laokulrat | Makoto Miwa | Yoshimasa Tsuruoka
Proceedings of TextGraphs-9: the workshop on Graph-based Methods for Natural Language Processing

2013

UTTime: Temporal Relation Classification using Deep Syntactic Features
Natsuda Laokulrat | Makoto Miwa | Yoshimasa Tsuruoka | Takashi Chikayama
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)