2008

In-car Speech Data Collection along with Various Multimodal Signals
Akira Ozaki | Sunao Hara | Takashi Kusakawa | Chiyomi Miyajima | Takanori Nishino | Norihide Kitaoka | Katunobu Itou | Kazuya Takeda
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

In this paper, a large-scale real-world speech database is introduced along with other multimedia driving data. We designed a data collection vehicle equipped with various sensors to synchronously record twelve-channel speech; three-channel video; driving behavior including gas and brake pedal pressures, steering angles, and vehicle velocities; and physiological signals including driver heart rate, skin conductance, and emotion-based sweating on the palms and soles. These multimodal data are collected while driving on city streets and expressways under four different driving task conditions: two kinds of monologues, human-human dialog, and human-machine dialog. We investigated the response timing of drivers relative to navigator utterances and found that most responses overlapped with the preceding utterance, owing to the characteristics of the task and of the Japanese language. Comparing the utterance length, speaking rate, and filler rate of driver utterances in human-human and human-machine dialogs, we found that drivers tended to use longer and faster utterances with more fillers when talking with humans than with machines.