MeetDot: Videoconferencing with Live Translation Captions
Arkady Arkhangorodsky | Christopher Chu | Scot Fang | Yiqi Huang | Denglin Jiang | Ajay Nagesh | Boliang Zhang | Kevin Knight
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
We present MeetDot, a videoconferencing system with live translation captions overlaid on screen. The system aims to facilitate conversation between people who speak different languages, thereby reducing communication barriers between multilingual participants. Currently, our system supports speech and captions in 4 languages and combines automatic speech recognition (ASR) and machine translation (MT) in a cascade. We use the re-translation strategy to translate the streamed speech, resulting in caption flicker. Additionally, our system has very strict latency requirements to maintain acceptable call quality. We implement several features to enhance the user experience and reduce cognitive load, such as smoothly scrolling captions and reduced caption flicker. The modular architecture allows us to integrate different ASR and MT services in our backend. Our system provides an integrated evaluation suite to optimize key intrinsic evaluation metrics such as accuracy, latency, and erasure. Finally, we present an innovative cross-lingual word-guessing game as an extrinsic evaluation metric to measure end-to-end system performance. We plan to make our system open-source for research purposes.
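The erasure metric referenced in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes erasure is counted as the number of trailing tokens of the previous caption that must be deleted because they are not preserved as a prefix of the re-translated output, normalized by the length of the final caption. All function names here are hypothetical.

```python
def erasure(prev_tokens, new_tokens):
    """Count tokens of the previous caption that a re-translation erases.

    Assumes the display keeps the longest common token prefix and
    deletes the rest of the previous caption (illustrative definition).
    """
    common = 0
    for old, new in zip(prev_tokens, new_tokens):
        if old != new:
            break
        common += 1
    return len(prev_tokens) - common


def normalized_erasure(updates):
    """Total erased tokens across successive caption updates,
    normalized by the final caption length."""
    total = sum(erasure(p, n) for p, n in zip(updates, updates[1:]))
    final_len = len(updates[-1]) if updates else 0
    return total / final_len if final_len else 0.0


# Example: a streamed translation that revises "the" to "a" mid-stream.
updates = [
    ["the"],
    ["the", "cat"],
    ["a", "cat", "sat"],
    ["a", "cat", "sat", "down"],
]
print(normalized_erasure(updates))  # 2 erased tokens / 4 final tokens = 0.5
```

Under this definition, a system with zero flicker (each update only appends tokens) has erasure 0, which is why caption-stabilization features aim to drive this metric down.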