Christopher Chu
2021
MeetDot: Videoconferencing with Live Translation Captions
Arkady Arkhangorodsky | Christopher Chu | Scot Fang | Yiqi Huang | Denglin Jiang | Ajay Nagesh | Boliang Zhang | Kevin Knight
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
We present MeetDot, a videoconferencing system with live translation captions overlaid on screen. The system aims to facilitate conversation between people who speak different languages, thereby reducing communication barriers between multilingual participants. Currently, our system supports speech and captions in 4 languages and combines automatic speech recognition (ASR) and machine translation (MT) in a cascade. We use a re-translation strategy to translate the streamed speech, which results in caption flicker, and the system must satisfy very strict latency requirements to maintain acceptable call quality. We implement several features to enhance user experience and reduce cognitive load, such as smooth scrolling captions and reduced caption flicker. The modular architecture allows us to integrate different ASR and MT services in our backend. Our system provides an integrated evaluation suite to optimize key intrinsic evaluation metrics such as accuracy, latency, and erasure. Finally, we present an innovative cross-lingual word-guessing game as an extrinsic evaluation metric to measure end-to-end system performance. We plan to make our system open-source for research purposes.
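To make the re-translation strategy and the erasure metric mentioned above concrete, here is a minimal sketch of the captioning loop, not the MeetDot implementation itself: asr_partials and translate are hypothetical stand-ins for the ASR and MT services in the cascade, and erasure is counted as the trailing caption tokens that get rewritten when a new translation replaces the previous one.

# Minimal illustrative sketch of re-translation captioning (not the MeetDot code).
# asr_partials and translate are hypothetical stand-ins for the ASR/MT services.

def suffix_erasure(prev_tokens, new_tokens):
    """Count trailing tokens of the previous caption that are erased, i.e.
    the tokens after the longest common prefix of the two captions."""
    common = 0
    for a, b in zip(prev_tokens, new_tokens):
        if a != b:
            break
        common += 1
    return len(prev_tokens) - common

def caption_stream(asr_partials, translate, target_lang="es"):
    """Re-translate each partial ASR hypothesis from scratch and yield
    (caption, erasure) pairs; frequent rewrites are the source of flicker."""
    prev = []
    for partial_transcript in asr_partials:
        caption = translate(partial_transcript, target_lang).split()
        yield " ".join(caption), suffix_erasure(prev, caption)
        prev = caption

Averaging the erasure values over a call gives one of the intrinsic metrics the evaluation suite tracks, alongside latency and caption accuracy.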
2020
Learning to Pronounce Chinese Without a Pronunciation Dictionary
Christopher Chu | Scot Fang | Kevin Knight
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
We demonstrate a program that learns to pronounce Chinese text in Mandarin, without a pronunciation dictionary. From non-parallel streams of Chinese characters and Chinese pinyin syllables, it establishes a many-to-many mapping between characters and pronunciations. Using unsupervised methods, the program effectively deciphers writing into speech. Its token-level character-to-syllable accuracy is 89%, which significantly exceeds the 22% accuracy of prior work.
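One way to picture the decipherment above is as hidden-Markov decoding: a syllable language model estimated from the pinyin stream provides transition scores, and the learned many-to-many character-to-syllable mapping provides emission scores. The sketch below illustrates only that decoding step under this assumed framing; syllable_bigram and channel are hypothetical probability tables, not the paper's actual model.

import math

def decode_pinyin(chars, syllables, syllable_bigram, channel, floor=1e-9):
    """Viterbi-decode a Chinese character sequence into pinyin syllables.
    syllable_bigram[(prev, cur)] = P(cur | prev), estimated from the pinyin stream;
    channel[(syl, ch)] = P(ch | syl), the learned character-syllable mapping.
    Both tables are illustrative placeholders."""
    def lp(table, key):
        # log-probability with a small floor for unseen events
        return math.log(table.get(key, floor))

    # best[s] = (log-prob, syllable path) of the best decode ending in syllable s
    best = {s: (lp(syllable_bigram, ("<s>", s)) + lp(channel, (s, chars[0])), [s])
            for s in syllables}
    for ch in chars[1:]:
        new_best = {}
        for s in syllables:
            emit = lp(channel, (s, ch))
            score, path = max(
                ((prev_score + lp(syllable_bigram, (p, s)) + emit, prev_path)
                 for p, (prev_score, prev_path) in best.items()),
                key=lambda t: t[0])
            new_best[s] = (score, path + [s])
        best = new_best
    return max(best.values(), key=lambda t: t[0])[1]

Token-level accuracy of such decoded syllables against reference pinyin corresponds to the 89% figure quoted above.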
Solving Historical Dictionary Codes with a Neural Language Model
Christopher Chu | Raphael Valenti | Kevin Knight
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
We solve difficult word-based substitution codes by constructing a decoding lattice and searching that lattice with a neural language model. We apply our method to a set of enciphered letters exchanged between US Army General James Wilkinson and agents of the Spanish Crown in the late 1700s and early 1800s, obtained from the US Library of Congress. We are able to decipher 75.1% of the cipher-word tokens correctly.
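The lattice search described above can be illustrated with a simple beam search, which stands in here for whatever search the paper actually uses: at each enciphered position the lattice offers a set of candidate plaintext words, and partial decipherments are kept or pruned by their language-model score. lm_logprob is a hypothetical stand-in for the neural language model.

def beam_search_lattice(lattice, lm_logprob, beam_size=8):
    """Search a word lattice for the highest-scoring plaintext.
    lattice: one list of candidate plaintext words per cipher position.
    lm_logprob(prefix, word): log P(word | prefix) under a language model
    (hypothetical stand-in for the neural LM)."""
    beams = [([], 0.0)]  # (decoded prefix, cumulative log-probability)
    for candidates in lattice:
        expanded = [
            (prefix + [word], score + lm_logprob(prefix, word))
            for prefix, score in beams
            for word in candidates
        ]
        # keep only the beam_size highest-scoring partial decipherments
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_size]
    return beams[0]  # best (plaintext word sequence, score)

The lattice itself encodes the ambiguity of the word-based substitution code: each cipher token maps to a small set of possible plaintext words, and the language model chooses among them in context.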