Christopher Caruso

Also published as: Chris Caruso


WeCanTalk: A New Multi-language, Multi-modal Resource for Speaker Recognition
Karen Jones | Kevin Walker | Christopher Caruso | Jonathan Wright | Stephanie Strassel
Proceedings of the Thirteenth Language Resources and Evaluation Conference

The WeCanTalk (WCT) Corpus is a new multi-language, multi-modal resource for speaker recognition. The corpus contains Cantonese, Mandarin and English telephony and video speech data from over 200 multilingual speakers located in Hong Kong. Each speaker contributed at least 10 telephone conversations of 8-10 minutes’ duration collected via a custom telephone platform based in Hong Kong. Speakers also uploaded at least 3 videos in which they were both speaking and visible, along with one selfie image. At least half of the calls and videos for each speaker were in Cantonese, while their remaining recordings featured one or more different languages. Both calls and videos were made in a variety of noise conditions. All speech and video recordings were audited by experienced multilingual annotators for quality including presence of the expected language and for speaker identity. The WeCanTalk Corpus has been used to support the NIST 2021 Speaker Recognition Evaluation and will be published in the LDC catalog.


The SAFE-T Corpus: A New Resource for Simulated Public Safety Communications
Dana Delgado | Kevin Walker | Stephanie Strassel | Karen Jones | Christopher Caruso | David Graff
Proceedings of the Twelfth Language Resources and Evaluation Conference

We introduce a new resource, the SAFE-T (Speech Analysis for Emergency Response Technology) Corpus, designed to simulate first-responder communications by inducing high vocal effort and urgent speech with situational background noise in a game-based collection protocol. Linguistic Data Consortium developed the SAFE-T Corpus to support the NIST (National Institute of Standards and Technology) OpenSAT (Speech Analytic Technologies) evaluation series, whose goal is to advance speech analytic technologies including automatic speech recognition, speech activity detection and keyword search in multiple domains including simulated public safety communications data. The corpus comprises over 300 hours of audio from 115 unique speakers engaged in a collaborative problem-solving activity representative of public safety communications in terms of speech content, noise types and noise levels. Portions of the corpus have been used in the OpenSAT 2019 evaluation and the full corpus will be published in the LDC catalog. We describe the design and implementation of the SAFE-T Corpus collection, discuss the approach of capturing spontaneous speech from study participants through game-based speech collection, and report on the collection results including several challenges associated with the collection.


Cross-Document, Cross-Language Event Coreference Annotation Using Event Hoppers
Zhiyi Song | Ann Bies | Justin Mott | Xuansong Li | Stephanie Strassel | Christopher Caruso
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)


Creating HAVIC: Heterogeneous Audio Visual Internet Collection
Stephanie Strassel | Amanda Morris | Jonathan Fiscus | Christopher Caruso | Haejoong Lee | Paul Over | James Fiumara | Barbara Shaw | Brian Antonishek | Martial Michel
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Linguistic Data Consortium and the National Institute of Standards and Technology are collaborating to create a large, heterogeneous annotated multimodal corpus to support research in multimodal event detection and related technologies. The HAVIC (Heterogeneous Audio Visual Internet Collection) Corpus will ultimately consist of several thousand hours of unconstrained user-generated multimedia content. HAVIC has been designed with an eye toward providing increased challenges for both acoustic and video processing technologies, focusing on the multi-dimensional variation inherent in user-generated multimedia content. To date the HAVIC corpus has been used to support the NIST 2010 and 2011 TRECVID Multimedia Event Detection (MED) Evaluations. Portions of the corpus are expected to be released in LDC's catalog in the coming year, with the remaining segments being published over time after their use in the ongoing MED evaluations.


Large Scale Multilingual Broadcast Data Collection to Support Machine Translation and Distillation Technology Development
Kevin Walker | Christopher Caruso | Denise DiPersio
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

The development of technologies to address machine translation and distillation of multilingual broadcast data depends heavily on the collection of large volumes of material from modern data providers. To address the needs of GALE researchers, the Linguistic Data Consortium (LDC) developed a system for collecting broadcast news and conversation from a variety of Arabic, Chinese and English broadcasters. The system is highly automated, easily extensible and robust, and is capable of collecting, processing and evaluating hundreds of hours of content from several dozen sources per day. In addition to this extensive system, LDC manages three remote collection sites to maximize the variety of available broadcast data and has designed a portable broadcast collection platform to facilitate remote collection. This paper will present a detailed description of the design and implementation of LDC’s collection system, the technical challenges and solutions to large scale broadcast data collection efforts and an overview of the system’s operation. This paper will also discuss the challenges of managing remote collections, in particular, the strategies used to normalize data formats, naming conventions and delivery methods to achieve optimal integration of remotely-collected data into LDC’s collection database and downstream tasking workflow.

Greybeard Longitudinal Speech Study
Linda Brandschain | David Graff | Christopher Cieri | Kevin Walker | Chris Caruso | Abby Neely
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

The Greybeard Project was designed to enable research in speaker recognition using data that have been collected over a long period of time. Since 1994, LDC has been collecting speech samples for use in research and evaluations. By mining our earlier collections we assembled a list of subjects who had participated in multiple studies. These participants were then contacted and asked to take part in the Greybeard Project. The only constraints were that the participants must have made numerous calls in prior studies and that the calls had to be a minimum of two years old. The archived data were sorted by participant and subsequent calls were added to their files. This is the first longitudinal study of its kind. The resulting corpus contains multiple calls for each participant that span from two to twelve years. It is our hope that these data will enable speaker recognition researchers to explore the effects of aging on voice.

Mixer 6
Linda Brandschain | David Graff | Chris Cieri | Kevin Walker | Chris Caruso | Abby Neely
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Linguistic Data Consortium’s Human Subjects Data Collection lab conducts multi-modal speech collections to develop corpora for use in speech, speaker and language research and evaluations. The Mixer collections have evolved over the years to best accommodate the ever-changing needs of the research community and to keep one step ahead by providing increasingly challenging data. Over the years Mixer collections have grown to include socio-linguistic interviews, a wide variety of telephone conditions and multiple languages, recording conditions, channels and speech acts. Mixer 6 was the most recent collection. This paper describes the Mixer 6 Phase 1 project. Mixer 6 Phase 1 was a study supporting linguistic research, technology development and education. The objective of this study was to record speech in a variety of situations that vary in formality and to model multiple naturally occurring interactions as well as a variety of channel conditions.