Haebin Shin
2024
KTRL+F: Knowledge-Augmented In-Document Search
Hanseok Oh | Haebin Shin | Miyoung Ko | Hyunji Lee | Minjoon Seo
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
We introduce a new problem, KTRL+F, a knowledge-augmented in-document search task that requires real-time identification of all semantic targets within a document, with awareness of external sources, through a single natural-language query. KTRL+F addresses the following unique challenges for in-document search: 1) utilizing knowledge outside the document to provide additional information about targets, and 2) balancing real-time applicability with performance. We analyze various baselines for KTRL+F and find limitations of existing models, such as hallucinations, high latency, and difficulty in leveraging external knowledge. We therefore propose a Knowledge-Augmented Phrase Retrieval model that achieves a promising balance between speed and performance by simply augmenting phrase embeddings with external knowledge. We also conduct a user study to verify whether solving KTRL+F improves the search experience for users. It demonstrates that, even with our simple model, users spend less time searching, issue fewer queries, and make fewer extra visits to other sources when collecting evidence. We encourage the research community to work on KTRL+F to enable more efficient in-document information access.
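The abstract's core idea, fusing external knowledge into phrase embeddings before scoring candidates against the query, can be pictured with a minimal sketch. The weighted-sum fusion, the function names, and the random toy vectors below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def augment_phrase_embeddings(phrase_embs, knowledge_embs, alpha=0.5):
    """Fuse each candidate phrase embedding with an external-knowledge
    embedding (e.g., an entity description vector) by weighted addition.
    (Illustrative fusion; not the paper's exact method.)"""
    return alpha * phrase_embs + (1 - alpha) * knowledge_embs

def search(query_emb, candidate_embs, top_k=5):
    """Rank candidate phrases by inner product with the query embedding."""
    scores = candidate_embs @ query_emb
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]

# Toy example: 10 candidate phrases with 128-dim embeddings.
rng = np.random.default_rng(0)
phrase_embs = rng.normal(size=(10, 128))
knowledge_embs = rng.normal(size=(10, 128))
query_emb = rng.normal(size=128)

augmented = augment_phrase_embeddings(phrase_embs, knowledge_embs)
indices, scores = search(query_emb, augmented)
print(indices, scores)
```

Because the knowledge fusion happens once per phrase, query-time search remains a single inner-product ranking, which is where the speed/performance balance comes from.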
2022
Learning to Embed Multi-Modal Contexts for Situated Conversational Agents
Haeju Lee | Oh Joon Kwon | Yunseon Choi | Minho Park | Ran Han | Yoonhyung Kim | Jinhyeon Kim | Youngjune Lee | Haebin Shin | Kangwook Lee | Kee-Eung Kim
Findings of the Association for Computational Linguistics: NAACL 2022
The Situated Interactive Multi-Modal Conversations (SIMMC) 2.0 challenge aims to create virtual shopping assistants that can accept complex multi-modal inputs, i.e., the visual appearance of objects and user utterances. It consists of four subtasks: multi-modal disambiguation (MM-Disamb), multi-modal coreference resolution (MM-Coref), multi-modal dialog state tracking (MM-DST), and response retrieval and generation. While task-oriented dialog systems usually tackle each subtask separately, we propose a jointly learned multi-modal encoder-decoder that incorporates visual inputs and performs all four subtasks at once for efficiency. This approach won the MM-Coref and response retrieval subtasks and was named runner-up for the remaining subtasks with a single unified model at the 10th Dialog Systems Technology Challenge (DSTC10), setting a high bar for the novel task of multi-modal task-oriented dialog systems.
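One way to picture a single encoder-decoder handling all four subtasks is to cast each example as a prefixed text-to-text problem, as in the minimal sketch below. The task prefixes, the `facebook/bart-base` backbone, and the textual flattening of object attributes are illustrative assumptions rather than the authors' actual model.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Illustrative: one encoder-decoder serves all four SIMMC 2.0 subtasks
# by serializing each example as "[task prefix] dialog history [objects] ...".
model_name = "facebook/bart-base"  # assumption: any seq2seq backbone works
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

def build_input(task, history, objects):
    """Serialize a multi-modal example into a single source string.
    `objects` stands in for visual features flattened into text attributes."""
    return f"[{task}] {history} [objects] {objects}"

src = build_input(
    "MM-DST",
    "User: Do you have that jacket in red?",
    "obj_12: jacket, color=black, price=59.99",
)
inputs = tokenizer(src, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sharing one model across subtasks means a single forward pass and one set of parameters, which is the efficiency argument the abstract makes for joint learning.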