Daniel Biś
2022
PAIGE: Personalized Adaptive Interactions Graph Encoder for Query Rewriting in Dialogue Systems
Daniel Biś | Saurabh Gupta | Jie Hao | Xing Fan | Chenlei Guo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
Unexpected responses or repeated clarification questions from conversational agents detract from the users’ experience with technology meant to streamline their daily tasks. To reduce these frictions, Query Rewriting (QR) techniques replace transcripts of faulty queries with alternatives that lead to responses that satisfy the users’ needs. Despite their successes, existing QR approaches are limited in their ability to fix queries that require considering users’ personal preferences. We improve QR by proposing Personalized Adaptive Interactions Graph Encoder (PAIGE). PAIGE is the first QR architecture that jointly models users’ affinities and query semantics end-to-end. The core idea is to represent previous user-agent interactions and world knowledge in a structured form (a heterogeneous graph) and apply message passing to propagate latent representations of users’ affinities to refine utterance embeddings. Using these embeddings, PAIGE can potentially provide different rewrites given the same query for users with different preferences. Our model, trained without any human-annotated data, improves the rewrite retrieval precision of state-of-the-art baselines by 12.5–17.5% while having nearly ten times fewer parameters.
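The abstract names the mechanism (message passing over a heterogeneous interaction graph to refine utterance embeddings) without implementation detail. Below is a minimal sketch of one typed message-passing step; the relation types, dimensions, mean aggregation, and GRU-based update are illustrative assumptions, not the published PAIGE architecture.

```python
import torch
import torch.nn as nn

class HeteroMessagePassing(nn.Module):
    """One round of typed message passing over a heterogeneous graph:
    each relation type gets its own projection, and incoming messages
    are mean-aggregated to refine a target node's embedding (e.g., an
    utterance node absorbing a user's affinity nodes). Hypothetical
    layer for illustration only."""

    def __init__(self, dim, relation_types):
        super().__init__()
        self.rel_proj = nn.ModuleDict({r: nn.Linear(dim, dim) for r in relation_types})
        self.update = nn.GRUCell(dim, dim)  # gated update of node states

    def forward(self, node_h, edges):
        # edges: {relation: (src_idx LongTensor, dst_idx LongTensor)}
        agg = torch.zeros_like(node_h)
        count = torch.zeros(node_h.size(0), 1)
        for rel, (src, dst) in edges.items():
            msg = self.rel_proj[rel](node_h[src])        # relation-specific message
            agg.index_add_(0, dst, msg)                  # sum messages per target node
            count.index_add_(0, dst, torch.ones(len(dst), 1))
        agg = agg / count.clamp(min=1)                   # mean over incoming messages
        return self.update(agg, node_h)

# Example: utterance node 0 refined by two affinity neighbors (nodes 1, 2).
h = torch.randn(3, 16)
layer = HeteroMessagePassing(16, ["user_likes"])
h_new = layer(h, {"user_likes": (torch.tensor([1, 2]), torch.tensor([0, 0]))})
```

Stacking several such rounds lets affinity information propagate over multi-hop paths in the graph, which is what allows the same query to yield different rewrite candidates for users with different interaction histories.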
2021
Too Much in Common: Shifting of Embeddings in Transformer Language Models and its Implications
Daniel Biś | Maksim Podkorytov | Xiuwen Liu
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
The success of language models based on the Transformer architecture appears to be inconsistent with observed anisotropic properties of representations learned by such models. We resolve this by showing, contrary to previous studies, that the representations do not occupy a narrow cone, but rather drift in common directions. At any training step, all of the embeddings except for the ground-truth target embedding are updated with gradient in the same direction. Compounded over the training set, the embeddings drift and share common components, manifested in their shape in all the models we have empirically tested. Our experiments show that isotropy can be restored using a simple transformation.
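The common-direction drift described above implies that anisotropy is largely a shared offset rather than a narrow cone. A minimal sketch of that intuition, assuming the "simple transformation" is mean-centering (the toy data, dimensions, and the average-cosine proxy are illustrative, not the paper's experiments):

```python
import numpy as np

def avg_cosine(E):
    """Mean pairwise cosine similarity, a common anisotropy proxy:
    near 1 when embeddings share a dominant direction, near 0 when
    they are isotropic."""
    En = E / np.linalg.norm(E, axis=1, keepdims=True)
    sims = En @ En.T
    n = len(E)
    return (sims.sum() - n) / (n * (n - 1))  # average off-diagonal entry

# Toy embeddings: isotropic Gaussian vectors plus a shared "drift"
# component, mimicking the common directions the paper describes.
rng = np.random.default_rng(0)
drift = 5.0 * rng.normal(size=768)
E = rng.normal(size=(1000, 768)) + drift

print(f"before centering: {avg_cosine(E):.3f}")                  # close to 1
print(f"after centering:  {avg_cosine(E - E.mean(axis=0)):.3f}")  # near 0
```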