Jiabao Ji
Also published as: JiaBao Ji
2022
Controlling the Focus of Pretrained Language Generation Models
Jiabao Ji | Yoon Kim | James Glass | Tianxing He
Findings of the Association for Computational Linguistics: ACL 2022
The finetuning of pretrained transformer-based language generation models is typically conducted in an end-to-end manner, where the model learns to attend to relevant parts of the input on its own. However, there is no mechanism to directly control the model’s focus. This work aims to develop a control mechanism by which a user can select spans of context as “highlights” for the model to focus on, and generate relevant output. To achieve this goal, we augment a pretrained model with trainable “focus vectors” that are directly applied to the model’s embeddings, while the model itself is kept fixed. These vectors, trained on automatic annotations derived from attribution methods, act as indicators of context importance. We test our approach on two core generation tasks: dialogue response generation and abstractive summarization. We also collect evaluation data in which the highlight-generation pairs are annotated by humans. Our experiments show that the trained focus vectors are effective in steering the model to generate outputs that are relevant to user-selected highlights.
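The core idea described above can be illustrated with a minimal sketch: a single trainable focus vector is added to the embeddings of the user-highlighted positions, while all other embeddings and the frozen model are untouched. This is a hypothetical simplification for illustration (the function name `apply_focus` and the use of plain Python lists are assumptions, not the paper's actual implementation, which operates on transformer embedding tensors):

```python
# Illustrative sketch (not the paper's code): add a "focus vector" to the
# embeddings at user-highlighted positions, leaving the rest unchanged.

def apply_focus(embeddings, highlight_mask, focus_vector):
    """Return new embeddings with `focus_vector` added at highlighted positions.

    embeddings:     list of d-dimensional vectors (one per input position)
    highlight_mask: list of 0/1 flags, one per position (1 = highlighted)
    focus_vector:   a single d-dimensional vector (the trainable parameter)
    """
    return [
        [e + f for e, f in zip(emb, focus_vector)] if flag else emb
        for emb, flag in zip(embeddings, highlight_mask)
    ]

# Toy example: 3 positions with 2-dim embeddings; only the middle
# position is highlighted, so only its embedding is shifted.
embs = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
mask = [0, 1, 0]
focus = [0.1, -0.1]
out = apply_focus(embs, mask, focus)
```

In the actual approach, the focus vectors are learned parameters optimized against attribution-derived annotations while the pretrained model's weights stay frozen; the sketch only shows where such a vector enters the computation.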
2021
WebSRC: A Dataset for Web-Based Structural Reading Comprehension
Xingyu Chen | Zihan Zhao | Lu Chen | JiaBao Ji | Danyang Zhang | Ao Luo | Yuxuan Xiong | Kai Yu
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Web search is an essential way for humans to obtain information, but it remains a great challenge for machines to understand the contents of web pages. In this paper, we introduce the task of web-based structural reading comprehension. Given a web page and a question about it, the task is to find an answer from the web page. This task requires a system to understand not only the semantics of the text but also the structure of the web page. Moreover, we propose WebSRC, a novel Web-based Structural Reading Comprehension dataset. WebSRC consists of 400K question-answer pairs collected from 6.4K web pages with corresponding HTML source code, screenshots, and metadata. Each question in WebSRC requires a certain structural understanding of a web page to answer, and the answer is either a text span on the web page or yes/no. We evaluate various strong baselines on our dataset to show the difficulty of our task. We also investigate the usefulness of structural information and visual features. Our dataset and baselines are publicly available.
Co-authors
- Yoon Kim 1
- James Glass 1
- Tianxing He 1
- Xingyu Chen 1
- Zihan Zhao 1