Noah Fiedel
2023
Understanding HTML with Large Language Models
Izzeddin Gur, Ofir Nachum, Yingjie Miao, Mustafa Safdari, Austin Huang, Aakanksha Chowdhery, Sharan Narang, Noah Fiedel, Aleksandra Faust
Findings of the Association for Computational Linguistics: EMNLP 2023
Large language models (LLMs) have shown exceptional performance on a variety of natural language tasks. Yet, their capabilities for HTML understanding – i.e., parsing the raw HTML of a webpage, with applications to automation of web-based tasks, crawling, and browser-assisted retrieval – have not been fully explored. We contribute HTML understanding models (fine-tuned LLMs) and an in-depth analysis of their capabilities under three tasks: (i) Semantic Classification of HTML elements, (ii) Description Generation for HTML inputs, and (iii) Autonomous Web Navigation of HTML pages. While previous work has developed dedicated architectures and training procedures for HTML understanding, we show that LLMs pretrained on standard natural language corpora transfer remarkably well to HTML understanding tasks. For instance, when fine-tuned on data from the MiniWoB benchmark, LLMs successfully complete 50% more tasks using 192x less data compared to the previous best supervised model. We create and open-source a large-scale HTML dataset distilled and auto-labeled from CommonCrawl.
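As a rough illustration of the text-to-text framing the abstract alludes to, the sketch below turns a raw HTML snippet into an input/target pair for the Semantic Classification task. The prompt template, the target="true" marker, and the label set are illustrative assumptions, not the encoding used in the paper.

```python
# Hypothetical sketch: framing HTML element semantic classification as a
# text-to-text task for a fine-tuned LLM. The prompt template, element
# marker, and label vocabulary are illustrative assumptions only.

# Assumed set of semantic categories for form elements.
LABELS = ["username", "password", "email", "search", "submit_button", "other"]

def build_example(html_snippet: str, label: str) -> dict:
    """Build an (input_text, target_text) pair for seq2seq fine-tuning."""
    input_text = (
        "classify html element | "
        f"candidates: {', '.join(LABELS)} | snippet: {html_snippet}"
    )
    return {"input_text": input_text, "target_text": label}

snippet = (
    '<form><label for="user">Email</label>'
    '<input id="user" type="text" target="true"/>'  # element to classify
    '<input type="submit" value="Sign in"/></form>'
)

example = build_example(snippet, label="email")
print(example["input_text"])
print(example["target_text"])
```

Framing the task as plain text generation is what allows a pretrained LLM to be fine-tuned on HTML inputs without adding a task-specific classification head.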
2021
Do Transformer Modifications Transfer Across Implementations and Applications?
Sharan Narang, Hyung Won Chung, Yi Tay, Liam Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, Yanqi Zhou, Wei Li, Nan Ding, Jake Marcus, Adam Roberts, Colin Raffel
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption. In this paper, we comprehensively evaluate many of these modifications in a shared experimental setting that covers most of the common uses of the Transformer in natural language processing. Surprisingly, we find that most modifications do not meaningfully improve performance. Furthermore, most of the Transformer variants we found beneficial were either developed in the same codebase that we used or are relatively minor changes. We conjecture that performance improvements may strongly depend on implementation details and correspondingly make some recommendations for improving the generality of experimental results.