Kevin Ros
2022
Translation between Molecules and Natural Language
Carl Edwards | Tuan Lai | Kevin Ros | Garrett Honke | Kyunghyun Cho | Heng Ji
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
We present MolT5, a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings. MolT5 allows for new, useful, and challenging analogs of traditional vision-language tasks, such as molecule captioning and text-based de novo molecule generation (together: translation between molecules and language), which we explore for the first time. Because MolT5 pretrains models on single-modal data, it helps overcome the scarcity of labeled data in the chemistry domain. Furthermore, we consider several metrics, including a new cross-modal embedding-based metric, to evaluate the tasks of molecule captioning and text-based molecule generation. Our results show that MolT5-based models are able to generate outputs, both molecules and captions, that are in many cases of high quality.
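As a rough illustration of the molecule-captioning task described in the abstract, the sketch below assumes a MolT5-style checkpoint released on the Hugging Face Hub and uses the standard transformers T5 classes. The checkpoint identifier and the caffeine SMILES input are assumptions for illustration, not details taken from this listing.

```python
# Minimal sketch of molecule captioning with a MolT5-style checkpoint.
# The model identifier below is an assumption; substitute whichever
# MolT5 checkpoint you actually have available.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "laituan245/molt5-large-smiles2caption"  # assumed checkpoint id
tokenizer = T5Tokenizer.from_pretrained(model_name, model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Example input: a SMILES string (caffeine).
smiles = "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"
inputs = tokenizer(smiles, return_tensors="pt")

# Generate a natural-language description of the molecule.
outputs = model.generate(**inputs, num_beams=5, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The reverse task, text-based de novo molecule generation, would follow the same pattern with a caption-to-SMILES checkpoint, feeding a textual description in and decoding a SMILES string out.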
Generation of Student Questions for Inquiry-based Learning
Kevin Ros | Maxwell Jong | Chak Ho Chan | ChengXiang Zhai
Proceedings of the 15th International Conference on Natural Language Generation