Yuguang Wang
Protein engineering is important for various biomedical applications, but traditional approaches are often inefficient and resource-intensive. While deep learning (DL) models have shown promise, their implementation remains challenging for biologists without specialized computational expertise. To address this gap, we propose AutoProteinEngine (AutoPE), an innovative agent framework that leverages large language models (LLMs) for multimodal automated machine learning (AutoML) in protein engineering. AutoPE introduces a conversational interface that allows biologists without DL backgrounds to interact with DL models using natural language, lowering the entry barrier for protein engineering tasks. AutoPE uniquely integrates LLMs with AutoML to handle both protein sequence and graph modalities, automate hyperparameter optimization, and facilitate data retrieval from protein databases. We evaluated AutoPE on two real-world protein engineering tasks, demonstrating substantial improvements in model performance compared to traditional zero-shot and manual fine-tuning approaches. By bridging the gap between DL and biologists’ domain expertise, AutoPE empowers researchers to leverage advanced computational tools without extensive programming knowledge.
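
To make the agent workflow sketched in this abstract concrete, below is a minimal, hypothetical Python sketch of how a conversational front end might turn a natural-language request into a fine-tuning job with automated hyperparameter search. Every name here (parse_request, run_hpo, the toy objective, the search space) is an illustrative assumption, not the authors' actual API; the real system would call an LLM and train real protein models.

import random

# Hypothetical task spec that an LLM agent might extract from a
# biologist's natural-language request (hard-coded for demonstration).
def parse_request(message: str) -> dict:
    return {"task": "stability_regression", "modality": "sequence"}

def train_and_eval(task: dict, lr: float, batch_size: int) -> float:
    # Stand-in for fine-tuning a protein model and returning a
    # validation score; replaced here by a toy objective.
    return -abs(lr - 3e-4) * 1e3 - abs(batch_size - 32) / 32

def run_hpo(task: dict, trials: int = 20) -> dict:
    """Random-search hyperparameter optimization, a simple proxy
    for the AutoML component described in the abstract."""
    best = {"score": float("-inf")}
    for _ in range(trials):
        lr = 10 ** random.uniform(-5, -3)
        bs = random.choice([8, 16, 32, 64])
        score = train_and_eval(task, lr, bs)
        if score > best["score"]:
            best = {"score": score, "lr": lr, "batch_size": bs}
    return best

if __name__ == "__main__":
    task = parse_request("Predict stability of my mutant library.")
    print(run_hpo(task))

The point of the sketch is the division of labor: the LLM handles intent extraction, while a conventional search loop handles optimization, so the biologist never writes training code directly.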
This paper describes our speech translation system for the IWSLT 2018 English-to-German speech translation task on lectures and TED talks. We adopt a pipeline approach consisting mainly of an Automatic Speech Recognition (ASR) system, a post-processing module, and a Neural Machine Translation (NMT) system. Our ASR system is an ensemble of Deep-CNN, BLSTM, and TDNN models with an N-gram language model and lattice rescoring. We report average results on tst2013, tst2014, and tst2015; our best combination system achieves an average WER of 6.73. The machine translation system is based on Google’s Transformer architecture. We achieved an improvement of 3.6 BLEU over the baseline system by applying several techniques, such as cleaning the parallel corpus, fine-tuning single models, model ensembling, and rescoring with additional features. Our final average result on speech translation is 31.02 BLEU.
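
As a rough illustration of the cascade design described in this abstract, the Python sketch below wires an ASR stage, a post-processing stage, and an NMT stage into one pipeline. The stage classes and toy functions are assumptions for exposition only; the paper's actual components (the Deep-CNN/BLSTM/TDNN ASR ensemble and the Transformer NMT system) are far more involved.

from typing import Callable, List

# Minimal cascade speech-translation pipeline: ASR -> post-processing -> NMT.
# Each stage is a plain callable so real models can be dropped in.
class CascadePipeline:
    def __init__(self, asr: Callable[[bytes], str],
                 post: Callable[[str], str],
                 nmt: Callable[[str], str]):
        self.stages: List[Callable] = [asr, post, nmt]

    def translate(self, audio: bytes) -> str:
        out = audio
        for stage in self.stages:
            out = stage(out)
        return out

# Toy stand-ins (a real system would plug in the ASR ensemble and
# the Transformer NMT model here).
def toy_asr(audio: bytes) -> str:
    return "hello world this is a lecture"

def toy_post(text: str) -> str:
    # Restore punctuation and casing so the NMT input looks like written text.
    return text.capitalize() + "."

def toy_nmt(text: str) -> str:
    return "Hallo Welt, dies ist eine Vorlesung."

if __name__ == "__main__":
    pipeline = CascadePipeline(toy_asr, toy_post, toy_nmt)
    print(pipeline.translate(b"\x00\x01"))  # fake audio bytes

The design choice worth noting is that the post-processing module sits between ASR and NMT precisely because raw ASR output lacks the punctuation and casing that written-text translation models expect.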