pdf
bib
Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts)
Maria Lomeli
|
Swabha Swayamdipta
|
Rui Zhang
pdf
bib
abs
Creative Planning with Language Models: Practice, Evaluation and Applications
Alexander Spangher
|
Tenghao Huang
|
Philippe Laban
|
Nanyun Peng
The use of large language models (LLMs) in human-centered creative tasks — such as journalism, scientific writing, and storytelling — has showcased their potential for content generation but highlighted a critical gap: planning. Planning, used here to describe the “actions” humans perform before (and during) the writing process, is a fundamental process in many creative domains. This tutorial explores how planning has been learned and deployed in creative workflows, unifying three scenarios: Full Data Regimens (when observational data for actions and the resulting text exist), Partial (when text exists but actions can be inferred) and Low (when neither exist). The tutorial discusses forward and backward learning approaches for planning in LLMs, evaluation metrics tailored to latent plans, and practical applications in computational journalism, web agents, and other creative domains. By bridging theoretical concepts and practical demonstrations, this tutorial aims to inspire new research directions in leveraging LLMs for creative and goal-oriented planning tasks.
pdf
bib
abs
DAMAGeR: Deploying Automatic and Manual Approaches to GenAI Red-teaming
Manish Nagireddy
|
Michael Feffer
|
Ioana Baldini
In this tutorial, we will review and apply current automatic and manual red-teaming techniques for GenAI models(including LLMs and multimodal models). In doing so, we aim to emphasize the importance of using a mixture of techniques and establishing a balance between automatic and manual approaches. Lastly, we aim to engage tutorial participants in live red-teaming activities to collaboratively learn impactful red-teaming strategies and share insights.
pdf
bib
abs
Foundation Models Meet Embodied Agents
Manling Li
|
Yunzhu Li
|
Jiayuan Mao
|
Wenlong Huang
This tutorial will present a systematic overview of recent advances in foundation models for embodied agents, covering three types of foundation models based on input and output: Large Language Models (LLMs), Vision-Language Models (VLMs), Vision-Language-Action Models (VLAs)
pdf
bib
abs
Knowledge Distillation for Language Models
Yuqiao Wen
|
Freda Shi
|
Lili Mou
Knowledge distillation (KD) aims to transfer the knowledge of a teacher (usually a large model) to a student (usually a small one). In this tutorial, our goal is to provide participants with a comprehensive understanding of the techniques and applications of KD for language models. After introducing the basic concepts including intermediate-layer matching and prediction matching, we will present advanced techniques such as reinforcement learning-based KD and multi-teacher distillation. For applications, we will focus on KD for large language models (LLMs), covering topics ranging from LLM sequence compression to LLM self-distillation. The target audience is expected to know the basics of machine learning and NLP, but do not have to be familiar with the details of math derivation and neural models
pdf
bib
abs
Adaptation of Large Language Models
Zixuan Ke
|
Yifei Ming
|
Shafiq Joty
This tutorial on adaptation of Large Language Models (LLMs) is designed to address the growing demand for models that go beyond the static capabilities of generic LLMs by providing an overview of dynamic, domain-specific, and task-adaptive LLM adaptation techniques. While general LLMs have demonstrated strong generalization across a variety of tasks, they often struggle to perform well in specialized domains such as finance, healthcare, and code generation for underrepresented languages. Additionally, their static nature limits their ability to evolve with the changing world, and they are often extremely large in size, making them impractical and costly to deploy at scale. As a result, the adaptation of LLMs has drawn much attention since the birth of LLMs and is of core importance, both for industry, which focuses on serving its targeted users, and academia, which can greatly benefit from small but powerful LLMs
pdf
bib
abs
Learning Language through Grounding
Freda Shi
|
Ziqiao Ma
|
Jiayuan Mao
|
Parisa Kordjamshidi
|
Joyce Chai
Grounding has been a long-standing concept in natural language processing (NLP) and computational linguistics (CL). This tutorial provides a historical overview and introduces recent advances in learning language through grounding, with a particular emphasis on the latter. We will begin by tracing the history of grounding and presenting a unified perspective on the term. In Parts II to IV, we will delve into recent progress in learning lexical semantics, syntax, and complex meanings through various forms of grounding. We will conclude by discussing future directions and open challenges, particularly those related to the growing trend of large language models and scaling.
pdf
bib
abs
LLMs and Copyright Risks: Benchmarks and Mitigation Approaches
Denghui Zhang
|
Zhaozhuo Xu
|
Weijie Zhao
Large Language Models (LLMs) have revolutionized natural language processing, but their widespread use has raised significant copyright concerns. This tutorial addresses the complex intersection of LLMs and copyright law, providing researchers and practitioners with essential knowledge and tools to navigate this challenging landscape. The tutorial begins with an overview of relevant copyright principles and their application to AI, followed by an examination of specific copyright issues in LLM development and deployment. A key focus will be on technical approaches to copyright risk assessment and mitigation in LLMs. We will introduce benchmarks for evaluating copyright-related risks, including memorization detection and probing techniques. The tutorial will then cover practical mitigation strategies, such as machine unlearning, efficient fine-tuning methods, and alignment approaches to reduce copyright infringement risks. Ethical considerations and future directions in copyright-aware AI development will also be discussed.
pdf
bib
abs
Social Intelligence in the Age of LLMs
Hao Zhu
|
Bodhisattwa Prasad Majumder
|
Dirk Hovy
|
Diyi Yang
With the emergence of Large Language Models (LLMs), we now have unprecedented opportunities to incorporate human-like communication and context-aware interactions into artificial systems. But what is the current state of LLMs’ capability of social interaction? Can they truly understand social scenarios, perform social reasoning, or interact with humans as socially competent agents? We propose this tutorial as an introduction to and an overview of different aspects of artificial social intelligence and their relationship with LLMs. In this tutorial, we will explore these questions by introducing scientific methods for evaluating social intelligence in LLMs, highlighting the key challenges, and identifying promising research directions. Participants will not only gain a comprehensive overview of the field’s progress, but also acquire technical skills on analysing and developing LLM-based social intelligence.