Zekun Zhao


2026

The potential of AI conversational agents to foster student learning and reduce teacher strain in classroom settings has made pedagogical agents a prime research target. In particular, an effective agent must understand both student language and the content students are learning, and must be able to map between the two. Curricular terminology and student speech, though topically and semantically related, differ significantly in surface-form expression. We present the JIA-AMRs Collection, a new resource for exploring whether Abstract Meaning Representations (AMRs) can optimize interventions by a conversational AI agent in a middle-school classroom by providing structured semantic representations of classroom language. The resource also provides an avenue for verifying the agent's interventions. We discuss the challenges of creating a corpus of meaning representations that maps across highly dissimilar classroom data (multimedia curriculum, student spoken language, and student written language), and we report promising results: a trained parser gains nearly 30 points over the off-the-shelf model.

2025

In this paper, we present LiDARR (**Li**nking **D**ocument **A**MRs with **R**eferents **R**esolvers), a web tool for document-level semantic annotation using the Abstract Meaning Representation (AMR) formalism. LiDARR streamlines the creation of comprehensive knowledge graphs from natural language documents through semantic annotation. The tool features a visual, interactive user interface that transforms document-level AMR annotation into a model-assisted verification process, achieved by integrating an AMR-to-surface alignment model and a coreference resolution model. Additionally, we incorporate PropBank rolesets into LiDARR to annotate implicit roles in AMRs, allowing those roles to be linked through coreference chains.