Ryan Lagasse


2025

Mitigating Forgetting in Continual Learning with Selective Gradient Projection
Anika Singh | David Martinez | Aayush Dhaulakhandi | Varun Chopade | Likhith Malipati | Vasu Sharma | Kevin Zhu | Sunishchal Dev | Ryan Lagasse
The 14th International Joint Conference on Natural Language Processing and The 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics

As neural networks are increasingly deployed in dynamic environments, they face catastrophic forgetting: the tendency to overwrite previously learned knowledge when adapting to new tasks, resulting in severe performance degradation on earlier tasks. We propose Selective Forgetting-Aware Optimization (SFAO), a dynamic method that regulates gradient directions via cosine similarity and per-layer gating, enabling controlled forgetting while balancing plasticity and stability. SFAO selectively projects, accepts, or discards updates using a tunable mechanism with an efficient Monte Carlo approximation. Experiments on standard continual learning benchmarks show that SFAO achieves competitive accuracy with markedly lower memory cost (a 90% reduction) and reduced forgetting on the MNIST benchmarks, making it suitable for resource-constrained scenarios.
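
A minimal sketch of the per-layer gating idea described in the abstract, in the spirit of similarity-gated gradient projection. The function name, thresholds, and the choice of reference gradient are hypothetical illustrations, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def gate_gradient(layer_grad, reference_grad, accept_thresh=0.0, discard_thresh=-0.9):
    """Per-layer gating sketch: accept, project, or discard an update based on
    its cosine similarity with a reference (previous-task) gradient.
    Threshold values here are illustrative, not the paper's settings."""
    g, r = layer_grad.flatten(), reference_grad.flatten()
    cos = F.cosine_similarity(g, r, dim=0)
    if cos >= accept_thresh:
        # aligned with the reference direction: accept the update as-is
        return layer_grad
    if cos <= discard_thresh:
        # strongly conflicting with prior knowledge: discard the update
        return torch.zeros_like(layer_grad)
    # mildly conflicting: project out the component opposing the reference
    projected = g - (g @ r) / (r @ r + 1e-12) * r
    return projected.view_as(layer_grad)
```

In a continual-learning loop, a gate like this would be applied layer by layer before the optimizer step; the abstract's Monte Carlo approximation presumably supplies a cheap estimate of the reference direction rather than stored per-task gradients.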

HalluTree: Explainable Multi-Hop Hallucination Detection for Abstractive Summarization
Daniel Orshansky | Oskar Oomen | Naaisha Agarwal | Ryan Lagasse
Proceedings of The 5th New Frontiers in Summarization Workshop

Black-box verifiers for abstractive summaries often struggle with complex claims that require multi-hop reasoning, and they typically provide a single verdict without an interpretable rationale. As a result, it becomes difficult to understand or audit their failures. We address this with HalluTree, a framework that models verification as an interpretable claim tree. HalluTree first decomposes summaries into subclaims, classifying each into one of two types – extractive (directly verifiable against evidence) or inferential (requiring reasoning) – which follow distinct verification paths. Extractive claims are robustly verified against evidence using an ensemble of lightweight NLI models. Crucially, inferential claims trigger a process that generates a natural program – an explicit reasoning chain that integrates supporting evidence and logical steps – which is then executed to determine the claim’s validity. Evaluation on the LLM-AggreFact benchmark demonstrates HalluTree’s effectiveness: it achieves performance competitive with top-tier black-box models, including Bespoke-MiniCheck, while providing transparent and auditable reasoning programs for every inferential judgment. This combination of competitive accuracy and high interpretability offers a significant advance over opaque, single-classification verifiers. We will publicly release code, data, prompts, and other artifacts upon acceptance.
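
A rough structural sketch of the two-path verification flow the abstract describes. The dataclass, function signatures, and injected components (decomposer, NLI ensemble, program generator/executor) are hypothetical placeholders, not the released implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Literal

@dataclass
class SubClaim:
    text: str
    kind: Literal["extractive", "inferential"]  # the two claim types from the abstract

def verify_summary(
    summary: str,
    evidence: str,
    decompose: Callable[[str], List[SubClaim]],    # claim-tree decomposition (assumed interface)
    nli_entails: Callable[[str, str], bool],       # ensemble of lightweight NLI models
    gen_program: Callable[[str, str], str],        # produces a "natural program" reasoning chain
    run_program: Callable[[str, str], bool],       # executes the chain against the evidence
) -> List[dict]:
    """Outline of the pipeline: decompose the summary, then route each
    subclaim down the extractive or inferential verification path."""
    results = []
    for claim in decompose(summary):
        if claim.kind == "extractive":
            # directly verifiable: check entailment against the source evidence
            verdict = nli_entails(claim.text, evidence)
        else:
            # inferential: generate an explicit, auditable reasoning program, then execute it
            program = gen_program(claim.text, evidence)
            verdict = run_program(program, evidence)
        results.append({"claim": claim.text, "kind": claim.kind, "supported": verdict})
    return results
```

The component functions are passed in rather than hard-coded, since the abstract specifies what each stage does but not its concrete interface.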