Utkarsh Saxena
2025
CourtNav: Voice-Guided, Anchor-Accurate Navigation of Long Legal Documents in Courtrooms
Sai Khadloya
|
Kush Juvekar
|
Arghya Bhattacharya
|
Utkarsh Saxena
Proceedings of the Natural Legal Language Processing Workshop 2025
Judicial work depends on close reading of long records, charge sheets, pleadings, annexures, and orders, often spanning hundreds of pages. With limited staff support, exhaustive reading during hearings is impractical. We present CourtNav, a voice-guided, anchor-first navigator for legal PDFs that maps a judge's spoken command (e.g., "go to paragraph 23", "highlight the contradiction in the cross-examination") directly to a highlighted paragraph in seconds. CourtNav transcribes the command, classifies intent with a grammar-first, LLM-backed router, retrieves over a layout-aware hybrid index, and auto-scrolls the viewer to the cited span while highlighting it and close alternates. By design, the interface shows only grounded passages, never free text, keeping evidence verifiable and auditable. This need is acute in India, where judgments and cross-examinations are notoriously long. In a pilot on representative charge sheets, pleadings, and orders, median time-to-relevance drops from 3–5 minutes (manual navigation) to 10–15 seconds; with quick visual verification included, 30–45 seconds. Under fixed time budgets, this navigation-first design increases the breadth of the record actually consulted while preserving control and transparency.
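The abstract's "grammar-first, LLM-backed router" can be sketched as a rule cascade: deterministic patterns handle well-formed commands, and anything unmatched falls through to an LLM classifier. The rule set, intent names, and fallback stub below are illustrative assumptions, not the paper's actual implementation.

```python
import re

# Hypothetical grammar rules for two of the spoken commands mentioned in the
# abstract; a real router would cover many more intents.
RULES = [
    (re.compile(r"go to paragraph (\d+)", re.I), "goto_paragraph"),
    (re.compile(r"highlight\b", re.I), "highlight_span"),
]

def route_intent(command: str) -> str:
    """Grammar-first routing: try regex rules, else defer to an LLM."""
    for pattern, intent in RULES:
        if pattern.search(command):
            return intent
    # Stand-in for the LLM-backed classifier used for open-ended commands.
    return "llm_fallback"

route_intent("go to paragraph 23")                              # grammar hit
route_intent("find the witness's earlier statement on the car") # falls to LLM
```

The grammar-first ordering keeps latency low and behavior auditable for the common, well-structured commands, reserving the LLM for genuinely ambiguous requests.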
Are LLMs Court-Ready? Evaluating Frontier Models on Indian Legal Reasoning
Kush Juvekar
|
Arghya Bhattacharya
|
Sai Khadloya
|
Utkarsh Saxena
Proceedings of the Natural Legal Language Processing Workshop 2025
Large language models (LLMs) are moving into legal workflows, yet we lack a jurisdiction-grounded way to gauge their basic competence therein. We use India's public legal examinations as a transparent proxy. Our multi-year benchmark assembles objective screens from top national and state exams and evaluates open and frontier LLMs under real-world exam conditions. To probe beyond MCQs, we also include a lawyer-graded, paired-blinded study of long-form answers from the Supreme Court's Advocate-on-Record exam. This is, to our knowledge, the first exam-grounded, India-specific yardstick for LLM court-readiness released with datasets and protocols. Our work shows that while frontier systems consistently clear historical cutoffs and often match or exceed recent top-scorer bands on objective exams, none surpasses the human topper on long-form reasoning. Grader notes converge on three reliability failure modes: procedural/format compliance, authority/citation discipline, and forum-appropriate voice/structure. These findings delineate where LLMs can assist (checks, cross-statute consistency, statute and precedent lookups) and where human leadership remains essential: forum-specific drafting and filing, procedural and relief strategy, reconciling authorities and exceptions, and ethical, accountable judgment.
2024
Eigen Attention: Attention in Low-Rank Space for KV Cache Compression
Utkarsh Saxena
|
Gobinda Saha
|
Sakshi Choudhary
|
Kaushik Roy
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) represent a groundbreaking advancement in the domain of natural language processing due to their impressive reasoning abilities. Recently, there has been considerable interest in increasing the context lengths for these models to enhance their applicability to complex tasks. However, at long context lengths and large batch sizes, the key-value (KV) cache, which stores the attention keys and values, emerges as the new bottleneck in memory usage during inference. To address this, we propose Eigen Attention, which performs the attention operation in a low-rank space, thereby reducing the KV cache memory overhead. Our proposed approach is orthogonal to existing KV cache compression techniques and can be used synergistically with them. Through extensive experiments over OPT, MPT, and Llama model families, we demonstrate that Eigen Attention results in up to 40% reduction in KV cache sizes and up to 60% reduction in attention operation latency with minimal drop in performance.
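The core idea of the abstract, performing attention in a low-rank space so the KV cache stores rank-r projections instead of full d-dimensional keys and values, can be illustrated with a minimal NumPy sketch. The calibration procedure, dimensions, and single-head layout here are assumptions for illustration; the paper's method (per-layer bases, integration with existing compression) is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 64, 16, 128  # head dim, target rank, sequence length (illustrative)

# Calibration step: take an orthonormal basis for the top-r principal
# directions of representative keys (here random stand-ins) via SVD.
K_calib = rng.normal(size=(1024, d))
_, _, Vt = np.linalg.svd(K_calib, full_matrices=False)
P = Vt[:r].T  # (d, r) projection onto the rank-r eigenbasis

def low_rank_attention(Q, K, V):
    """Attention with keys/values cached in rank-r space (~r/d the memory)."""
    Kr, Vr = K @ P, V @ P              # cache (n, r) instead of (n, d)
    Qr = Q @ P                         # project queries into the same basis
    scores = Qr @ Kr.T / np.sqrt(d)    # scores computed entirely in rank r
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return (w @ Vr) @ P.T              # lift the output back to d dimensions

Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = low_rank_attention(Q, K, V)      # shape (n, d)
```

Because the cached tensors shrink from (n, d) to (n, r), KV-cache memory scales with r/d, which is how a rank reduction translates into the cache savings the abstract reports.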
Co-authors
- Arghya Bhattacharya 2
- Kush Juvekar 2
- Sai Khadloya 2
- Sakshi Choudhary 1
- Kaushik Roy 1