Kush Juvekar
2025
CourtNav: Voice-Guided, Anchor-Accurate Navigation of Long Legal Documents in Courtrooms
Sai Khadloya | Kush Juvekar | Arghya Bhattacharya | Utkarsh Saxena
Proceedings of the Natural Legal Language Processing Workshop 2025
Judicial work depends on close reading of long records (charge sheets, pleadings, annexures, orders), often spanning hundreds of pages. With limited staff support, exhaustive reading during hearings is impractical. We present CourtNav, a voice-guided, anchor-first navigator for legal PDFs that maps a judge's spoken command (e.g., "go to paragraph 23", "highlight the contradiction in the cross-examination") directly to a highlighted paragraph in seconds. CourtNav transcribes the command, classifies intent with a grammar-first, LLM-backed router, retrieves over a layout-aware hybrid index, and auto-scrolls the viewer to the cited span while highlighting it and close alternates. By design, the interface shows only grounded passages, never free text, keeping evidence verifiable and auditable. This need is acute in India, where judgments and cross-examinations are notoriously long. In a pilot on representative charge sheets, pleadings, and orders, median time-to-relevance drops from 3–5 minutes (manual navigation) to 10–15 seconds; with quick visual verification included, 30–45 seconds. Under fixed time budgets, this navigation-first design increases the breadth of the record actually consulted while preserving control and transparency.
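A minimal sketch of the voice-to-anchor loop the abstract describes (transcribe, route intent grammar-first with an LLM fallback, retrieve against an index, return an anchor for the viewer to scroll and highlight). All names, the toy index, and the regex grammar are hypothetical illustrations under these assumptions, not the CourtNav implementation.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Anchor:
    page: int
    paragraph: int
    snippet: str

# Toy stand-in for a layout-aware index: paragraph number -> (page, text).
INDEX = {
    23: (7, "23. The accused was seen near the premises at about 9 p.m. ..."),
    24: (7, "24. In cross-examination, the witness stated ..."),
}

def transcribe(audio_path: str) -> str:
    # Placeholder for an ASR call; here we pretend the clip said this.
    return "go to paragraph 23"

def route_intent(command: str) -> dict:
    # Grammar-first routing: cheap patterns catch explicit references.
    m = re.search(r"paragraph\s+(\d+)", command, re.IGNORECASE)
    if m:
        return {"intent": "goto_paragraph", "paragraph": int(m.group(1))}
    # Fallback: an LLM-backed router would classify free-form commands
    # such as "highlight the contradiction in the cross-examination".
    return {"intent": "semantic_search", "query": command}

def retrieve(intent: dict) -> Optional[Anchor]:
    if intent["intent"] == "goto_paragraph":
        hit = INDEX.get(intent["paragraph"])
        if hit:
            page, text = hit
            return Anchor(page=page, paragraph=intent["paragraph"], snippet=text)
    # A hybrid (lexical + dense) retriever would handle semantic queries here.
    return None

if __name__ == "__main__":
    command = transcribe("hearing_clip.wav")
    anchor = retrieve(route_intent(command))
    if anchor:
        # The viewer would auto-scroll to anchor.page and highlight the span.
        print(f"Page {anchor.page}, paragraph {anchor.paragraph}: {anchor.snippet}")
```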
Are LLMs Court-Ready? Evaluating Frontier Models on Indian Legal Reasoning
Kush Juvekar | Arghya Bhattacharya | Sai Khadloya | Utkarsh Saxena
Proceedings of the Natural Legal Language Processing Workshop 2025
Large language models (LLMs) are moving into legal workflows, yet we lack a jurisdiction-grounded way to gauge their basic competence therein. We use India's public legal examinations as a transparent proxy. Our multi-year benchmark assembles objective screens from top national and state exams and evaluates open and frontier LLMs under real-world exam conditions. To probe beyond MCQs, we also include a lawyer-graded, paired-blinded study of long-form answers from the Supreme Court's Advocate-on-Record exam. This is, to our knowledge, the first exam-grounded, India-specific yardstick for LLM court-readiness released with datasets and protocols. Our work shows that while frontier systems consistently clear historical cutoffs and often match or exceed recent top-scorer bands on objective exams, none surpasses the human topper on long-form reasoning. Grader notes converge on three reliability failure modes: procedural/format compliance, authority/citation discipline, and forum-appropriate voice/structure. These findings delineate where LLMs can assist (checks, cross-statute consistency, statute and precedent lookups) and where human leadership remains essential: forum-specific drafting and filing, procedural and relief strategy, reconciling authorities and exceptions, and ethical, accountable judgment.
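A minimal sketch of the objective-exam scoring step implied by the abstract: grade a model's MCQ answers against an answer key and check the result against a historical cutoff. The negative-marking scheme, field names, and cutoff value are illustrative assumptions, not the benchmark's released protocol.

```python
def score_exam(predictions, answer_key, marks_correct=1.0, marks_wrong=-0.25):
    """Return the raw score under a typical negative-marking scheme."""
    score = 0.0
    for qid, correct in answer_key.items():
        pred = predictions.get(qid)
        if pred is None:
            continue                      # unattempted: no marks either way
        score += marks_correct if pred == correct else marks_wrong
    return score

if __name__ == "__main__":
    answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}
    predictions = {"Q1": "B", "Q2": "C"}  # Q3 left unattempted
    historical_cutoff = 1.0               # hypothetical qualifying score
    raw = score_exam(predictions, answer_key)
    print(f"raw score = {raw}, clears cutoff = {raw >= historical_cutoff}")
```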