Nishant Balepur

Ph.D. Student in Computer Science at the University of Maryland, College Park


Email:

nbalepur[at]umd[dot]edu

Hello! My name is Nishant and I’m a first-year Ph.D. student at the University of Maryland, where I am fortunate to be advised by Professors Jordan Boyd-Graber and Rachel Rudinger. My work is graciously supported by the NSF GRFP and a Cohere For AI Research Grant.

I am currently working on aligning, interpreting, and guiding Muppet (LLM) outputs. My research can broadly be grouped into three questions:

  1. How can we make model outputs more factual? [EMNLP 2023a, EMNLP 2023b]
  2. How can users guide model outputs? [ACL 2023, Arxiv 2024a]
  3. How can we interpret the safety and reliability of model outputs? [Arxiv 2023, Arxiv 2024b]

I also love discovering unconventional, out-of-domain model weaknesses and then designing frameworks to overcome these issues or probe why they happen. As a result, a lot of my research starts with an observation made while trying to break current models.

If you are looking for research experience (especially if you are a UMD student), or have any questions about the NSF GRFP or anything else, please reach out!


📝 Selected Publications

2024

  1. Preprint
    Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question?
    Nishant Balepur, Abhilasha Ravichander, and Rachel Rudinger
    arXiv preprint, 2024
    Oral Presentation (7%) and Best Paper Award (4%) at MASC-SSL 2024

2023

  1. Preprint
    It’s Not Easy Being Wrong: Large Language Models Struggle with Process of Elimination Reasoning
    Nishant Balepur, Shramay Palta, and Rachel Rudinger
    arXiv preprint, 2023
  2. EMNLP 2023
    Expository Text Generation: Imitate, Retrieve, Paraphrase
    Nishant Balepur, Jie Huang, and Kevin Chen-Chuan Chang
    In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023

🥳 Me Bragging

May 3, 2024 Presented our work Artifacts or Abduction at MASC-SSL 2024 at Johns Hopkins (1 of 5 selected orals). Also extremely grateful to have been selected for 1 of 3 best paper awards! You can view my slides here.
Apr 22, 2024 Awarded a Cohere For AI Research Grant for our NLP+Education work with KAR³L. Excited for this collaboration!
Feb 20, 2024 Two new preprints on LLM interpretability and NLP for education! See the papers here: Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question? and KARL: Knowledge-Aware Retrieval and Representations aid Retention and Learning in Students
Nov 13, 2023 New preprint on LLM reasoning! View it here: It’s Not Easy Being Wrong: Large Language Models Struggle with Process of Elimination Reasoning
Oct 7, 2023 Two long papers were accepted to the EMNLP 2023 main conference! See the papers here: Expository Text Generation: Imitate, Retrieve, Paraphrase and Text Fact Transfer

😔 Me Being Honest

Apr 15, 2024 One paper not committed to ACL 2024
Feb 15, 2024 Two papers not committed to NAACL 2024
Oct 6, 2023 One paper rejected from EMNLP 2023
Mar 20, 2023 Received my first ever review score of 1 on an ARR submission