Nishant Balepur

Ph.D. Student in Computer Science at University of Maryland, College Park

Email:

nbalepur[at]umd[dot]edu

Hello! My name is Nishant and I’m a first-year Ph.D. student in the CLIP Lab at the University of Maryland, where I am fortunate to be advised by Professors Jordan Boyd-Graber and Rachel Rudinger. As of 2023, I am extremely thankful to be funded by the NSF GRFP.

I am currently working on aligning, interpreting, and guiding Muppet/LLM outputs. My research can broadly be grouped into three questions:

  1. How can we make model outputs more factual? [EMNLP 2023a, EMNLP 2023b]
  2. How can users guide model outputs? [ACL 2023, arXiv 2024a]
  3. How can we interpret the safety and reliability of model outputs? [arXiv 2023, arXiv 2024b]

I also love discovering unconventional, out-of-domain model weaknesses, then designing frameworks to overcome these issues or probe why they happen. As a result, much of my research starts with an observation made while trying to break current models.

If you are looking for research experience (especially UMD students), or have any questions about NSF or anything else, please reach out!


📝 Selected Publications

2024

  1. Preprint
    Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question?
    Nishant Balepur, Abhilasha Ravichander, and Rachel Rudinger
    arXiv preprint, 2024

2023

  1. Preprint
    It’s Not Easy Being Wrong: Large Language Models Struggle with Process of Elimination Reasoning
    Nishant Balepur, Shramay Palta, and Rachel Rudinger
    arXiv preprint, 2023
  2. EMNLP 2023
    Expository Text Generation: Imitate, Retrieve, Paraphrase
    Nishant Balepur, Jie Huang, and Kevin Chen-Chuan Chang
    In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023

🤩 Research Highlights

Apr 22, 2024 Awarded the Cohere For AI Research Grant for our NLP+Education work with KAR³L. Excited for this collaboration!
Feb 20, 2024 Two new preprints on LLM interpretability and NLP for education! See the papers here: Artifacts or Abduction: How Do LLMs Answer Multiple-Choice Questions Without the Question? and KARL: Knowledge-Aware Retrieval and Representations aid Retention and Learning in Students
Nov 13, 2023 New preprint on LLM reasoning! View it here: It’s Not Easy Being Wrong: Large Language Models Struggle with Process of Elimination Reasoning
Oct 7, 2023 Two long papers were accepted to EMNLP 2023 main conference! See the papers here: Expository Text Generation: Imitate, Retrieve, Paraphrase and Text Fact Transfer
May 1, 2023 My first ever (first-authored) paper on user-guided dynamic topic mining was accepted to Findings of ACL 2023! See the paper here: DynaMiTE: Discovering Explosive Topic Evolutions with User Guidance
Mar 29, 2023 Extremely honored to receive the NSF Graduate Research Fellowship. Thank you to everyone who made this possible!

😔 Research Lowlights

Apr 15, 2024 One paper not committed to ACL 2024
Feb 15, 2024 Two papers not committed to NAACL 2024
Oct 6, 2023 One paper rejected from EMNLP 2023
Mar 20, 2023 My first ever review score of 1 received on an ARR submission