Mandeep Rathee
@mandeeprathee.bsky.social
PhD Researcher in Information Retrieval
Reposted by Mandeep Rathee
📢 The #JustREACH project has officially begun!

🎯JustREACH is dedicated to advancing #JustClimateResilience by empowering local and regional authorities, industries, businesses, and citizens to effectively implement climate adaptation and smart specialization plans.

✅Follow for more updates!
September 17, 2025 at 12:48 PM
I will present our paper “Breaking the Lens of the Telescope: Online Relevance Estimation over Large Retrieval Sets” at #SIGIR2025
🕰️ 10:30 AM (16.07.2025)
📍Location: GIOTTO (Floor 0)
Full Paper: dl.acm.org/doi/10.1145/...
Slides: sigir2025.dei.unipd.it/detailed-pro...
July 15, 2025 at 10:54 PM
🎉 Thrilled to share that I will be presenting our paper SUNAR at #NAACL2025 on May 6, 2025!

We introduce a novel approach that leverages LLMs to guide neighborhood-aware retrieval for complex QA.

This is joint work with Venktesh and Avishek Anand.

Link to the paper: arxiv.org/pdf/2503.17990
April 28, 2025 at 1:56 PM
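For readers wondering what “neighborhood-aware retrieval” looks like in practice, here is a rough sketch of the general idea only, not the SUNAR implementation: start from a first-stage ranking and alternate between scoring candidates and pulling in the corpus-graph neighbours of the best-scoring documents, so relevant documents missed by the initial retriever can still be reached. The `score` function and `corpus_graph` below are illustrative placeholders.

```python
# Minimal sketch of neighborhood-aware adaptive retrieval (not the SUNAR code).
# `score` and `corpus_graph` are placeholders you would supply yourself.
from typing import Callable

def adaptive_retrieve(
    query: str,
    initial_ranking: list[str],           # doc ids from a first-stage retriever
    corpus_graph: dict[str, list[str]],   # doc id -> ids of its nearest neighbours
    score: Callable[[str, str], float],   # (query, doc id) -> relevance score
    budget: int = 100,                    # max number of documents to score
    batch_size: int = 10,
) -> list[str]:
    frontier = list(initial_ranking)      # candidates waiting to be scored
    scored: dict[str, float] = {}         # doc id -> score

    while frontier and len(scored) < budget:
        # Score the next batch of unseen candidates.
        batch = [d for d in frontier[:batch_size] if d not in scored]
        frontier = frontier[batch_size:]
        for doc in batch:
            scored[doc] = score(query, doc)

        # Neighborhood step: enqueue neighbours of the best documents seen so far.
        top_docs = sorted(scored, key=scored.get, reverse=True)[:3]
        for doc in top_docs:
            for neighbour in corpus_graph.get(doc, []):
                if neighbour not in scored and neighbour not in frontier:
                    frontier.append(neighbour)

    # Final ranking over everything that was scored.
    return sorted(scored, key=scored.get, reverse=True)
```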
Had a great time at #ECIR2025 in Lucca, Italy! 🎉 I met some amazing researchers, really enjoyed the city, and the Italian food was fantastic.
April 10, 2025 at 7:09 AM
Excited to share that I’ll be presenting my paper "Guiding Retrieval using LLM-based Listwise Rankers" at #ECIR2025 this Wednesday!

This work uses LLM-based listwise rerankers to guide retrieval, improving recall beyond the initially retrieved candidates.

📄 Paper: arxiv.org/pdf/2501.09186
💻 Code: github.com/Mandeep-Rath...

#IR #LLM #ECIR2025
April 5, 2025 at 6:37 PM
Reposted by Mandeep Rathee
Guiding Retrieval using LLM-based Listwise Rankers

Introduces a method to integrate listwise LLM rerankers into adaptive retrieval, improving nDCG@10 by up to 13.23% and recall by 28.02% while maintaining efficiency.

📝 arxiv.org/abs/2501.09186
👨🏽‍💻 github.com/Mandeep-Rath...
Large Language Models (LLMs) have shown strong promise as rerankers, especially in "listwise" settings where an LLM is prompted to rerank several search results at once. However, this "cascading" ...
January 17, 2025 at 3:07 AM
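For readers unfamiliar with the "listwise" setting mentioned in the abstract above, here is a minimal illustrative sketch (not the paper's code) of prompting an LLM to rerank a handful of retrieved passages at once. The model name, prompt wording, and output parsing are all assumptions.

```python
# Illustrative sketch of listwise reranking with an LLM (not the paper's code).
# The model name and prompt format are assumptions; swap in whatever LLM you use.
from openai import OpenAI

client = OpenAI()

def listwise_rerank(query: str, passages: list[str]) -> list[int]:
    """Ask the LLM to order the given passages by relevance to the query.

    Returns passage indices in ranked order (most relevant first).
    """
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages))
    prompt = (
        f"Query: {query}\n\n"
        f"Passages:\n{numbered}\n\n"
        "Rank the passages from most to least relevant to the query. "
        "Answer with the passage numbers only, e.g. 2 > 0 > 1."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable LLM works here
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content or ""

    # Parse "2 > 0 > 1"-style output back into a list of unique indices.
    seen: set[int] = set()
    ranking: list[int] = []
    for tok in text.replace(">", " ").split():
        tok = tok.strip("[].,")
        if tok.isdigit():
            idx = int(tok)
            if idx < len(passages) and idx not in seen:
                seen.add(idx)
                ranking.append(idx)

    # Fall back to the original order for any passages the model omitted.
    ranking += [i for i in range(len(passages)) if i not in seen]
    return ranking
```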