Changdae Oh
@changdaeoh.bsky.social
CS PhD Student @ UW-Madison

Distribution Shifts, Uncertainty Quantification, Multimodal, LLM Agents

Prev: NAVER AI Lab, CMU, USeoul
https://changdaeoh.github.io/
Pinned
We tend to conflate "autonomy" with "reliability" in AI agents. But autonomy without trust is catastrophically dangerous.

Our new paper formalizes uncertainty quantification (UQ) for LLM agents and proposes a new lens: agent uncertainty as a conditional uncertainty reduction process.
📄 huggingface.co/papers/2602....
February 7, 2026 at 4:34 PM
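One way to read "conditional uncertainty reduction", sketched here as my own illustration rather than the paper's actual formalization: track the agent's uncertainty at step t as the entropy of the final answer Y given the task and everything acted on and observed so far, so that each informative tool call or observation shrinks it in expectation.

% Hypothetical notation, not taken from the paper:
% x: task/query, Y: final answer, a_t: action at step t, o_t: resulting observation
U_t = H\left(Y \mid x,\, a_{1:t},\, o_{1:t}\right),
\qquad
\mathbb{E}_{o_{t+1}}\!\left[\,U_{t+1}\,\right] \le U_t

The inequality is just the standard fact that conditioning on an additional observation cannot increase entropy on average; under this reading, an agent step that leaves U_t unchanged adds autonomy without adding reliability.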
Reposted by Changdae Oh
Heading to #NeurIPS2024 to present our ‘Fair RAG’ paper at the #AFME2024 workshop! Let's talk about RAG, Information Retrieval, and Fairness. Honored that our paper was selected as one of the Top 5 Spotlight Papers! 🎉 Let’s connect and chat!
Paper: arxiv.org/abs/2409.11598
Towards Fair RAG: On the Impact of Fair Ranking in Retrieval-Augmented Generation
December 9, 2024 at 9:19 PM