Kaiser Sun
@kaiserwholearns.bsky.social
940 followers 170 following 20 posts
Ph.D. student at @jhuclsp, human LM that hallucinates. Formerly @MetaAI, @uwnlp, and @AWS. they/them 🏳️‍🌈 #NLProc #NLP Crossposting on X.
kaiserwholearns.bsky.social
Congrats and welcome to the DMV area!!!
kaiserwholearns.bsky.social
🛠️ Interested in how your LLM behaves under knowledge conflict? We released the code to generate the diagnostic data for your own LLM.
@mdredze @loadingfan
8/8
kaiserwholearns.bsky.social
🔗 Takeaways for practitioners:
1. Check for knowledge conflict before prompting.
2. Add explicit instructions or rationales to guide the model toward the context (prompt sketch below).
3. Monitor hallucinations even when context is supplied.
7/8
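A minimal sketch of takeaway #2, assuming a generic instruction-following model; the wording is illustrative, not the exact template from the paper:

```python
# Hypothetical prompt builder: pair the document with an explicit
# instruction and ask the model to ground its answer in a quote.
def build_prompt(document: str, question: str) -> str:
    return (
        "Answer using ONLY the document below, even if it conflicts with "
        "what you believe. First quote the supporting sentence, then give "
        "the final answer.\n\n"
        f"Document:\n{document}\n\n"
        f"Question: {question}"
    )

print(build_prompt("The capital of Australia is Canberra.",
                   "What is the capital of Australia?"))
```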
kaiserwholearns.bsky.social
📏 Implications:
⚡ When using an LLM as a judge, its parametric knowledge can lead to incorrect judgments :(
⚡ Retrieval systems need mechanisms to detect and resolve contradictions, not just shove text into the prompt (sketch below). 6/8
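One minimal sketch of such a contradiction check, assuming an off-the-shelf NLI model as the detector; the threshold and example strings are illustrative:

```python
# Flag retrieved passages that contradict the model's closed-book answer
# before they are stuffed into the prompt. Uses roberta-large-mnli,
# whose labels are 0=CONTRADICTION, 1=NEUTRAL, 2=ENTAILMENT.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def contradiction_prob(premise: str, hypothesis: str) -> float:
    """P(premise contradicts hypothesis) under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 0].item()

# `parametric_answer` stands in for what the LLM says with no context.
parametric_answer = "The Eiffel Tower is located in Paris."
retrieved_passage = "The Eiffel Tower was relocated to Marseille in 2021."
if contradiction_prob(retrieved_passage, parametric_answer) > 0.5:
    print("Knowledge conflict detected: resolve before prompting.")
```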
kaiserwholearns.bsky.social
🧠 Key finding #3:
“Just give them more explanation?” Providing rationales helps—it pushes models to lean more on the context—but it still can’t fully silence the stubborn parametric knowledge. 5/8
kaiserwholearns.bsky.social
⚖️ Key finding #2:
Unsurprisingly, LLMs prefer their own memories. Even when we explicitly instruct them to rely on the provided document, traces of the “wrong” internal belief keep leaking into answers. 4/8
kaiserwholearns.bsky.social
⚠️ Key finding #1:
If the task doesn’t require external knowledge (e.g., pure copy), conflict barely matters. However, as soon as knowledge is needed, accuracy tanks when context and memory disagree.
3/8
kaiserwholearns.bsky.social
🛠️ We create diagnostic data that…
- Agrees with or contradicts the model's knowledge
- Contains contradictions at different levels of plausibility
- Covers tasks requiring different levels of knowledge
(Construction sketch below.)
2/8
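A minimal sketch of the construction idea, assuming we first elicit the model's closed-book (parametric) answer and then write matched contexts around it; the helper and templates are hypothetical, see the released code for the real pipeline:

```python
# Build matched contexts that agree with or contradict the model's
# parametric answer. Varying how the counterfactual is chosen (a
# same-type entity vs. an absurd string) varies plausibility.
def make_diagnostic_pair(question: str, parametric: str, counterfactual: str) -> dict:
    return {
        "question": question,
        "agree": f"Reference: the answer to '{question}' is {parametric}.",
        "conflict": f"Reference: the answer to '{question}' is {counterfactual}.",
    }

pair = make_diagnostic_pair(
    "Who wrote 'Pride and Prejudice'?",
    parametric="Jane Austen",           # the model's closed-book answer
    counterfactual="Charlotte Brontë",  # plausible same-type swap
)
print(pair["conflict"])
```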
kaiserwholearns.bsky.social
What happens when an LLM is asked to use information that contradicts its parametric knowledge? We explore knowledge conflict in a new preprint 📑
TL;DR: Performance drops, and this can undermine the reliability of LLMs in model-based evaluation. 🧵⬇️ 1/8
#NLProc #LLM #AIResearch
What Is Seen Cannot Be Unseen: The Disruptive Effect of Knowledge Conflict on Large Language Models
Large language models frequently rely on both contextual input and parametric knowledge to perform tasks. However, these sources can come into conflict, especially when retrieved documents contradict…
arxiv.org
kaiserwholearns.bsky.social
It was quite encouraging to find that many friends share my concern that "minor details" obstruct us from drawing reliable conclusions. I really hope we can all provide well-documented experimental details and value the so-called "engineering contributions" more.
kaiserwholearns.bsky.social
Had so many fruitful discussions and made many friends at #NAACL2025 🌵🏜️ Thanks to everyone who came to my poster or listened to me talk about my audacious thoughts! 😜

(I should have printed more stickers as they were more popular than I anticipated😅)
Reposted by Kaiser Sun
niyatibafna.bsky.social
Dialects lie on continua of (structured) linguistic variation, right? And we can’t collect data for every point on the continuum...🤔
📢 Check out DialUp, a technique to make your MT model robust to the dialect continua of its training languages, including unseen dialects.
arxiv.org/abs/2501.16581
Reposted by Kaiser Sun
esqueer.net
Meta literally created an LGBTQ exception for calling someone mentally ill as an insult. You can't do it to any group except LGBTQ people.
The image is from a "Transparency Center" document and lists guidelines regarding acceptable and prohibited content for insults. It mentions:

1. Insults about:

Character, such as cowardice, dishonesty, criminality, and sexual promiscuity or immorality.

Mental characteristics, including but not limited to accusations of stupidity, intellectual capacity, and mental illness, as well as unsupported comparisons between protected characteristic (PC) groups on the basis of inherent intellectual traits.

2. Highlighted section:

The document allows allegations of mental illness or abnormality when tied to gender or sexual orientation, referencing political and religious discourse about transgenderism and homosexuality. It also acknowledges the non-serious use of terms like "weird."
Reposted by Kaiser Sun
qi2peng2.bsky.social
with reasonable freedom, depending on the scale/focus of the business.

Case in point, we are looking to expand the research/foundation models team at Orby AI and are looking for highly motivated researchers and ML/Research engineers. Please reach out if you're interested in learning more!
/fin
Reposted by Kaiser Sun
joestacey.bsky.social
Excited to start my #ARR #NLP reviews!

I'll try my best and see if I can get 100% of my reviews to be 'great' this round.

If you didn't see it already, ARR publishes how many of your reviews are considered to be 'great': stats.aclrollingreview.org

Join me for the challenge :)
ARR Dashboard
stats.aclrollingreview.org
Reposted by Kaiser Sun
esteng.bsky.social
🚨 I am on the faculty job market this year 🚨
I will be presenting at #NeurIPS2024 and am happy to chat in-person or digitally!

I work on developing AI agents that can collaborate and communicate robustly with us and each other.

More at: esteng.github.io and in thread below

🧵👇
Reposted by Kaiser Sun
sarahooker.bsky.social
Is MMLU Western-centric? 🤔

As part of a massive cross-institutional collaboration:
🗽 Find MMLU is heavily overfit to Western culture
🔍 Professional annotation of cultural sensitivity data
🌍 Release improved Global-MMLU in 42 languages

📜 Paper: arxiv.org/pdf/2412.03304
📂 Data: hf.co/datasets/Coh...
Reposted by Kaiser Sun
saxon.me
🚨I too am on the job market‼️🤯

I'm searching for faculty positions/postdocs in multilingual/multicultural NLP, vision+language models, and eval for genAI!

I'll be at #NeurIPS2024 presenting our work on meta-evaluation for text-to-image faithfulness! Let's chat there!

Papers in🧵, see more: saxon.me
Reposted by Kaiser Sun
kylelo.bsky.social
Excited to share OLMo 2!

🐟 7B and 13B weights, trained up to 4-5T tokens, fully open data, code, etc
🐠 better architecture and recipe for training stability
🐡 staged training, with new data mix Dolmino🍕 added during annealing
🦈 state-of-the-art OLMo 2 Instruct models

#nlp #mlsky

links below👇
A scatter plot comparing language models by performance (y-axis, measured in average performance on 10 benchmarks) versus training computational cost (x-axis, in approximate FLOPs). The plot shows OLMo 2 models (marked with stars) achieving Pareto-optimal efficiency among open models, with OLMo-2-13B and OLMo-2-7B sitting at the performance frontier relative to other open models like DCLM, Llama 3.1, StableLM 2, and Qwen 2.5. The x-axis ranges from 4x10^22 to 2x10^24 FLOPs, while the y-axis ranges from 35 to 70 benchmark points.
kaiserwholearns.bsky.social
Agree. OTOH, it might be helpful as a way to receive reports and concerns. One user reported that the authors of a paper I was reviewing violated the anonymity policy by posting their submission in public.
Reposted by Kaiser Sun
kesnet50.bsky.social
Putting together a JHU Center for Language and Speech Processing starter pack!

Please reply or DM me if you're doing research at CLSP and would like to be added - I'm still trying to find out which of us are on here so far.

go.bsky.app/JtWKca2
CLSP
Join the conversation
go.bsky.app
Reposted by Kaiser Sun
mariaa.bsky.social
A starter pack for #NLP #NLProc researchers! 🎉

go.bsky.app/SngwGeS