Yekyung Kim
@yekyung.bsky.social
PhD student @ UMass NLP
yekyung.bsky.social
Reasoning models "overthink" simple tasks! 🤯

o3-mini-high and DeepSeek-R1 overthink on a simple word-frequency task! Also, incorrect answers often had longer reasoning chains than correct ones.

More reasoning ≠ better accuracy!
yekyung.bsky.social
Instruction language shifts accuracy by up to 20%! 🏗️

📉 🇺🇸 context + 🇰🇷 instructions: 91% → 71%
📈 🇰🇷 context + 🇺🇸 instructions: 67% → 75%

Instruction language matters more than expected for multilingual LLMs!
yekyung.bsky.social
🏆 Gemini 1.5 Flash shines in Sesotho & Swahili, but struggles on non-Latin scripts like Chinese (ZH), Korean (KO), and Hindi (HI).
🚨 o3-mini-high underperforms on English at long contexts.
📊 Qwen2.5 > LLaMA 3.3 across all context lengths.
🚩 Non-Latin & non-Cyrillic scripts remain a challenge.
yekyung.bsky.social
The "nonexistent needle" problem 🪡

We added the option to answer "none" if the needle wasn't in the context. 🚨 o3-mini-high especially struggled: accuracy dropped 32% at 128K! It frequently answered "none" even when the needle was there.
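The scoring idea behind the "nonexistent needle" setup can be sketched in a few lines. This is a hypothetical illustration, not the ONERULER implementation: gold answers may be the literal string "none" when no needle was inserted, so a model is penalized both for missing real needles and for failing to abstain.

```python
def score_niah(predictions, golds):
    """Accuracy over a mix of real and nonexistent needles.

    golds may contain "none" for haystacks with no needle;
    the model must answer "none" in exactly those cases.
    """
    correct = sum(
        p.strip().lower() == g.strip().lower()
        for p, g in zip(predictions, golds)
    )
    return correct / len(golds)

# Example: the model wrongly abstains on a needle that IS present.
preds = ["none", "none", "7421"]
golds = ["none", "7421", "7421"]
print(score_niah(preds, golds))  # → 0.666...
```

A failure mode like o3-mini-high's shows up here as over-predicting "none", which costs accuracy on every haystack that actually contains a needle.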
yekyung.bsky.social
Performance gaps grow with context length! ⏳

At 8K tokens, high vs. low-resource language gap = 11%
At 128K tokens, the gap triples to 34%! 📉

LLMs struggle to generalize long-context skills across diverse languages.
yekyung.bsky.social
English ranks only 6th! 🤯

🇵🇱 Polish takes the top spot, while 🇨🇳 Chinese ranks 4th from the bottom, despite forming a large proportion of pretraining data.

Slavic, Romance & Germanic languages dominate, suggesting long-context strength isn’t just about training data size!
yekyung.bsky.social
Is the needle-in-a-haystack test still meaningful given the giant green heatmaps in modern LLM papers?

We create ONERULER 💍, a multilingual long-context benchmark that allows for nonexistent needles. Turns out NIAH isn't so easy after all!

Our analysis across 26 languages 🧵👇
Reposted by Yekyung Kim
lasha.bsky.social
✨I am on the faculty job market in the 2024-2025 cycle!✨

My research centers on advancing Responsible AI, specifically enhancing factuality, robustness, and transparency in AI systems.

If you have relevant positions, let me know! lasharavichander.github.io Please share/RT!
Reposted by Yekyung Kim
chautmpham.bsky.social
Long-form text generation with multiple stylistic and semantic constraints remains largely unexplored.

We present Suri 🦙: a dataset of 20K long-form texts & LLM-generated, backtranslated instructions with complex constraints.

📎 arxiv.org/abs/2406.19371
Reposted by Yekyung Kim
markar.bsky.social
I really wanted to run NEW #nocha benchmark claims on #o1 but it won't behave 😠
- 6k reasoning tokens is often not enough to get an answer, and allowing more means being able to process only short books
- OpenAI adds something to the prompt: ~8k extra tokens → less room for book + reasoning + generation!
Image showing the prompt token count as per the tokenizer (tiktoken), which is 117,609 tokens, and as per what the OpenAI API claims it to be, which is 125,385 tokens. About 7,800 extra tokens are added, coming from who knows where.
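The budget squeeze described above is easy to see with back-of-envelope arithmetic. A rough sketch using the numbers from the post (128K context window assumed; the overhead and reasoning figures are the ones reported there):

```python
# Rough token budget for running long-book claims through o1,
# using the figures from the post above (128K window assumed).
CONTEXT_WINDOW = 128_000
HIDDEN_OVERHEAD = 7_800    # extra tokens the API reportedly adds
REASONING_BUDGET = 6_000   # "often not enough" per the post

# What's left for the book text itself plus the generated answer:
room_for_book = CONTEXT_WINDOW - HIDDEN_OVERHEAD - REASONING_BUDGET
print(room_for_book)  # → 114200
```

Raising the reasoning budget eats directly into `room_for_book`, which is why only short books fit once reasoning is given enough tokens to finish.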
Reposted by Yekyung Kim
yapeichang.bsky.social
🌊 Heading to #EMNLP2024 tomorrow, presenting PostMark on Tue. morning! 🔗 arxiv.org/abs/2406.14517

Aside from this, I'd love to chat about:
• long-context training
• realistic & hard eval
• synthetic data
• tbh any cool projects people are working on

Also, I'm on the lookout for a summer 2025 internship!