Kate Sanders
@kesnet50.bsky.social
790 followers · 390 following · 20 posts
Final year Ph.D. candidate in NLP, CV at JHU. Researching reasoning systems, multimodality, and AI for science. On the job market for full-time industry positions! #NLProc https://katesanders9.github.io/
Reposted by Kate Sanders
colmweb.org
Keynote spotlight #4: the second day of COLM will close with @ghadfield.bsky.social from JHU talking about human society alignment, and lessons for AI alignment
Reposted by Kate Sanders
cornelltech.bsky.social
Congratulations to Alane Suhr '22, a #CornellTech Ph.D. #alumni advised by associate professor Yoav Artzi, for receiving the prestigious 2022 @aaai.org / @acmsigai.bsky.social Doctoral Dissertation Award!

Read more about the award here: aaai.org/about-aaai/a...

@yoavartzi.com
AAAI/ACM SIGAI Doctoral Dissertation Award - AAAI
The AAAI/ACM SIGAI Doctoral Dissertation Award recognizes and encourages superior research and writing by doctoral candidates in AI.
aaai.org
Reposted by Kate Sanders
conradhackett.bsky.social
Time for the world to install a gigawatt of solar power capacity
2004: a year
2010: ~a month
2015: ~a week
Now: a day
ourworldindata.org/data-insight... 🧪
Line chart showing that there's been a rapid escalation in how quickly the world installs a gigawatt of solar power capacity.
Reposted by Kate Sanders
kavi.bsky.social
🚨 Urban Stats 28.0.0 🚨

The mapper has been completely redesigned by me and @spudwaffle.bsky.social, allowing for much prettier-looking maps, far more customization, and significantly more options for geographies!

See below for some examples of the maps you can create!
Reposted by Kate Sanders
yoavgo.bsky.social
When reading AI reasoning text (aka CoT), we (humans) form a narrative about the underlying computation process, which we take as a transparent explanation of model behavior. But what if our narratives are wrong? We measure this and find that they usually are.

Now on arXiv: arxiv.org/abs/2508.16599
Humans Perceive Wrong Narratives from AI Reasoning Texts
A new generation of AI models generates step-by-step reasoning text before producing an answer. This text appears to offer a human-readable window into their computation process, and is increasingly r...
arxiv.org
Reposted by Kate Sanders
danielkhashabi.bsky.social
So, what's the future of AI safety benchmarks? Jack's solution is "renewable benchmarks," which allow us to refresh and expand benchmarks with a single click!!
x.com/jackjingyuz...
Reposted by Kate Sanders
rachelfloodheaton.bsky.social
In our forthcoming paper, John Hummel and I ask what it would mean for a neural computing architecture such as a brain to implement a symbol system, and the related question of what makes it difficult for such architectures to do so, with an eye toward the differences between humans, animals, and ANNs.
From Basic Affordances to Symbolic Thought: A Computational Phylogenesis of Biological Intelligence
What is it about human brains that allows us to reason symbolically whereas most other animals cannot? There is evidence that dynamic binding, the ability to combine neurons into groups on the fly, is...
arxiv.org
Reposted by Kate Sanders
shahabbakht.bsky.social
This paper is making the rounds: arxiv.org/abs/2506.21734

A tiny (27M) brain-inspired model trained on just 1,000 samples, outperforming o3-mini-high on reasoning tasks.

#MLSky 🧠🤖
Reposted by Kate Sanders
ptnobel.bsky.social
Interested in large-scale GPU optimization? Interested in how modern neural networks are being deployed to solve classical optimization problems?

Writing a paper on these topics? Submit to the ScaleOPT workshop at NeurIPS!

www.cvxgrp.org/scaleopt/#su...
ScaleOPT
www.cvxgrp.org
Reposted by Kate Sanders
aryamccarthy.bsky.social
I'm recruiting MLEs @ #ACL2025!

Reach out if you know folks interested in legal NLP, structured prediction, and full-time work in a startup environment in NYC

I'll also always chat about:
• population-level inference on corpora
• broad-coverage semantics
• which café has the best Sachertorte in Vienna
Reposted by Kate Sanders
boydgraber.bsky.social
My students and I are presenting three papers on Monday at #ACL2025 and this thread will recap them (including their videos).
kesnet50.bsky.social
Taking off for Vienna #ACL2025! 🇦🇹 Excited to talk with people about transparent reasoning, multimodality, and fact verification. Stop by our multimodal RAG workshop on Friday 🔥🔥🔥

Please reach out if you want to grab coffee!
magmar-workshop.bsky.social
New Workshop on Multimodal Augmented Generation via MultimodAl Retrieval (MAGMaR) to be held at @aclmeeting.bsky.social ACL in Vienna this summer. We have a new shared task that stumps most LLMs - including ones pretrained on our test collection. nlp.jhu.edu/magmar/
MAGMaR Workshop
MAGMaR
nlp.jhu.edu
Reposted by Kate Sanders
mariaa.bsky.social
The #ACL2025 #ACL2025NLP feed is up and running! It matches both hashtags and any posts from or mentions of @aclmeeting.bsky.social

Pin it to your home 📌 and enjoy!

bsky.app/profile/did:...
Reposted by Kate Sanders
kavi.bsky.social
Juxtastat DAU update! Crazy how we've been >1000 every day for over a year now!

Thank you all for all your support, and make sure to keep spreading the word!
Reposted by Kate Sanders
aclmeeting.bsky.social
🥳 🎉 ❤️ The ACL 2025 Proceedings are live on the ACL Anthology 🥰 !
We’re thrilled to pre-celebrate the incredible research 📚 ✨ that will be presented starting Monday next week in Vienna 🇦🇹 !
Start exploring 👉 aclanthology.org/events/acl-2...
#NLProc #ACL2025NLP #ACLAnthology
Annual Meeting of the Association for Computational Linguistics (2025) - ACL Anthology
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) · Wanxiang Che | Joyce Nabende | Ekaterina Shutova | Mohammad Taher Pilehvar
aclanthology.org
Reposted by Kate Sanders
matttomic.bsky.social
This New Yorker piece is the most hopeful I've felt about the world in a long time.

I had no idea solar was booming like this. And if you live in the same world as me, dominated by oil & gas guys maintaining that solar and wind are inefficient gimmicks, you might not've known some of this either.
It took from the invention of the photovoltaic solar cell, in 1954, until 2022 for the world to install a terawatt of solar power; the second terawatt came just two years later, and the third will arrive either later this year or early next.
That’s because people are now putting up a gigawatt’s worth of solar panels, the rough equivalent of the power generated by one coal-fired plant, every fifteen hours. Solar power is now growing faster than any power source in history, and it is closely followed by wind power—which is really another form of energy from the sun, since it is differential heating of the earth that produces the wind that turns the turbines.
Last year, ninety-six per cent of the global demand for new electricity was met by renewables, and in the United States ninety-three per cent of new generating capacity came from solar, wind, and an ever-increasing variety of batteries to store that power.
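For scale, here is a quick back-of-the-envelope check of the quoted rate (a sketch only; the fifteen-hours-per-gigawatt figure and the terawatt milestones are taken from the excerpt above, and the calculation is purely illustrative):

```python
# Back-of-the-envelope check: at "a gigawatt's worth of solar panels every
# fifteen hours", how long does one terawatt of new capacity take?

HOURS_PER_GIGAWATT = 15        # rate quoted in the excerpt above
GW_PER_TERAWATT = 1_000        # 1 TW = 1,000 GW
HOURS_PER_YEAR = 24 * 365

hours_per_terawatt = HOURS_PER_GIGAWATT * GW_PER_TERAWATT
years_per_terawatt = hours_per_terawatt / HOURS_PER_YEAR

print(f"~{years_per_terawatt:.1f} years per terawatt at the current pace")
# ~1.7 years, which lines up with the excerpt's claim that recent terawatts
# of installed solar capacity are arriving roughly every one to two years.
```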
Reposted by Kate Sanders
niyatibafna.bsky.social
🔈When LLMs solve tasks with a mid-to-low resource input or target language, their output quality is poor. We know that. But can we put our finger on what breaks inside the LLM? We introduce the 💥 translation barrier hypothesis 💥 for failed multilingual generation with LLMs. arxiv.org/abs/2506.22724
Reposted by Kate Sanders
nsaphra.bsky.social
I wrote something up for AI people who want to get into bluesky and either couldn't assemble an exciting feed or gave up doomscrolling when their Following feed switched to talking politics 24/7.
The AI Researcher's Guide to a Non-Boring Bluesky Feed | Naomi Saphra
How to migrate to bsky without a boring feed.
nsaphra.net
Reposted by Kate Sanders
lelandmcinnes.bsky.social
Explore Wikipedia through a data map. Pages are grouped by semantic similarity into topic clusters. Hover to see details, zoom to explore more fine-grained topics, and click to go to a page. Search by page name to find interesting starting points for exploration.

lmcinnes.github.io/datamapplot_...
Reposted by Kate Sanders
arxiv-cs-cl.bsky.social
William Walden, Kathryn Ricci, Miriam Wanner, Zhengping Jiang, Chandler May, Rongkun Zhou, Benjamin Van Durme
How Grounded is Wikipedia? A Study on Structured Evidential Support
https://arxiv.org/abs/2506.12637
Reposted by Kate Sanders
kaiserwholearns.bsky.social
What happens when an LLM is asked to use information that contradicts its knowledge? We explore knowledge conflict in a new preprint📑
TLDR: Performance drops, and this could affect the overall performance of LLMs in model-based evaluation.📑🧵⬇️ 1/8
#NLProc #LLM #AIResearch
What Is Seen Cannot Be Unseen: The Disruptive Effect of Knowledge Conflict on Large Language Models
Large language models frequently rely on both contextual input and parametric knowledge to perform tasks. However, these sources can come into conflict, especially when retrieved documents contradict…
arxiv.org