Dimitris Papailiopoulos
@dimitrisp.bsky.social
Researcher @MSFTResearch; Prof @UWMadison (on leave); learning in context; thinking about reasoning; babas of Inez Lily.
https://papail.io
In the meantime, here's a rule of thumb: "if your project can be vibe-coded in an hour, and amounts to O(10) LoC edits on something existing, or is a convergence proof that o4-mini can do with a bit of guidance, DO NOT write a paper about it" :D
May 16, 2025 at 2:57 PM
I think that the current most bulletproof peer review has been "people will read/try your stuff, and if it works they build on it". But because it's not attached to a formal process on OpenReview, we discard it as non-scientific.
May 16, 2025 at 2:57 PM
It seems to me that this is totally misaligned with scientific discovery and progress. I don't believe this is a result of bad actors, btw. It's just that huge, complex systems that are O(100) years old take a long time to change and readjust to new realities. We'll eventually figure it out.
May 16, 2025 at 2:57 PM
It seems to me that it's mostly ML academia (I am part of it!) that is a proponent of keeping peer review and mega ML conferences going & the bean counter running. We've not found a solution to reviews converging to random coin tosses, at a huge expense of human work hours.
May 16, 2025 at 2:57 PM
If that's indeed the case (I believe we can measure that), and their key function is social, a way for people to connect (that's great!), what's the point of having peer review, and of using # of NeurIPS papers as a bean counter?
May 16, 2025 at 2:57 PM
My post is a direct criticism of the 100k NeurIPS submissions issue. It's beyond clear that research dissemination--for the most part--does not happen through conferences anymore.
May 16, 2025 at 2:57 PM
What if for most of your findings you just post a thread and share a GitHub repo, rather than submitting a 15 page NeurIPS paper with < 1/100 the reach?
May 16, 2025 at 2:57 PM
LLMs learn world models, beyond a reasonable doubt. It's been the case since GPT-3, but now it should be even more clear. Without them "Guess and Check" would not work.

The fact that these "world models" are approximate/incomplete does not disqualify them.
May 12, 2025 at 6:38 PM
Working on the yapping part :)
May 8, 2025 at 3:58 AM
Hmm... temp has to be 0.6-0.8; this looks like very low-temp output.
May 8, 2025 at 2:31 AM
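For context, a minimal sketch of what sampling temperature does (illustrative only; the function name and logits are made up, not tied to any specific model): logits are divided by the temperature before the softmax, so very low temperatures collapse the distribution toward the argmax, which is what makes low-temp output look flat and repetitive, while the 0.6-0.8 range keeps some diversity.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: divide logits by T, then normalize."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

logits = [2.0, 1.0, 0.5]                     # toy next-token logits
low = softmax_with_temperature(logits, 0.1)  # nearly one-hot: greedy-looking samples
mid = softmax_with_temperature(logits, 0.7)  # the 0.6-0.8 regime: still diverse
```

At T=0.1 the top token takes essentially all the probability mass; at T=0.7 it keeps a clear lead but the alternatives remain live, which is why sampled text stops looking deterministic.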
I don’t see at all how this is intellectually close to what Shannon wrote. Can you clarify? I read it as computing statistics and how these are compatible with theoretical conjectures. There’s no language generation implicit in the article. Am I misreading it?
May 7, 2025 at 11:02 PM
can you share the paper?
May 7, 2025 at 10:05 PM
BTW, for historical context, 1948 is very, very early to have these thoughts. So I actually think that every single sentence written is profound. This is kinda random, but here is how Greece looked back then. IT WAS SO EARLY :) x.com/DimitrisPapa...
Dimitris Papailiopoulos on X: "For historical context, here are some pictures of the Greek islands in the late 40s https://t.co/WJsM83IHW1" / X
May 7, 2025 at 6:44 PM
It's not that profound. It just says there's no wall, if all the stars are aligned. It's an optimistic read of the setting.
May 7, 2025 at 5:24 PM
Is 1948 widely acknowledged as the birth of language models and tokenizers?

In "A Mathematical Theory of Communication", almost as an afterthought, Shannon suggests N-gram models for generating English, and that word-level tokenization is better than character-level tokenization.
May 7, 2025 at 12:05 PM
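For illustration, a minimal sketch of the word-level N-gram idea Shannon describes, as a toy bigram sampler (the function names and corpus here are mine, not Shannon's): estimate bigram statistics from text, then generate each word conditioned on the previous one.

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Collect word-level bigram statistics: map each word to its observed successors."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)  # duplicates preserved, so sampling from the list
    return model                 # reproduces the empirical bigram frequencies

def generate(model: dict, start: str, n: int, seed: int = 0) -> str:
    """Sample up to n words, each conditioned on the previous word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        successors = model.get(out[-1])
        if not successors:
            break                # dead end: the last word was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the quick brown fox jumps over the lazy dog and the quick dog sleeps"
model = train_bigram(corpus)
print(generate(model, "the", 6))
```

This is exactly a "series of approximations to English": with character-level units the samples barely look like words, while word-level units produce locally plausible phrases, which is the tokenization point Shannon makes in passing.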
Reposted by Dimitris Papailiopoulos
🎉The Phi-4 reasoning models have landed on HF and Azure AI Foundry. The new models are competitive and often outperform much larger frontier models. It is exciting to see the reasoning capabilities extend to more domains beyond math, including algorithmic reasoning, calendar planning, and coding.
May 1, 2025 at 12:50 AM
I am afraid to report, RL works.

I think 2-3 years ago I said I would not work on two ML sub-areas. RL was one of them. I am happy to say that I am not strongly attached to my beliefs.
April 30, 2025 at 8:08 PM
researchers
April 30, 2025 at 4:43 PM
Re: The Chatbot Arena Illusion

Every eval chokes under hill climbing. If we're lucky, there's an early phase where *real* learning (by both the model and the community) can occur. I'd argue that a benchmark's value lies entirely in that window. So the real question is: what did we learn?
April 30, 2025 at 4:38 PM
Also, "sycophant" etymologically means "the one who shows the figs"; the origin of the meaning is somewhat debated: it refers either to illegally importing figs, or to falsely accusing someone of hiding illegally imported figs.
April 28, 2025 at 1:58 PM
Fun trivia now that “sycophant” became common language to describe LLMs flattering users:

In Greek, συκοφάντης (sykophántēs) most typically refers to a malicious slanderer, someone spreading lies, not flattery!

Every time you use it, you’re technically using it wrong :D
April 28, 2025 at 1:58 PM
Come work with us at MSR AI Frontiers and help us figure out reasoning!
We're hiring at the Senior Researcher level (e.g., post-PhD).
Please drop me a DM if you're interested!
jobs.careers.microsoft.com/us/en/job/17...
February 21, 2025 at 3:48 PM
bsky doesn't like GIFs, here they are from the other site x.com/DimitrisPapa...
February 13, 2025 at 1:41 PM
Super proud of this work that was led by Nayoung Lee and Jack Cai, with mentorship from Avi Schwarzschild and Kangwook Lee

link to our paper: arxiv.org/abs/2502.01612
Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges
Large language models often struggle with length generalization and solving complex problem instances beyond their training distribution. We present a self-improvement approach where models iterativel...
February 13, 2025 at 1:33 PM
Oh btw, self-improvement can become exponentially faster in some settings, e.g. when we apply it to pretrained models (again, this is all for add/mul/maze etc.)
February 13, 2025 at 1:33 PM