Ribhu
ribhulahiri.com
@ribhulahiri.com
Improving decision making in medicine @ Miimansa | [email protected] | 🎓UCSD, PlakshaTLF
Pinned
Folks from Delhi, people who know folks from Delhi, or anyone just interested in keeping up with the haps of the city of hearts: I created this list today to help us all stay connected. Tag me or DM me to be added.

bsky.app/profile/did:...
Introduce yourself with:

One Book 📚
One Movie 🎥
One Album 💿
One TV Show 📺
December 10, 2025 at 3:12 AM
Every NBA season I learn of a new way to spell the name "Jaylen"
November 9, 2025 at 3:35 AM
"Complex systems should emerge from simple parts connected by clean interfaces"

The principle on which Unix was founded, and one that guides the building of any software system with a degree of complexity.

Can this be replicated in AI systems? Here are some thoughts I had on this 👇
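(The canonical illustration of that Unix principle is the pipeline: small single-purpose tools composed through the plain-text interface. A minimal sketch, not from the linked thread:)

```shell
# Each tool does one thing; stdin/stdout text is the clean interface.
# Count word frequencies, most common first:
printf 'b\na\nb\n' | sort | uniq -c | sort -rn
```

The composition is the point: none of `sort` or `uniq` knows about the others, yet chaining them yields a new capability.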
October 18, 2025 at 12:17 PM
The amount of AI Slop on Xitter is crazy!
Yeah man we should really fight back by staying on X
September 21, 2025 at 2:01 PM
Why can't these <7B reasoning models stop yapping to themselves?
September 3, 2025 at 11:59 AM
Wait a sec.. emotion-aware reasoning???
StepFun releases Step-Audio 2!

An end-to-end LALM designed for industry-strength audio understanding and speech interaction.

✨ Emotion-aware reasoning
✨ Switch timbres with natural language
✨ Intelligent speech conversation
September 1, 2025 at 6:47 AM
Reposted by Ribhu
Blogpost to read today: a strong argument that excessive focus on the first tokens is not something learned from the data distribution (as if the model should naturally "care" about the start of the text to grasp the rest) but a fundamental feature of the attention graph. publish.obsidian.md/the-tensor-t...
August 24, 2025 at 4:33 PM
Reposted by Ribhu
Now do 'causal inference'
August 10, 2025 at 3:25 AM
will we get GPT-5 before GTA 6?
August 4, 2025 at 3:01 AM
was doing some interesting work around something similar. This just catalyses it 🚀

More soon 👀
August 2, 2025 at 10:39 AM
Thankfully I don't log my guilty pleasures #LastFourWatched
July 25, 2025 at 4:01 PM
Reposted by Ribhu
Don't leave AI to the STEM folks.

They are often far worse at getting AI to do stuff than those with a liberal arts or social science bent. LLMs are built from the vast corpus of human expression, and knowing the history & obscure corners of human works lets you do far more with AI & grasp its limits.
July 20, 2025 at 6:06 PM
Most people don't realize it, but this is basically R1 all over again
July 16, 2025 at 3:06 AM
Reposted by Ribhu
Fully open machine learning requires not only GPU access but a community commitment to openness. (Some nostalgic lessons from the ImageNet decade.)
An open mindset
The commitments required for fully open source machine learning
www.argmin.net
July 10, 2025 at 2:28 PM
Dude was and is still way ahead of his time
Ilya on deep learning in 2015
Annotating an interview he gave at NeurIPS 2015 with my basic reflections of what works today and how people should approach working in deep learning (or getting started).
buff.ly/APz3IDj
July 3, 2025 at 3:20 AM
Reposted by Ribhu
Mamdani must explain his tweet from 1740 BC claiming that Ea-nāṣir had "high quality copper"
June 29, 2025 at 2:42 AM
Reposted by Ribhu
My friend, I *am* the tool.
June 2, 2025 at 10:15 AM
Reposted by Ribhu
I relate to llms a lot. Like them, I also spent my entire life reading books in a small room. And like them I have appallingly poor vision and performance on normal human tasks
May 30, 2025 at 4:37 PM
Reposted by Ribhu
Meta's LLM as a judge via RL

Optimizes the judging task into thoughts, scores, and judgments using GRPO. Outperforms all baselines at 8B & 70B scale, beats o1-mini, and on some benchmarks even R1.
May 16, 2025 at 9:57 PM
Reposted by Ribhu
It is almost an afterthought because he was drawing on a range of existing works (especially of Markov and Zipf) and ideas. This section in Shannon is an extension of earlier work by Markov, but moving from letters to words, making use of the concepts introduced in Markov's paper, which was well known.
Is 1948 widely acknowledged as the birth of language models and tokenizers?

In "A Mathematical Theory of Communication", almost as an afterthought, Shannon suggests the N-gram for generating English, and that word-level tokenization is better than character-level tokenization.
May 7, 2025 at 9:35 PM
The Internet Experience of 2025 in a nutshell
May 5, 2025 at 3:29 AM
I don't know why the mainstream doesn't get this nuance
Again, saying AI is useful does not mean that AI should not be critiqued, criticized, or even rejected. But it is dishonest to say it has no value.
April 27, 2025 at 9:40 AM
From doing a shadow drop of the Oblivion remaster to giving free game keys to the Skyblivion devs, Bethesda is really going back to the good old days of being in touch with the gaming community
April 25, 2025 at 4:31 PM
Reposted by Ribhu
From Daak Vaak
April 24, 2025 at 2:18 PM
I don't know if this is hilarious or just plain sad
The internet needs a new rule, the Twitter equivalent of rule 34, that whenever you say "nobody believes <outlandish thing>," the person who believes it will show up.

Perhaps the "Law of ROUS."
April 24, 2025 at 3:08 PM