Shadab Choudhury
@namer.bsky.social
(He/Him). Previously at LARA Lab @UMBC, Mohsin Lab @BRACU | Accessibility, Explainability and Multimodal DL. My opinions are mine.
I'm on the PhD application cycle for Fall '26!
www.shadabchy.com
That's not remotely the issue here though. LLMs can already generate stories/code/designs. World models aren't the same thing as persistent memory.

They can't generate *good* stories, for reasons of verifiability and un-RL-ability, as said above, and world models won't change that at all.
February 10, 2026 at 11:07 AM
The neatest thing is just how much it looks like mold growing on the fruits. A nicely picked example.
February 6, 2026 at 10:25 PM
Like, if you don't know how to, I'm happy to show you. It's not hard or anything, and it just speeds up your workflow.

Copying and pasting each of your citations into ChatGPT is just silly.

(img source: the other site)
January 22, 2026 at 10:58 AM
Why on earth would you even do this in the first place?

Pasting the details into ChatGPT, then asking it to generate the citation is a way bigger hassle than simply having Scholar + Zotero (or another bib manager) extensions set up correctly to grab metadata and generate citations.
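
(If it helps make the point concrete: "grab metadata and generate citations" is mechanical enough to fit in a dozen lines. A rough sketch in Python, assuming the `requests` package and the standard doi.org content-negotiation endpoint, with a placeholder DOI; this is not Zotero's actual internals:)

```python
# Rough sketch, not Zotero's internals: fetch a formatted citation
# for a DOI using standard doi.org content negotiation.
# Assumes the `requests` package; the DOI used below is a placeholder.
import requests

def fetch_citation(doi: str, style: str = "apa") -> str:
    """Return a formatted citation string for the given DOI."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": f"text/x-bibliography; style={style}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text.strip()

if __name__ == "__main__":
    print(fetch_citation("10.1234/placeholder"))  # hypothetical DOI
```

Zotero's connector does roughly this under the hood, plus per-site translators for pages without a clean DOI.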
January 22, 2026 at 10:53 AM
Lowkey, I think this is intentional, and it seems like the kind of thing I would do if I were making Claude more attractive to students or other people who don't want their code to be easily recognized as LLM-generated.
January 8, 2026 at 9:26 PM
It was great listening to Drs. Ishtiaque, Ferdous, and Sultana talk about their experiences!

There's a *lot* of HCI work to be done in the scope of Bangladesh, and IMO not enough folks working on it, so HCCS should be a great addition. Looking forward to the work soon to come out of the lab.
January 4, 2026 at 1:29 PM
lowkey I appreciate that folks aren't posting linkedinisms like "In 2025 I achieved X, Y, Z" this time around.

It's fine to celebrate your wins, but I imagine for most people, 2025 was...

A Year.

...and I think that's all that needs to be said.
January 2, 2026 at 11:56 PM
Nahhh my high school friends would've also found that name funny as fuck 10 years ago
January 1, 2026 at 9:21 AM
Seen on LinkedIn.

How does someone raise $5 million and then write job ads like this?

smh...
December 25, 2025 at 2:41 PM
I don't think "money" is the simple answer, since every frontier lab is a black hole of money rn, and gene editing could've also been ludicrously profitable.

Was it the political climate? The everyday accessibility of GenAI? Lower levels of scruples in the community (not to accuse anyone directly)?
December 21, 2025 at 2:30 PM
In the mid-2010s, biology folks figured out how to do human gene editing. The community took one look, realized the consequences would be so dire for humanity, and put a hard stop to it. People like Jiankui He were excoriated for illegally editing embryos.

Why wasn't this the case with GenAI?
December 21, 2025 at 2:27 PM
And it's not ML, it's "GenAI" that raises these concerns.

People don't mind when ML's used to detect cancer or study whale speech. Those models aren't trained on human inputs and used to take human jobs. GenAI specifically, however, is trained on human inputs and used to take human jobs.
December 17, 2025 at 2:35 PM
Nah. That article's 8mo old and says they're "not there yet". Vincke's recent comment, however, explicitly mentions "flesh out PowerPoint presentations, develop concept art", and those tasks are exactly the creative jobs being lost.
December 17, 2025 at 2:31 PM
by perchance is it specifically like these 5 companies?
December 16, 2025 at 10:46 PM
I hope the Peer Reviewer Recognition Policy actually puts the sub-$33 per paper reviewed ($100 per submission split across 3 reviewers comes to ~$33, and fee-waived papers pull that down further) to good use.
December 4, 2025 at 7:25 PM
How was no one talking about this?* IJCAI-ECAI 2026 @ijcai.org is levying a $100 fee per submission unless every author on the paper is on only that one submission.

* rhetorical question. I assume the ICLR drama drowned it out
December 4, 2025 at 7:22 PM
Basically, if your Altmetric score is higher than your Accesses, you just got ratioed.
November 29, 2025 at 1:33 PM
Case in point.

Good grief.
November 27, 2025 at 10:37 PM
Took me about 5 minutes to dig out the identity of the 40 questions reviewer — after someone posted it on the other site; I dunno how to search Xiaohongshu directly.

Honestly, I don't think western academics are going to feel a fraction of the shitstorm that the Chinese ML community's probably in.
OpenReview had a bug that allowed crawling reviewer names for ICLR (and apparently for all previous and ongoing conferences), and it had been known since Nov 12…
November 27, 2025 at 7:29 PM
This is the site I unfortunately have to be professional on 😭
November 20, 2025 at 6:17 PM
many such cases

Simpsons-tier prescience
Extremely upset that a throw-away XKCD joke somehow became the organizing principle for the Internet.
November 20, 2025 at 7:53 AM
I understand the sentiment behind this, but I'm just extremely bearish on rankings that do fancy mathematical tricks or use black-box algorithms because the more of that you do, the more you can bias it towards a specific outcome.

The closer the metric stays to the raw data, the better.
November 16, 2025 at 1:14 PM
Two issues: first, this is not transparent. There's NO way to tell *which* papers were counted, nor how 'most important papers to this paper' is computed. The papers contributing to the ranking should be listed.

Second, CSRankings is licensed CC BY-NC-ND 4.0, so I'm pretty sure this is copyright infringement.
November 16, 2025 at 1:09 PM
I can't believe I spent the evening skimming through every Visual Reasoning paper at ICLR instead of finishing my SoP.
November 15, 2025 at 2:58 PM
It may be in bad taste to call out a reviewer like this, but I don't believe anyone who lists weaknesses like this is acting in good faith. It's empirical common sense to anyone working with MLLMs that larger models give better outcomes when inference speed isn't relevant.
November 15, 2025 at 2:48 PM