Ari
@ari-holtzman.bsky.social
380 followers 690 following 57 posts
Assistant Professor @ UChicago CS & DSI. Leading Conceptualization Lab http://conceptualization.ai Minting new vocabulary to conceptualize generative models.
ari-holtzman.bsky.social
FYI that UChicago CS & Stats is hiring at all levels via the Data Science Institute:

Postdoc: uchicago.infoready4.com#freeformComp...
Assistant Professor: apply.interfolio.com/174766
Associate Professor: apply.interfolio.com/174768
ari-holtzman.bsky.social
LLM Literalism is one of the most annoying parts of interacting with an LLM. If I give an example of X and then ask it to invent its own version, it will use that example almost exactly, which is painful. It's like attempting to work with a lazy middle schooler that doesn't want to be there.
ari-holtzman.bsky.social
Just wrote a blog post about what my interest was in putting together HR Simulator™️

I can't speak for my colleagues...but hopefully they agree a bit :)

cichicago.substack.com/p/in-search-...
In Search of Tacit Knowledge
What do we mean when we say things?
cichicago.substack.com
ari-holtzman.bsky.social
Sora is leading a whole new kind of AI slop and, it is with great regret that I have to admit, I'm kind of into it
ari-holtzman.bsky.social
dreaming that these talks and discussions will be thought of as the birthplace of Communication and Intelligence as a field
ari-holtzman.bsky.social
So...are LLMs just better than the median translator for high-resource languages? Is this just a huge shift that didn't get that much attention while everyone was debating whether AGI was a philosophical zombie that hungers for brains or something??
ari-holtzman.bsky.social
Marco,

Thank you for your candid feedback. While “thanks, I guess” may not align with our gratitude policy, it’s been logged under Provisional Appreciation – Pending Clarification™. Please circle back in 2–3 business days with a confirmed stance.

Warm regards,
Ari
ari-holtzman.bsky.social
We made a game out of corporate email hell 😈
divingwithorcas.bsky.social
HR Simulator™: a game where you gaslight, deflect, and “let’s circle back” your way to victory.
Every email a boss fight, every “per my last message” a critical hit… or maybe you just overplayed your hand 🫠
Can you earn Enlightened Bureaucrat status?

(link below!)
Reposted by Ari
chenhaotan.bsky.social
As AI becomes increasingly capable of conducting analyses and following instructions, my prediction is that the role of scientists will increasingly focus on identifying and selecting important problems to work on ("selector"), and effectively evaluating analyses performed by AI ("evaluator").
ari-holtzman.bsky.social
Slack & co. need temporary subset chats. Pick people from a channel, hash things out privately, auto-delete when done. This already happens (#channel-minus-boss anyone?) but it's clunky. Sometimes groups coordinate better when subsets align first. Make the backchannels official!
ari-holtzman.bsky.social
I doubt we will find a better representation than natural language for explaining AI. Instead of looking for a formal representation we can prove things about with our beautiful mathematical tools, I think we should figure out what stories help people generalize appropriately.
ari-holtzman.bsky.social
try your best work on the thing you actually want to create and think about instead of the thing that has the right sounding name, which everyone mentions when you try to explain what you really care about even though you can tell they don't really understand what you're on about
ari-holtzman.bsky.social
most researchers probably don't want their research to be automatable (for reasonable reasons!) but personally I'd love it if most research today turned out to be easily automatable within the next 5 years—I don't think we'd have any trouble finding deeper, funkier, delightful questions
ari-holtzman.bsky.social
hypothesis: LLMs have a good predictive model of 'good writing' (based on features strongly associated with good communicators who think deeply), but when you ask them to write something good they just produce those features decoupled from the actually interesting thinking process
ari-holtzman.bsky.social
testing a game we're building whose main mechanic is writing tricky HR emails, and noticing that LLMs have a built-in secret handshake with users to bypass safety guardrails. This seems both necessary to make LLMs actually useful and like it makes guardrails essentially useless.
ari-holtzman.bsky.social
in retrospect, I think a reasonable rule would have been to disallow companies from making a public API available for an LLM without publishing a cheap, accessible, 3rd-party verified CAPTCHA for it...

I think that cat's already out of the bag, sadly
ari-holtzman.bsky.social
it will take a few years, but eventually I will stop finding this kind of thing funny
ari-holtzman.bsky.social
thinking today about how LLMs aren't yet bicycles for the mind, but I think they're starting to look like trains for the mind: they get us all to the same stations very, very fast—but we're on foot once we get there
ari-holtzman.bsky.social
how do I convey to my students that they have permission to try something daring and fail?
ari-holtzman.bsky.social
today I'm thinking about how, when an LLM is asked to follow a fictional premise to its natural conclusion, it tends to regress to the distribution of possible outcomes of fictional stories rather than following the premise in spirit
ari-holtzman.bsky.social
room temp take: I haven't seen an LLM application that goes 'deep'. It's impressive how much LLMs can do with so little guidance, but any extra guidance one does provide appears to make things *worse*

Claude Code is the only counterexample I can think of, what am I missing?
ari-holtzman.bsky.social
today I'm thinking about whether there's a branch of knowledge I never even considered learning—because I assumed no one was around to explain it to me, and passive reading wasn't enough—that I could now learn by reading and discussing with an LLM. any suggestions?