Kartik Chandra
@kartikchandra.bsky.social
1.3K followers 100 following 37 posts
I'm a PhD student at MIT CSAIL. More about me: https://cs.stanford.edu/~kach
kartikchandra.bsky.social
Congratulations, Charley! I'm so excited to see how your lab will grow and evolve over the coming years. :)
kartikchandra.bsky.social
I just had a lovely morning teaching memo with Lio Wong at #COSMOS2025 in Tokyo! Charley and Wataru have put together an absolutely *fantastic* summer school. Fascinating talks, delightful people… and excellent location. I feel so lucky to be here. If you ever get a chance to attend COSMOS, take it!
thecharleywu.bsky.social
Now up at #COSMOS2025: @kartikchandra.bsky.social & Lio Wong giving a tutorial on recursive social reasoning using MEMO github.com/kach/memo
Fun fact: those flowers between them and 🗻 are called "cosmos".
Collab notebook here to follow along 👉 cosmossummerschool.github.io/materials/#g...
kartikchandra.bsky.social
The full, uncropped picture explains what is going on here: the sun is reflected by two buildings with different window tints. The reflection from the blue building casts the yellow shadow, and vice versa. (Studying graphics reminds me just how overwhelmingly beautiful the everyday visual world is…)
kartikchandra.bsky.social
Three years ago, at SIGGRAPH '22 in Vancouver, I took this picture of a pole casting two shadows: one blue and one yellow, from yellow and blue streetlamps respectively.

Today, back in Vancouver for SIGGRAPH '25, I saw the same effect in sunlight! How can one sun cast two colored shadows? Hint in 🧵
kartikchandra.bsky.social
While I was in Rotterdam for CogSci '24, I visited the Escher Museum and fell in love with his work all over again. Now, a year later, I'm delighted by this SIGGRAPH paper led by my friend Ana!

(Say what you will about the technical details, you must admit that we came up with the ~perfect~ title.)
kartikchandra.bsky.social
Thanks to a rogue Partiful RSVP form at #cogsci2025, I seem to have collected an unexpectedly large dataset (N=197) of whether cognitive scientists think the mind is composed of innate, domain-specialized modules…
A bar chart showing frequencies of answers to the question "Is the mind composed of innate, domain-specialized modules?" with N=197. The most prominent bars are no response (~60), yes (~30) and yo (~30). The rest of the entries are funny, e.g. "I hope so."
kartikchandra.bsky.social
I'm excited to give a ~mysterious new talk~ at this very special SIGGRAPH workshop on art & cognitive science! See you soon in Vancouver. :)
yaelvinker.bsky.social
I'm very excited to announce our #SIGGRAPH2025 workshop:
Drawing & Sketching: Art, Psychology, and Computer Graphics 🎨🧠🫖

🔗 lines-and-minds.github.io
📅 Sunday, August 10th

Join us to explore how people draw, how machines draw, and how the two might draw together! 🤖✍️
kartikchandra.bsky.social
We've planned a fun, interactive session with something for everyone: whether you're a curious beginner looking to get started, or a seasoned expert looking to push the frontiers of what's possible. There will be games, live-coding, flash talks, and more. Max, Dae and I can't wait to see you! :)
kartikchandra.bsky.social
If you're curious what memo is: it's a programming language specialized for social cognition and theory-of-mind. It lets you write models concisely, using special syntax like "Kartik knows" and "Max chooses," and it compiles to fast GPU code. Lots of CogSci '25 papers use memo! github.com/kach/memo
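To give a flavor of the kind of nested reasoning memo expresses (this is a plain-Python analogue, NOT memo's actual syntax — the scenario and names here are made up for illustration): one agent knows a hidden fact and acts on it, and another agent inverts that choice process to infer the fact.

```python
import numpy as np

# Toy "knows"/"chooses" model in plain NumPy (hypothetical example,
# not memo code): Kartik knows which of 3 doors hides a prize and
# gives a hint; Max reasons about Kartik's hint policy to infer it.

doors = np.arange(3)

def kartik_hints(door):
    # Kartik points at one door uniformly at random among the doors
    # that do NOT hide the prize: P(hint | door).
    p = np.ones(3)
    p[door] = 0.0            # never points at the prize door
    return p / p.sum()

def max_infers(hint):
    # Max's Bayesian posterior over doors given the hint,
    # starting from a uniform prior.
    prior = np.ones(3) / 3
    likelihood = np.array([kartik_hints(d)[hint] for d in doors])
    post = prior * likelihood
    return post / post.sum()

print(max_infers(0))  # hint at door 0 → posterior [0, 0.5, 0.5]
```

memo lets you write this kind of model declaratively and compiles the resulting array computations to fast GPU code; the point of the sketch is just the recursive structure, where one agent's inference loop contains a model of another agent's choice.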
kartikchandra.bsky.social
As always, CogSci has a fantastic lineup of workshops this year. An embarrassment of riches!

Still deciding which to pick? If you are interested in building computational models of social cognition, I hope you consider joining @maxkw.bsky.social, @dae.bsky.social, and me for a crash course on memo!
cogscisociety.bsky.social
#Workshop at #CogSci2025
Building computational models of social cognition in memo

🗓️ Wednesday, July 30
📍 Pacifica I - 8:30-10:00
🗣️ Kartik Chandra, Sean Dae Houlihan, and Max Kleiman-Weiner
🧑‍💻 underline.io/events/489/s...
Promotional image for a #CogSci2025 workshop titled “Building computational models of social cognition in memo.” Organized and presented by Kartik Chandra, Sean Dae Houlihan, and Max Kleiman-Weiner. Scheduled for July 30 at 8:30 AM in room Pacifica I. The banner features the conference theme “Theories of the Past / Theories of the Future,” and the dates: July 30–August 2 in San Francisco.
kartikchandra.bsky.social
If you're here at RLDM today, you are invited to check out this exciting workshop on social cognition organized by Joe Barnby and Amrita Lamba! I'm giving a talk on programming languages for theory-of-mind at 11:30. Here's the schedule: sites.google.com/view/rldm202...

@rldmdublin2025.bsky.social
kartikchandra.bsky.social
(The *real* puzzle: how come our visual systems make such incredible inferences from shadows, and yet we barely notice such glaring inconsistencies unless we go looking for them? I just read Casati and Cavanagh's book "The Visual World of Shadows," which has made me fall in love with this question…)
kartikchandra.bsky.social
Here is a little inverse graphics puzzle: The no-parking sign on my side of Vassar Street casts its shadow in a dramatically different direction from the leafless tree on the far side of the street — almost 90° apart. How is that possible, given that the sun casts parallel rays? (Hints in thread.)
kartikchandra.bsky.social
Hi there! Thanks for the kind words.

I think everything should be linked from my homepage here: cs.stanford.edu/~kach/

And you can still subscribe to the RSS feed for new posts.
kartikchandra.bsky.social
I thought I would share these incantations in case anyone else is having the same realization today, and is up for an adventure. :)

See also: pandoc.org/MANUAL.html
kartikchandra.bsky.social
(1) First, you can convert your LaTeX to plain text like this:

pandoc --wrap=none main.tex -o main.txt

(2) But this does not automatically format references. For that, you can run this additional command:

pandoc --wrap=none --bibliography=refs.bib --citeproc main.txt | pandoc -t plain --wrap=none
kartikchandra.bsky.social
Last night, I realized that a workshop's submission portal only accepted plain text copy-pasted into a text box. Instead of manually de-TeX-ifying my beautifully-formatted PDF (5-minute job), I decided to work out how to do it automatically with pandoc (1-hour research project). Here's what I found…
kartikchandra.bsky.social
Yes! As one example of the impact SGI sponsorships have, I got the opportunity to attend SGI 2024 in Boston thanks to a generous industry sponsorship. I really hope other students can have that same opportunity in future years!
kartikchandra.bsky.social
(I don't think it was just that we students were hungry and/or easy to bribe. I think the professor's gesture made us feel like the class was a community of friends who were there for more than simply giving/getting grades. After that, showing up every day felt like the obvious natural thing to do!)
kartikchandra.bsky.social
Possibly-helpful anecdote: my undergrad complexity theory professor once went shopping for a new carpet at the mall. Next to the carpet store was a bakery, so he spontaneously decided to buy cupcakes for everyone in class the next day. From then on, he got near-perfect attendance for every lecture…
kartikchandra.bsky.social
There's much more to say about this—especially about why we think these ideas are important to graphics. For more, see our paper arxiv.org/abs/2409.13507 or Matt's upcoming talk at SIGGRAPH Asia.

In the meantime, enjoy a video where every sound effect is a vocal imitation produced by our method! :)
kartikchandra.bsky.social
When compared to actual vocal imitations produced by actual humans, our model predicts people's behavior quite well…

But we never "trained" our model on any dataset of human vocal imitations! Human-like imitations emerged simply from encoding basic principles of human communication into our model.
A bar plot and scatter plots showing a tight correlation between our method and humans.
kartikchandra.bsky.social
We designed a method for producing human-like vocal imitations of real-world sounds. It works by combining models of the human vocal tract (like "Pink Trombone"), human hearing (via feature extraction), and human communicative reasoning (the "Rational Speech Acts" framework from cognitive science).
A system diagram showing how the components of our system connect.
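For readers unfamiliar with the Rational Speech Acts framework mentioned above, here is a minimal sketch of its core recursion on a toy reference game (this is standard RSA, not the vocal-imitation model itself; the meaning matrix and rationality parameter are made-up illustrations):

```python
import numpy as np

# Toy RSA reference game: two utterances, two possible referents.
# meanings[u, w] = 1 if utterance u is literally true of world w.
meanings = np.array([
    [1.0, 1.0],   # "glasses": literally true of both referents
    [0.0, 1.0],   # "hat": literally true only of referent 1
])

alpha = 4.0  # speaker rationality (assumed value)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener: P(w | u), proportional to literal truth.
L0 = normalize(meanings, axis=1)

# Pragmatic speaker: P(u | w), softly maximizing the literal
# listener's probability of recovering the intended referent.
S1 = normalize((L0 ** alpha).T, axis=1)

# Pragmatic listener: P(w | u), inverting the pragmatic speaker.
L1 = normalize(S1.T, axis=1)

print(L1[0])  # "glasses" now implicates referent 0 (the one "hat" can't name)
```

The recursion captures the pragmatic inference "if they meant referent 1, they would have said the more specific thing" — the same kind of communicative reasoning the post describes encoding into the imitation model.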