RT Pramod
@rtpramod.bsky.social
140 followers 250 following 14 posts
Postdoc at MIT. Cognitive Neuroscience.
Reposted by RT Pramod
wanghaoliang.bsky.social
Can you tell if a tower will fall or if two objects will collide — just by looking? 🧠👀 Come check out my #CogSci2025 poster (P1-W-207) on July 31, 13:00–14:15 PT to learn how people do general-purpose physical reasoning from visual input!
rtpramod.bsky.social
Good question! We haven't tested the cases you've mentioned, but Jason Fischer's 2016 paper found that the PN doesn't respond strongly to social prediction (on Heider-and-Simmel-like displays).
rtpramod.bsky.social
We have started to look in the cerebellum. It is still early days so keep an eye out for updates in the future!
rtpramod.bsky.social
Thanks to my co-authors and all the people who gave constructive feedback over the course of this project! Special shout out to Kris Brewer for shooting the videos used in Experiment 1 and @georginawooxy.bsky.social for her deep neural network expertise.

(12/12)
rtpramod.bsky.social
Our findings show that the PN carries abstract object-contact information and provide the strongest evidence yet that the PN is engaged in predicting what will happen next. These results open many new avenues of investigation into how we understand, predict, and plan in the physical world.

(11/n)
rtpramod.bsky.social
Our main results are i) not present in the ventral temporal cortex, ii) not present in the primary visual cortex -- i.e., our stimuli were unlikely to have low-level visual confounds -- and iii) replicable with different analysis criteria & methods. See paper for details.

(10/n)
rtpramod.bsky.social
Short answer: Yes! Using MVPA, we found that the PN has information about predicted contact events (i.e., collisions). This was true not only within a scenario (the ‘roll’ scene above), but also generalized across scenarios, indicating the abstractness of the representation.

(9/n)
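The cross-scenario decoding logic described above can be sketched in code. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the voxel counts, trial counts, classifier choice, and the "roll"/"drop" scenario names are all assumptions for demonstration.

```python
# Sketch of cross-scenario MVPA decoding on synthetic "voxel" data.
# Assumption: a scenario-invariant contact code means both scenarios
# share the same signal direction in voxel space.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

def simulate_scenario(signal_axis):
    """Fake voxel patterns: 'contact' trials shift along a shared axis."""
    labels = rng.integers(0, 2, n_trials)             # 0 = no contact, 1 = contact
    noise = rng.normal(size=(n_trials, n_voxels))
    patterns = noise + labels[:, None] * signal_axis  # add shared contact signal
    return patterns, labels

# Both hypothetical scenarios carry the contact signal along the same axis
shared_axis = rng.normal(size=n_voxels) * 0.5
X_roll, y_roll = simulate_scenario(shared_axis)   # e.g. a 'roll' scenario
X_drop, y_drop = simulate_scenario(shared_axis)   # e.g. a different scenario

# Train on one scenario, test on the other: above-chance accuracy is what
# would indicate an abstract (scenario-general) contact representation.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_roll, y_roll)
acc = clf.score(X_drop, y_drop)
print(f"cross-scenario decoding accuracy: {acc:.2f}")
```

With a shared signal axis the classifier transfers across scenarios; if each scenario used an unrelated axis, cross-scenario accuracy would fall to chance (~0.5), which is the contrast such analyses test.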
rtpramod.bsky.social
That is,

(8/n)
When we see this: Does the PN predict this?
rtpramod.bsky.social
In our second pre-registered fMRI experiment, we tested the central tenet of the ‘physics engine’ hypothesis – that the PN runs forward simulations to predict what will happen next. If true, PN should contain information about predicted future states before they occur.

(7/n)
rtpramod.bsky.social
Given their importance for prediction, we hypothesized that the PN would encode object contact. In our first pre-registered fMRI experiment, we used multi-voxel pattern analysis (MVPA) and found that only PN carried scenario-invariant information about object contact.

(6/n)
rtpramod.bsky.social
If a container moves, then so does its containee, but the same is not true of an object that is merely occluded by the container without contacting it!

(5/n)
rtpramod.bsky.social
However, there was no evidence for such predicted future state information in the PN. We realized that object-object contact is an excellent way to test the Physics Engine hypothesis. When two objects are in contact, their fates are intertwined:

(4/n)
rtpramod.bsky.social
These results have led to the hypothesis that the Physics Network (PN) is our brain’s ‘Physics Engine’ – a generative model of the physical world (like those used in video games) capable of running simulations to predict what will happen next.

(3/n)
rtpramod.bsky.social
How do we understand, plan, and predict in the physical world? Prior research has implicated fronto-parietal regions of the human brain (the ‘Physics Network’, PN) in physical judgement tasks, including carrying representations of object mass & physical stability.

(2/n)
Reposted by RT Pramod
emaliemcmahon.bsky.social
I am excited to share our recent preprint and the last paper of my PhD! Here, @imelizabeth.bsky.social, @lisik.bsky.social, Mick Bonner, and I investigate the spatiotemporal hierarchy of social interactions in the lateral visual stream using EEG-fMRI.

osf.io/preprints/ps...

#CogSci #EEG
Shown is an example image that participants viewed in the EEG, fMRI, and behavioral annotation tasks. There is also a schematic of a regression procedure for jointly predicting fMRI responses from stimulus features and EEG activity.
Reposted by RT Pramod
kosakowski.bsky.social
When you see this image, does it make you wonder what that baby is thinking? Do you think the baby is merely perceiving a set of shapes, or do you think that the baby is also inferring meaning from the face they are looking at? (1/5)
Video of a baby on its parent's chest looking at the parent's face and smiling.
Reposted by RT Pramod
sparuniisc.bsky.social
In a study now out in @eLife, @GeorginJacob @PramodRT9 and I have some exciting results: a novel computation that helps the brain solve disparate visual tasks, a novel brain region that performs this computation....what's not to like?! Read on.... 1/n
elifesciences.org/articles/93033
Visual homogeneity computations in the brain enable solving property-based visual tasks
Seemingly disparate property-based tasks (oddball search, same-different and symmetry) are solved by computing a novel image property, visual homogeneity, which is localized to the object selective co...
elifesciences.org
Reposted by RT Pramod
lauraknelson.bsky.social
Academics - where are academic jobs posted for non-UK non-North American countries? If you were looking for jobs in, say, the Nordic countries, or Australia, where do you look? Asking for all the PhDs who are on the market this year. (Pls no April fools jokes, their nerves are frayed as it is)
Reposted by RT Pramod
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...
Technical Associate I, Kanwisher Lab
MIT - Technical Associate I, Kanwisher Lab - Cambridge MA 02139
careers.peopleclick.com