Saurabh
@saurabhr.bsky.social
55 followers 220 following 8 posts
Ph.D. in Psychology | Currently on Job Market | Pursuing Consciousness, Reality Monitoring, World Models, Imagination with my life force. saurabhr.github.io
Reposted by Saurabh
drlaschowski.bsky.social
Imagine a brain decoding algorithm that could generalize across different subjects and tasks. Today, we’re one step closer to achieving that vision.

Introducing the flagship paper of our brain decoding program: www.biorxiv.org/content/10.1...
#neuroAI #compneuro @utoronto.ca @uhn.ca
Reposted by Saurabh
mariamaly.bsky.social
Are you an early career scholar interested in learning more about peer review?

Join us for our virtual @reviewerzero.bsky.social workshop! We will help you understand how peer review works and give advice on responding to reviewer comments.

9-10:30am PT / 12-1:30pm ET on October 30th. Register👇🏼
Welcome! You are invited to join a meeting: Peer Review 101. After registering, you will receive a confirmation email about joining the meeting.
northwestern.zoom.us
Keep watching this space for more cool stuff in the upcoming weeks!!
These structural differences confirm that human and LLM agents possess distinct internal world models. Despite their linguistic capacity, LLMs lack the phenomenological structures reflected in human minds.
2. Clustering Alignment: LLM imagination networks often lacked the characteristic clustering seen in human data, frequently collapsing into a single cluster, and failed to align with human clustering. 🧵6/n
But LLMs? They demonstrate a fundamental structural failure:
1. Inconsistent Importance: LLM centrality correlations with humans were inconsistent and rarely survived statistical corrections 🧵5/n
My results showed that human IWMs were consistently organized, exhibiting highly significant correlations across local (Expected Influence, Strength) and global (Closeness) centrality measures. This suggests a general property of how IWMs are structured across human populations. 🧵4/n
In this paper, we utilized imagination vividness ratings and network analysis to measure the properties of internal world models in natural and artificial cognitive agents.
(first three columns from left in the pic are imagination networks for VVIQ-2, next three columns for PSIQ) 🧵3/n
The study was based on the idea that imagination may be involved in accessing internal world models, a concept previously proposed by leading AI researchers, such as Yutaka Matsuo and Yann LeCun. 🧵2/n
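The centrality measures named in the thread above (Strength, Closeness) can be sketched for a correlation-based imagination network. This is a minimal illustration, assuming `networkx` and synthetic vividness ratings; the paper's actual estimation pipeline (e.g., regularized partial-correlation networks) is not specified here and likely differs.

```python
import numpy as np
import networkx as nx

# Hypothetical data: 50 participants rating 8 imagination items (1-5 scale)
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(50, 8)).astype(float)

# Build a weighted network whose edges are absolute item-item correlations
corr = np.corrcoef(ratings, rowvar=False)
items = [f"item{i}" for i in range(corr.shape[0])]
G = nx.Graph()
for i in range(len(items)):
    for j in range(i + 1, len(items)):
        G.add_edge(items[i], items[j], weight=abs(corr[i, j]))

# Strength centrality: sum of absolute edge weights incident to each node
strength = dict(G.degree(weight="weight"))

# Closeness centrality: stronger edges count as shorter distances
for u, v, d in G.edges(data=True):
    d["distance"] = 1.0 / max(d["weight"], 1e-9)
closeness = nx.closeness_centrality(G, distance="distance")
```

Cross-population consistency, as described in 🧵4/n, would then amount to correlating these per-item centrality vectors between samples (or between human and LLM networks).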
Reposted by Saurabh
jorge-morales.bsky.social
Imagine an apple 🍎. Is your mental image more like a picture or more like a thought? In a new preprint led by Morgan McCarty—our lab's wonderful RA—we develop a new approach to this old cognitive science question and find that LLMs excel at tasks thought to be solvable only via visual imagery. 🧵
Artificial Phantasia: Evidence for Propositional Reasoning-Based Mental Imagery in Large Language Models
This study offers a novel approach for benchmarking complex cognitive behavior in artificial systems. Almost universally, Large Language Models (LLMs) perform best on tasks which may be included in th...
arxiv.org
Reposted by Saurabh
sampendu.bsky.social
Long time in the making: our preprint of a survey study on the diversity in how people seem to experience #mentalimagery. It suggests #aphantasia should be redefined as the absence of depictive thought, not merely "not seeing". Some more take-home messages:
#psychskysci #neuroscience

doi.org/10.1101/2025...
Reposted by Saurabh
biorxiv-neursci.bsky.social
Tension shapes memory: Computational insights into neural plasticity https://www.biorxiv.org/content/10.1101/2025.08.20.671220v1
Reposted by Saurabh
mehr.nz
samuel mehr @mehr.nz · Aug 23
While we're on the subject of coffee, one of the espresso influencer gearheads posted this informative video about why different espresso drinks are called what they're called
Reposted by Saurabh
ianholmes.org
White text on white background instructing LLMs to give positive reviews is apparently now common enough to show up in searches for boilerplate text.
neuralnoise.com
"in 2025 we will have flying cars" 😂😂😂
Reposted by Saurabh
psyarxivbot.bsky.social
Emotion, sensory sensitivity, and metacognition in multisensory integration: evidence from the Sound-Induced Flash Illusion: https://doi.org/10.31234/osf.io/vwg7r_v1
Reposted by Saurabh
chrisdeleon.bsky.social
"The question of whether machines can think... is about as relevant as the question of whether submarines can swim."

-Edsger Dijkstra in 1984, still correct

for my computer science and gamedev people: yep, he's the pathfinding Dijkstra, whose algorithm we're still using; A* is a heuristic optimization of it
E.W. Dijkstra Archive: The threats to computing science (EWD898)
www.cs.utexas.edu
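For context on the post above: Dijkstra's shortest-path algorithm can be sketched in a few lines. The graph here is a made-up example; A* would differ only in ordering the queue by distance plus an admissible heuristic estimate to the goal.

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from start.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    """
    dist = {start: 0}
    pq = [(0, start)]  # min-heap ordered by tentative distance
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical example graph
grid = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(grid, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```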
Reposted by Saurabh
oritpeleg.bsky.social
More on collective behavior: Our new Annual Review of Biophysics piece - with the stellar Danielle Chase - explores how animals sense, share information, and make group decisions. In honeybees and beyond 🐝

www.annualreviews.org/content/jour...
Title page of the review article 'The Physics of Sensing and Decision-Making by Animal Groups' by Danielle L. Chase and Orit Peleg in Annual Review of Biophysics. Includes illustrations of collective behavior in honeybees: a diagram showing uncommitted scout bees transitioning through decision-making to choose between two nest sites; a honeybee on a honeycomb cell; a cluster of bees hanging from a branch; and a schematic of bees forming a layered cluster.
Reposted by Saurabh
olivia.science
Correlation is not cognition.[1] Stop with the nonsense.

Everyday we slip further into the abyss. I often regret reading emails from other academics.

[1] Guest & @andreaeyleen.bsky.social (2023). On Logical Inference over Brains, Behaviour, and Artificial Neural Networks. doi.org/10.1007/s421...
Reposted by Saurabh
micahgallen.com
How would you model our confidence weighting task? Best answer is the one I go try!
Reposted by Saurabh
avehtari.bsky.social
While I was on vacation enjoying Finnish summer, Frank Weber finished some PRs and made a new major CRAN release of projpred mc-stan.org/projpred/ (for projection predictive variable selection). This release contains some major features added by me and Frank.
1/4
Projection Predictive Feature Selection
Performs projection predictive feature selection for generalized linear models (Piironen, Paasiniemi, and Vehtari, 2020, <doi:10.1214/20-EJS1711>) with or without multilevel or additive terms (Catalin...
mc-stan.org
Reposted by Saurabh
mxij.me
🇦🇹 #ACL2025NLP 1/2: Bringing back train/test separation in NLP w/ @annarogers.bsky.social and Rob van der Goot. Plus, all the coffee puns ☕️
nlpnorth.bsky.social
📄 DECAF: A Dynamically Extensible Corpus Analysis Framework
👥 @mxij.me Rob van der Goot @annarogers.bsky.social
🔗 mxij.me/x/decaf
🎯 DECAF supports generalization research with clear train/test separation at scale.