Paul Sharp
@paulbsharp.bsky.social
1K followers 290 following 200 posts
Assistant professor of psychology, Bar-Ilan University | computational cognitive science & psychiatry | "Discovery happens less when you're trying to be the expert and more when you're trying to be the learner." - Itai Yanai | Website: sharplabbiu.github.io
Posts Media Videos Starter Packs
Pinned
paulbsharp.bsky.social
🚨 Clinicians have noted for decades how planning & anxiety are linked. Yet computational psychiatry has thus far failed to show how. Here, I explain that we need to broaden how we model planning to reveal its *biases* in chronic anxiety. A 🧵 on the framework 1/n

authors.elsevier.com/a/1kAJC4sIRv...
Reposted by Paul Sharp
markkho.bsky.social
I'm recruiting grad students!! 🎓

The CoDec Lab @ NYU (codec-lab.github.io) is looking for PhD students (Fall 2026) interested in computational approaches to social cognition & problem solving 🧠

Applications through Psych (tinyurl.com/nyucp) are due Dec 1. Reach out with Qs & please repost! 🙏
codec lab
codec-lab.github.io
Reposted by Paul Sharp
eraneldar.bsky.social
Happy to share our new work showing how social emotions such as anger and gratitude establish an interindividual form of actor-critic learning, which leads to the emergence of norms in groups of interacting individuals.

Now published at @apajournals.bsky.social: psycnet.apa.org/record/2026-...
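A minimal sketch of the core idea, not the paper's actual model: one agent's emotional reactions (gratitude, anger) can play the role of the critic in another agent's actor-critic loop, pushing the group toward a norm. The emotion rule and all parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: agent A chooses between two actions; action 1 follows
# the group norm. Agent B's emotional reaction is A's critic signal:
# gratitude (+1) for norm-following, anger (-1) for violations.
prefs = np.zeros(2)   # A's actor: action preferences
alpha = 0.2           # learning rate

def partner_emotion(action):
    return 1.0 if action == 1 else -1.0   # hypothetical emotion rule

p = np.ones(2) / 2
for _ in range(300):
    p = np.exp(prefs) / np.exp(prefs).sum()     # softmax policy
    a = rng.choice(2, p=p)
    feedback = partner_emotion(a)               # interindividual critic
    prefs[a] += alpha * feedback * (1 - p[a])   # policy-gradient-style update

print(p)   # probability mass concentrates on the norm action
```

The critic lives in the *other* agent, which is what makes the learning interindividual; with many interacting agents, the same mechanism would spread a shared norm.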
Reposted by Paul Sharp
joshcjackson.bsky.social
🚨New preprint🚨

osf.io/preprints/ps...

In a sample of ~2 billion comments, social media discourse becomes more negative over time

Archival and experimental findings suggest this is a byproduct of people trying to differentiate themselves

Led by @hongkai1.bsky.social in his 1st year (!) of his PhD
paulbsharp.bsky.social
Looks super cool, looking forward to reading.
zachrosenthal.bsky.social
Super proud of this collaboration with rockstar Ryan Raut - born out of playing in the sandbox in our last year of grad school! Multi-scale brain activity can be predicted from a simple measure of arousal like pupil diameter. Out with linear causality, in with dynamic systems to explain neurobiology
Arousal as a universal embedding for spatiotemporal brain dynamics - Nature
Reframing of arousal as a latent dynamical system can reconstruct multidimensional measurements of large-scale spatiotemporal brain dynamics on the timescale of seconds in mice.
www.nature.com
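A toy illustration of the flavor of result, assuming a Takens-style time-delay embedding of a scalar observable (this is my simplified stand-in, not the paper's pipeline, and the oscillator is synthetic rather than pupil data): delay coordinates of one signal can linearly reconstruct another coordinate of the same latent dynamical system.

```python
import numpy as np

# Toy latent system: a harmonic oscillator observed through one scalar
t = np.arange(0, 60, 0.05)
x = np.sin(t)          # "arousal"-like scalar observable
y = np.cos(t)          # another signal we try to reconstruct from x

# Time-delay embedding of the scalar (Takens-style)
k, tau = 5, 10
emb = np.stack([x[i*tau : len(x) - (k-1-i)*tau] for i in range(k)], axis=1)
target = y[(k-1)*tau:]

# Linear readout from the delay coordinates
w, *_ = np.linalg.lstsq(emb, target, rcond=None)
pred = emb @ w
r = np.corrcoef(pred, target)[0, 1]
print(r)   # near 1: the delay coordinates carry the latent state
```

The point, in miniature: a one-dimensional measurement can stand in for a multidimensional latent state once you treat its history as coordinates.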
Reposted by Paul Sharp
ondrejzika.bsky.social
🚨 I am over the moon 🌓 to announce that I am joining University College Dublin @ucddublin.bsky.social as an Assistant Professor this fall to start the Uncertain Mind (UMI) lab 💫

I am looking for PhD/Postdoc candidates to join (more below 👇 ). Please RT as the deadline is pretty soon 🙏
paulbsharp.bsky.social
This problem pervades many areas

Kozak and Miller 1982 have a great paper on this: "Hypothetical constructs versus intervening variables: A re-appraisal of the three-systems model of anxiety assessment"

psycnet.apa.org/record/1983-...
Reposted by Paul Sharp
malcolmgcampbell.bsky.social
🚨Our preprint is online!🚨

www.biorxiv.org/content/10.1...

How do #dopamine neurons perform the key calculations in reinforcement #learning?

Read on to find out more! 🧵
Reposted by Paul Sharp
tobigerstenberg.bsky.social
🚨 NEW PREPRINT: Multimodal inference through mental simulation.

We examine how people figure out what happened by combining visual and auditory evidence through mental simulation.

Paper: osf.io/preprints/ps...
Code: github.com/cicl-stanfor...
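For readers new to multimodal inference: the textbook baseline for combining visual and auditory evidence is precision-weighted Bayesian cue combination (a much simpler cousin of the simulation-based inference in the paper; the numbers below are made up for illustration).

```python
import numpy as np

# Two noisy cues about the same latent quantity
mu_v, sigma_v = 0.8, 0.2    # visual estimate and its noise
mu_a, sigma_a = 1.4, 0.4    # auditory estimate and its noise

# Inverse-variance (precision) weighting, assuming a flat prior
w_v = sigma_v**-2 / (sigma_v**-2 + sigma_a**-2)
mu_post = w_v * mu_v + (1 - w_v) * mu_a
sigma_post = (sigma_v**-2 + sigma_a**-2) ** -0.5

print(mu_post, sigma_post)  # fused estimate is more certain than either cue
```

The fused estimate lands nearer the more reliable (visual) cue, and its uncertainty is smaller than either cue alone; mental simulation extends this logic to cases where the likelihoods themselves must be computed by imagining what happened.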
Reposted by Paul Sharp
saurabhbedi.bsky.social
📢 Preprint out! biorxiv.org/content/10.1... What gives rise to probability weighting, a cornerstone of Prospect Theory?
We show it comes from the natural boundedness of probabilities + cognitive noise. Adding boundaries adds multiple distortions, across risky choice & perception.
Probability weighting arises from boundary repulsions of cognitive noise
In both risky choice and perception, people overweight small and underweight large probabilities. While prospect theory models this with a probability weighting function, and Bayesian noisy coding mod...
biorxiv.org
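One toy way to see the claimed mechanism, assuming truncation of noisy internal samples at the [0, 1] boundaries (a simplification of the authors' encoding model): noise plus boundedness alone produces the inverse-S pattern of overweighting small and underweighting large probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def perceived(p, sigma=0.15, n=100_000):
    # Noisy internal representation of a probability, truncated to [0, 1]:
    # near a boundary, truncation pushes the mean back toward the interior.
    noisy = np.clip(p + rng.normal(0, sigma, n), 0.0, 1.0)
    return noisy.mean()

for p in [0.05, 0.5, 0.95]:
    print(p, perceived(p))  # small p inflated, large p deflated, middle intact
```

No weighting function is built in; the distortion falls out of the boundaries repelling the noise, which is the preprint's point.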
Reposted by Paul Sharp
pf-hitchcock.bsky.social
Now out in JEP: General, "How working memory and reinforcement learning interact when avoiding punishment and pursuing reward concurrently"

psycnet.apa.org/record/2026-...

Preprint with final version: osf.io/preprints/ps...

1/n
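A minimal sketch of a WM/RL mixture in the spirit of RLWM-style models (my simplification for illustration, not the paper's model): a fast but forgetful working-memory store competes with slow, incremental RL values, and both eventually point at the rewarded action.

```python
import numpy as np

rng = np.random.default_rng(1)

n_actions, correct = 2, 0          # action 0 is always rewarded (toy task)
q = np.full(n_actions, 0.5)        # slow RL values
wm = None                          # WM: last rewarded action, if retained
alpha, beta, p_forget, w_wm = 0.05, 3.0, 0.1, 0.9

acc = []
for _ in range(300):
    if wm is not None and rng.random() < w_wm:
        a = wm                                    # fast WM-guided choice
    else:
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        a = rng.choice(n_actions, p=p)            # softmax over RL values
    r = 1.0 if a == correct else 0.0
    q[a] += alpha * (r - q[a])                    # incremental RL update
    if r > 0:
        wm = a                                    # WM stores the last success...
    if rng.random() < p_forget:
        wm = None                                 # ...but decays quickly
    acc.append(r)

print(np.mean(acc[-100:]))   # late accuracy: both systems agree on action 0
```

Early trials lean on WM; the interesting questions in the paper concern what the RL system learns while WM is doing the work, here under punishment and reward concurrently.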
Reposted by Paul Sharp
annamai.bsky.social
Out now in Neuroscience & Biobehavioral Reviews!

When studying language in the brain, we often look for things that can be model systems for language (songbirds, artificial grammars, etc.). Here, we flip this on its head and argue that language itself is an excellent model system for cognition 🗣️🧏‍♀️🧠
paulbsharp.bsky.social
"spontaneous thought" is only epistemically spontaneous. once meta-control and offline learning/planning/foraging models improve, I'm looking forward to hearing more about the less sexy but still very important "thought".
Reposted by Paul Sharp
alexandrapike.bsky.social
Hello hivemind! I haven't seen one of these in a while but I know that back-in-the-day they were all the rage: does anyone have a funky spreadsheet of MH/psych/neuro-relevant grants/fellowships for ECRs that they wouldn't mind me sharing with my lab?
ALT: a donkey is standing on a dirt field and asking for something. (GIF via media.tenor.com)
paulbsharp.bsky.social
A CBT therapist reached out to me for a copy of my paper on planning and anxiety. It's such a rewarding feeling that this computational work reaches therapists, and that it's written in a way that at least for some is inviting!
Reposted by Paul Sharp
markkho.bsky.social
The TiCS issue featuring our paper on "A timeline of cognitive costs in decision-making" is now available online 😄

Honored to have been a part of this awesome interdisciplinary mega-collab led by Christin Schulze (UNSW Sydney)

www.cell.com/trends/cogni...
A timeline of cognitive costs in decision-making
Recent research from economics, psychology, cognitive science, computer science, and marketing is increasingly interested in the idea that people face cognitive costs when making decisions. Reviewing ...
www.cell.com
paulbsharp.bsky.social
🚨 Want to research the computational & neural mechanisms of planning and its disruption in mental health? If so, join our lab!

Here's one prestigious postdoc fellowship that just opened: azrielifoundation.org/azrieli-fell...

reach out w/your CV to [email protected]

lab: sharplabbiu.github.io
Lab Website
sharplabbiu.github.io
Reposted by Paul Sharp
Check out @tifenpan.bsky.social's just-published paper! We demonstrate how to use RNNs to infer latent variables from cognitive models, even when standard methods don't work easily.
paulbsharp.bsky.social
Of course! When our lab kicks off the new year, this will be the first article we read. Such a nice way to introduce statistical modelling and interpretation to new members before they're indoctrinated by the limited status-quo approaches. I enjoyed learning more, too!
paulbsharp.bsky.social
This is such a great article!

"We also illustrate how to find out whether an effect is practically equivalent to a previously reported effect"

we need more of these tests to build a cumulative science!
dingdingpeng.the100.ci
Ever stared at a table of regression coefficients & wondered what you're doing with your life?

Very excited to share this gentle introduction to another way of making sense of statistical models (w @vincentab.bsky.social)
Preprint: doi.org/10.31234/osf...
Website: j-rohrer.github.io/marginal-psy...
Models as Prediction Machines: How to Convert Confusing Coefficients into Clear Quantities

Abstract
Psychological researchers usually make sense of regression models by interpreting coefficient estimates directly. This works well enough for simple linear models, but is more challenging for more complex models with, for example, categorical variables, interactions, non-linearities, and hierarchical structures. Here, we introduce an alternative approach to making sense of statistical models. The central idea is to abstract away from the mechanics of estimation, and to treat models as “counterfactual prediction machines,” which are subsequently queried to estimate quantities and conduct tests that matter substantively. This workflow is model-agnostic; it can be applied in a consistent fashion to draw causal or descriptive inference from a wide range of models. We illustrate how to implement this workflow with the marginaleffects package, which supports over 100 different classes of models in R and Python, and present two worked examples. These examples show how the workflow can be applied across designs (e.g., observational study, randomized experiment) to answer different research questions (e.g., associations, causal effects, effect heterogeneity) while facing various challenges (e.g., controlling for confounders in a flexible manner, modelling ordinal outcomes, and interpreting non-linear models).
Figure: model predictions. X-axis: the predictor, annual gross income in euros; Y-axis: the outcome, predicted life satisfaction. A solid line marks the curve of predictions, on which individual data points mark model-implied outcomes at incomes of interest. Comparing two such predictions gives us a comparison. We can also fit a tangent to the curve of predictions, which illustrates the slope at any given point.

Figure: various ways to include age as a predictor in a model. X-axis: age (the predictor); Y-axis: the outcome (model-implied importance of friends, with confidence intervals).

Illustrated are:
1. age as a categorical predictor, resulting in predictions that bounce around a lot with wide confidence intervals;
2. age as a linear predictor, which forces a straight line through the data points with a very tight confidence band; and
3. age splines, which lie somewhere in between: they smoothly follow the data but carry more uncertainty than the straight line.
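The workflow in the preprint can be sketched in plain NumPy (the marginaleffects package does this for 100+ model classes in R and Python; the data and model below are simulated for illustration): fit whatever model you like, then query it for predictions, comparisons, and slopes rather than reading coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)

x = rng.uniform(0, 10, 500)                  # predictor, e.g. income
y = 2.0 + 0.5 * x + rng.normal(0, 1, 500)    # outcome; true slope 0.5

# Fit a model whose raw coefficients are awkward to read directly
model = np.poly1d(np.polyfit(x, y, deg=2))

# Step 1: model-implied outcomes at predictor values of interest
pred_lo, pred_hi = model(2.0), model(8.0)

# Step 2: compare the predictions; this "comparison" is the clear
# quantity, regardless of how the model parameterizes the curve
comparison = pred_hi - pred_lo
print(comparison)   # close to 0.5 * (8 - 2) = 3.0

# Step 3: the slope (tangent) at any point, via a finite difference
eps = 1e-4
slope_at_5 = (model(5.0 + eps) - model(5.0 - eps)) / (2 * eps)
```

The same three queries work unchanged if you swap in a spline, an interaction model, or an ordinal regression, which is the model-agnostic point of the paper.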