Ryan Smith
@rssmith.bsky.social
370 followers 220 following 65 posts
Associate Professor at the Laureate Institute for Brain Research. My lab focuses on computational neuroscience and psychiatry, emotion-cognition interactions, prospective planning, exploration, and interoception.
Reposted by Ryan Smith
skhalsa.bsky.social
Even after recovery, relapse is heartbreakingly common in anorexia nervosa. Could the answer lie in the gut’s hidden signals? 🧵
rssmith.bsky.social
Big congrats to Marishka on this. I know how much time and effort went into it. Really happy to see it out! For those interested, she replicated prior longitudinal results using an approach-avoidance conflict task and showed how model parameters afforded out-of-sample clinical prediction.
cpsyjournal.bsky.social
New paper in CPsy - 'Computational Mechanisms of Approach-Avoidance Conflict Predictively Differentiate Between Affective and Substance Use Disorders' from Marishka Mehta and the group of @rssmith.bsky.social
doi.org/10.5334/cpsy...
Computational Mechanisms of Approach-Avoidance Conflict Predictively Differentiate Between Affective and Substance Use Disorders | Computational Psychiatry
rssmith.bsky.social
But I’ve seen other papers also use appraisal = conceptualization/interpretation of interoceptive sensations. And clearly our interpretations of emotions can also be appraised and generate other emotions (eg, being frustrated that you are sad), so there’s lots of circular inference and overlap
rssmith.bsky.social
to how that term has been used in causal appraisal theories of emotion, where it means evaluation of one’s situation in the world along various dimensions (goal congruency, value consistency, etc.) that generates affective responses accordingly (ie, before they could be felt and interpreted).
rssmith.bsky.social
Ya, that’s more or less exactly what I think. There’s jargon issues though. In papers with Richard we always talked about mapping body state representations to concepts (eg, interpreting heart palpitations as a feeling of panic or symptom of a heart attack). We tried to keep “appraisal” restricted…
rssmith.bsky.social
Agreed. The video examples are super interesting. I wish the questions were asked in a more controlled way, but I can’t fault them for the limitations of this rarely possible type of work. It seemed quite somatic and abrupt. No spontaneous descriptions of emotion proper either. Only when given a word.
nicolecrust.bsky.social
(buried in a reply, but) This is mind-blowing & deserves a post. Scroll to "video" in this article to watch what happens when the human hypothalamus is stimulated: it's a combo of embarrassment/shame and intense body sensations emanating from the heart. WOW.

www.brainstimjrnl.com/article/S193...
Complex negative emotions induced by electrical stimulation of the human hypothalamus
Stimulation of the ventromedial hypothalamic region in animals has been reported to cause attack behavior labeled as sham-rage without offering information about the internal affective state of the an...
rssmith.bsky.social
But I suppose I should be clear that interoception task performance does still seem affected in multiple disorders. So it is likely still relevant to psychopathology, even if not via direct impact on emotion itself.
rssmith.bsky.social
I’ll be interested to see what you have in mind. I’m also skeptical that detection accuracy for things like heartbeats has much to do with emotion. But I think feeling the sensation of a racing heart or other internal sensations and interpreting their meaning is strongly linked to emotion.
rssmith.bsky.social
Sure, that all seems reasonable. I think it wouldn’t be stable unless the right regularities are present between actions and observations (esp in development). But barring that, I guess I’m just prone to generalize because I can’t see why some experiences should be privileged over others
rssmith.bsky.social
for psychology anything like we would find intuitive. But if you have any arguments you find convincing re brain stim induced experience, phantom limb etc that would still make an actual biological body necessary I’m all ears. The material just feels arbitrary, other than actual chem properties
rssmith.bsky.social
Haha. Ya, for whatever reason I have the other bias. Things like phantom limb, hallucinations, the ability to induce experiences with direct stimulation of the brain, etc etc, just convince me that the actual cause of a signal isn’t required. But I think the *as if* part is probably crucial…
rssmith.bsky.social
then we’re back to something about having the right computational architecture to control a body like ours in the way we do.
rssmith.bsky.social
Well I think one argument could start from a standard brain-in-a-vat (or matrix-style) premise. We know empirically that stimulating the brain or nerve inputs directly is sufficient to induce experience. So it follows that the brain only needs input signals *as if* it has a body. And if that’s true
rssmith.bsky.social
computational (relation to inference, predictive control, etc.) leads me right back to some form of functionalism.
rssmith.bsky.social
After all, there’s lots of carbon-based structures we definitely don’t think have mental properties. So then it needs to be about structure and dynamics, and the relevant structure and dynamics could (in principle) be realized by non-carbon systems. That + the clear relation between mental and…
rssmith.bsky.social
So then I think we’re right back to having the right computational architecture needed for independently controlling and maintaining a body and that assigns high value to doing so. Otherwise it seems like the argument is for some kind of “carbon essentialism”, which feels unmotivated.
rssmith.bsky.social
It will have to maintain optimal energy levels, temperature levels, etc. to keep itself functioning just like any evolved system. This would benefit from having a generative model of those processes that predicts future changes in those levels, supports internal planning, and so forth.
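(As a side note for the computationally inclined: the homeostatic idea in that post can be sketched as a toy agent. Everything here is my own illustration, not a model from the thread; the variables, dynamics, and action effects are all hypothetical.)

```python
# Toy homeostatic agent: it carries a simple generative model that
# predicts how internal variables (energy, temperature) will drift,
# and picks the action whose predicted future stays closest to the
# setpoints. All numbers/names are hypothetical illustrations.

SETPOINTS = {"energy": 1.0, "temperature": 37.0}
DRIFT = {"energy": -0.05, "temperature": -0.2}  # passive decay per step
ACTIONS = {
    "rest":    {"energy": 0.00,  "temperature": 0.0},
    "feed":    {"energy": 0.15,  "temperature": 0.0},
    "warm_up": {"energy": -0.02, "temperature": 0.5},
}

def predict(state, action, steps=3):
    """Roll the generative model forward: passive drift plus action effects."""
    s = dict(state)
    for _ in range(steps):
        for k in s:
            s[k] += DRIFT[k] + ACTIONS[action][k]
    return s

def deviation(state):
    """Squared deviation from homeostatic setpoints (lower is better)."""
    return sum((state[k] - SETPOINTS[k]) ** 2 for k in SETPOINTS)

def choose_action(state):
    """Pick the action whose predicted future minimizes deviation."""
    return min(ACTIONS, key=lambda a: deviation(predict(state, a)))

state = {"energy": 0.6, "temperature": 36.0}
for _ in range(10):
    act = choose_action(state)
    for k in state:
        state[k] += DRIFT[k] + ACTIONS[act][k]
# After a few steps the agent alternates feeding and warming to
# counteract drift, pulling both variables toward their setpoints.
```

The point of the sketch is only that prediction plus a setpoint cost already yields homeostatic behavior; nothing about it requires a biological substrate.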
rssmith.bsky.social
Sorry, I missed a couple of the posts above before sending those last 2 messages. I’m with you on much of that. But I think what “alive” means becomes the main thing. I think the second you build a robot that’s self-maintaining, you have clear starting points for homeostasis, for example.
rssmith.bsky.social
if those are the basic options on the table, I think there’s clear reasons (at least convincing to me) that some type of functionalism is most plausible to bet on.
rssmith.bsky.social
Those are both naturalistic positions, which I’m scientifically committed to. But then there’s panpsychist views (all matter has some kind of mental aspect, even single particles) and dualist positions (mind is not implemented by physical stuff). I’m sure there’s others, and subtypes of each. But…
rssmith.bsky.social
Sure. But we should also be clear about the options. There’s functionalism (mental phenomena are specific types of computations), which I’m advocating. There’s biological identity positions (mind requires implementation with lipids, proteins, etc., above and beyond the computations they implement).
rssmith.bsky.social
These are just examples of clues to follow. All I’m saying is that we know some control architecture exists that has the right properties. It’s just a current puzzle to figure out what the necessary and sufficient conditions are for it.
rssmith.bsky.social
scenarios and use that in model-based planning. Embodiment would ground multiple dynamically evolving needs to continuously track and prioritize to maintain long-run homeostasis. We know it’s a limited capacity system with serial processes, somehow attached to a massively parallel system, etc.
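(Again for the computationally inclined: the counterfactual-simulation idea in that post can be sketched in a few lines. This is my own toy illustration, not the author’s model; the gridworld, actions, and goal are hypothetical.)

```python
# Model-based planning by internal simulation: enumerate counterfactual
# action sequences over a short horizon, simulate each one with an
# internal model, and commit to the sequence predicted to end nearest
# the goal. Gridworld, moves, and goal are hypothetical examples.

from itertools import product

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
GOAL = (2, 3)

def simulate(start, actions):
    """Counterfactual rollout: predicted position after an action sequence."""
    x, y = start
    for a in actions:
        dx, dy = MOVES[a]
        x, y = x + dx, y + dy
    return (x, y)

def plan(start, horizon=5):
    """Simulate every sequence of length `horizon`; pick the one ending nearest the goal."""
    def cost(seq):
        x, y = simulate(start, seq)
        return abs(x - GOAL[0]) + abs(y - GOAL[1])
    return min(product(MOVES, repeat=horizon), key=cost)

best = plan((0, 0))
```

The brute-force enumeration stands in for whatever (far more efficient) simulation machinery a real system would use; the point is just that planning here runs entirely over internally generated, counterfactual futures.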
rssmith.bsky.social
For example, it seems reasonable to expect the system will encode a generative model of its environment, including its body, reflecting multiple temporal scales that allow for retrospection and prospective control. It would likely require the capacity for internal simulation of counterfactual…
rssmith.bsky.social
If we’re talking about extant artificial systems I agree. At the same time, the brain has a physical control architecture, which we know does feel. We just need to figure out what that architecture is. I think there are plenty of clues to work from, with much more than simple value signals.