Katrina Rose Quinn
@mightyrosequinn.bsky.social
82 followers 140 following 14 posts
Neuroscientist in Tübingen & mother of dragons. Interested in visual perception, decision-making & expectations.
Reposted by Katrina Rose Quinn
sampendu.bsky.social
Long time in the making: our preprint of a survey study on the diversity in how people seem to experience #mentalimagery. It suggests #aphantasia should be redefined as the absence of depictive thought, not merely "not seeing". Some more take-home messages:
#psychskysci #neuroscience

doi.org/10.1101/2025...
Reposted by Katrina Rose Quinn
kathaschmack.bsky.social
Really enjoyed my weekend read on 𝐚𝐜𝐭𝐢𝐯𝐞 𝐟𝐢𝐥𝐭𝐞𝐫𝐢𝐧𝐠: local recurrence amplifies natural input patterns and suppresses stray activity. This review beautifully argues that sensory cortex itself is a site of memory and prediction. Food for thought on hallucinations!

#neuroskyence #neuroscience
Reposted by Katrina Rose Quinn
snstuebingen.bsky.social
✨ Meet our speakers! ✨

Among our speakers this year at #SNS2025 we have Marlene Cohen (@marlenecohen.bsky.social), from University of Chicago

Read the abstract here 💬 👇
meg.medizin.uni-tuebingen.de/sns_2025/abs...

#neuroskyence
Reposted by Katrina Rose Quinn
snstuebingen.bsky.social
✨ Meet our speakers! ✨

Among our speakers this year at #SNS2025 we have Floris de Lange (@predictivebrain.bsky.social)

Read the abstract here 💬 👇
meg.medizin.uni-tuebingen.de/sns_2025/abs...

#neuroskyence
Reposted by Katrina Rose Quinn
snstuebingen.bsky.social
✨ Meet our speakers! ✨

Among our speakers this year at #SNS2025 we have Tim Kietzmann (@timkietzmann.bsky.social)

Read the abstract here 💬 👇
meg.medizin.uni-tuebingen.de/sns_2025/abs...

#neuroskyence #compneurosky #NeuroAI
Reposted by Katrina Rose Quinn
mamassian.bsky.social
A nice shift in perceived colour between central and peripheral vision. The fixated disc looks purple while the others look blue.

The effect presumably comes from the absence of S-cones in the fovea.

From Hinnerk Schulz-Hildebrandt:
arxiv.org/pdf/2509.115...
An array of 9 purple discs on a blue background. Figure from Hinnerk Schulz-Hildebrandt.
Reposted by Katrina Rose Quinn
snstuebingen.bsky.social
✨ Meet our speakers! ✨

Among our speakers this year at #SNS2025 we also have Sylvia Schröder (@sylviaschroeder.bsky.social), from University of Sussex

Read the abstract here 💬 👇
meg.medizin.uni-tuebingen.de/sns_2025/abs...

#neuroskyscience
Reposted by Katrina Rose Quinn
snstuebingen.bsky.social
📢 Deadline extended! 📢

The registration deadline for #SNS2025 has been extended to Sunday, September 28th!

Register here 👉 meg.medizin.uni-tuebingen.de/sns_2025/reg...

PS: Students of the GTC (Graduate Training Center for Neuroscience) in Tübingen can earn 1 CP for presenting a poster! 👀
ALT: a woman in front of a whiteboard with the words "take your time" written on it
Reposted by Katrina Rose Quinn
snstuebingen.bsky.social
✨ Meet our speakers! ✨

Among our speakers this year at #SNS2025 we have Simone Ebert (@simoneebert.bsky.social) & Jan Lause (@janlause.bsky.social), from Hertie AI Institute, University of Tübingen

Read the abstract here 💬 👇
meg.medizin.uni-tuebingen.de/sns_2025/abs...

#neuroskyscience
Reposted by Katrina Rose Quinn
snstuebingen.bsky.social
✨ Meet our speakers! ✨

Next speaker to present is Arthur Lefevre, from University of Lyon

Read the abstract here 💬 👇
meg.medizin.uni-tuebingen.de/sns_2025/abs...

#neuroscience #neuroskyscience
Reposted by Katrina Rose Quinn
snstuebingen.bsky.social
✨ Meet our speakers! ✨

Next speaker to present is Mara Wolter, PhD student at the University of Tübingen in Yulia Oganian's lab

Read the abstract here 💬 👇
meg.medizin.uni-tuebingen.de/sns_2025/abs...
Reposted by Katrina Rose Quinn
snstuebingen.bsky.social
🔵 Tübingen SNS2025 🔵

Over the next few days, we’ll be introducing you to the brilliant scientists who will deliver their talks at #SNS2025!

✨ Get ready to meet our speakers! ✨

We are starting with Liina Pylkkänen, from New York University

💬 meg.medizin.uni-tuebingen.de/sns_2025/abs...
mightyrosequinn.bsky.social
Great work from a great team. Congrats guys!🎉
maxplanckcampus.bsky.social
Colors leave unique ‘maps’ in our brains, so consistent across people that Michael Bannert and Andreas Bartels (both MPI for Biological Cybernetics) could predict which color someone saw...just from fMRI data. 🌈🧠 Their findings point to deep evolutionary roots in how we perceive color.
mightyrosequinn.bsky.social
Not long to go now! For those of you who enjoy a more intimate conference with a chance to get to know your favourite speakers I would highly recommend this right here. Reach out if you have any questions :)
snstuebingen.bsky.social
🔵Tübingen SNS 2025🔵

Registration is still open for the #SNS2025 event on 6-7 October!

Join us for plenary lectures 🗣️, poster sessions 📊 and social events 👥 about systems neuroscience! 🧠

Registration at meg.medizin.uni-tuebingen.de/sns_2025
mightyrosequinn.bsky.social
Can't wait to see this fantastic line-up 🤩
siegellab.bsky.social
📢 Exciting News!

Tübingen Systems Neuroscience Symposium #SNS2025 will happen on 6️⃣-7️⃣ October! 🎉

Plenary lectures, poster sessions and social events with leading experts in the field 🧠

registration 👉 meg.medizin.uni-tuebingen.de/sns_2025

See you there! 👋
Reposted by Katrina Rose Quinn
mariamolinasan.bsky.social
Can humans use artificial limbs for body augmentation as flexibly as their own hands?
🚨 Our new interdisciplinary study put this question to the test with the Third Thumb (@daniclode.bsky.social), a robotic extra digit you control with your toes!
www.biorxiv.org/content/10.1...
🧵1/10
Reposted by Katrina Rose Quinn
maciekszul.bsky.social
🚨🚨🚨PREPRINT ALERT🚨🚨🚨
Neural dynamics across cortical layers are key to brain computations - but non-invasively, we’ve been limited to rough "deep vs. superficial" distinctions. What if we told you that it is possible to achieve full (TRUE!) laminar (I, II, III, IV, V, VI) precision with MEG!
Figure: Overview of the simulation strategy and analysis. a) Pial and white matter boundary surfaces are extracted from anatomical MRI volumes. b) Intermediate equidistant surfaces are generated between the pial and white matter surfaces (labeled as superficial (S) and deep (D), respectively). c) Surfaces are downsampled together, maintaining vertex correspondence across layers. Dipole orientations are constrained using vectors linking corresponding vertices (link vectors). d) The thickness of cortical laminae varies across the cortical depth (70–72), which is evenly sampled by the equidistant source surface layers. e) Each colored line represents the model evidence (relative to the worst model, ΔF) over source layer models, for a signal simulated at a particular layer (the simulated layer is indicated by the line color). The source layer model with the maximal ΔF is indicated by "˄". f) Result matrix summarizing ΔF across simulated source locations, with peak relative model evidence marked with "˄". g) Error is calculated from the result matrix as the absolute distance in mm or layers from the simulated source (*) to the peak ΔF (˄). h) Bias is calculated as the relative position of a peak ΔF (˄) to a simulated source (*) in layers or mm.
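For readers wanting to make steps (g) and (h) of the caption concrete, here is a minimal sketch of the error and bias computations, assuming a hypothetical ΔF result matrix with one row per simulated layer and one column per source-layer model; the function name, array shape, and sign convention are illustrative assumptions, not taken from the preprint.

```python
import numpy as np

def laminar_error_and_bias(delta_f: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """delta_f: (n_simulated_layers, n_model_layers) array of model evidence ΔF."""
    simulated = np.arange(delta_f.shape[0])  # true layer per row (the "*" in the caption)
    peak = delta_f.argmax(axis=1)            # layer model with maximal ΔF (the "˄")
    error = np.abs(peak - simulated)         # absolute distance in layers
    bias = peak - simulated                  # signed offset; sign convention is an assumption
    return error, bias

# Toy example with 6 layers (I-VI): ΔF peaking on the diagonal means
# each simulated layer is recovered exactly, so error and bias are zero.
rng = np.random.default_rng(0)
delta_f = rng.random((6, 6)) + 10 * np.eye(6)
error, bias = laminar_error_and_bias(delta_f)
print(error, bias)  # [0 0 0 0 0 0] [0 0 0 0 0 0]
```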
mightyrosequinn.bsky.social
It's gotta be a Zelda playlist for me - those games trained me to problem-solve to that music 😆
Reposted by Katrina Rose Quinn
plosbiology.org
Rewarding animals to accurately report their subjective #percept is challenging. This study formalizes this problem and overcomes it with a #Bayesian method for estimating an animal’s subjective percept in real time during the experiment @plosbiology.org 🧪 plos.io/3HaxiuB
Two examples of how contextual information can bias visual perception. Top: Luminance illusion created by shadows (source: https://persci.mit.edu/gallery/checkershadow). Square B looks brighter than square A but has the same luminance, i.e., they have identical grayscale values in the picture. Bottom: Perception of object motion is biased by self-motion. The combination of leftward self-motion and up-left object motion in the world produces retinal motion that is up-right. If the animal partially subtracts the optic flow vector (orange dashed arrow) generated by self-motion (yellow arrow) from the image motion on the retina (black arrow), they may have a biased perception of object motion (red arrow) that lies between retinal and world coordinates (green arrow).
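The partial flow-subtraction account in that caption reduces to a simple vector operation; below is a sketch under assumed 2D toy vectors, with a subtraction gain k standing in for "partially subtracts" (all names and values are illustrative, not from the paper).

```python
import numpy as np

# Perceived object motion = retinal motion minus a fraction k of the
# optic-flow vector caused by self-motion. k = 0 leaves the percept in
# retinal coordinates; k = 1 recovers world coordinates; intermediate k
# gives the biased percept described in the caption.
def perceived_motion(retinal: np.ndarray, self_flow: np.ndarray, k: float) -> np.ndarray:
    return retinal - k * self_flow

self_motion = np.array([-1.0, 0.0])   # leftward self-motion
self_flow = -self_motion              # optic flow opposes self-motion (rightward)
world_motion = np.array([-0.5, 1.0])  # up-left object motion in the world
retinal = world_motion + self_flow    # up-right motion on the retina, as in the caption

print(perceived_motion(retinal, self_flow, k=0.0))  # [ 0.5  1. ] retinal coordinates
print(perceived_motion(retinal, self_flow, k=0.5))  # [ 0.   1. ] biased percept in between
print(perceived_motion(retinal, self_flow, k=1.0))  # [-0.5  1. ] world coordinates
```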
Reposted by Katrina Rose Quinn
cairosofie.bsky.social
🚨 New WP! 📄 "Publish or Procreate: The Effect of Motherhood on Research Performance" (w/ @valentinatartari.bsky.social)
👩‍🔬👨‍🔬 We investigate how parenthood affects scientific productivity and impact — and find that the impact is far from equal for mothers and fathers.
mightyrosequinn.bsky.social
Press release on our new paper from @hih-tuebingen.bsky.social 🧠🥳
Link: www.nature.com/articles/s42...
Thread: bsky.app/profile/migh...
#neuroskyence #compneurosky #magnetoencephalography
hih-tuebingen.bsky.social
A new study led by Prof. Markus Siegel @siegellab.bsky.social, and first author Dr. Katrina Quinn @mightyrosequinn.bsky.social shows: The brain represents decisions abstracted from actions – even when they are tightly linked to specific actions.
👉 More information: tinyurl.com/4h38b3u7
Reposted by Katrina Rose Quinn
ml4science.bsky.social
We're super happy: Our Cluster of Excellence will continue to receive funding from the German Research Foundation @dfg.de ! Here’s to 7 more years of exciting research at the intersection of #machinelearning and science! Find out more: uni-tuebingen.de/en/research/... #ExcellenceStrategy
The members of the Cluster of Excellence "Machine Learning: New Perspectives for Science" raise their glasses and celebrate securing another funding period.