Maria Ryskina
@mryskina.bsky.social
74 followers 120 following 15 posts
Postdoc @vectorinstitute.ai | organizer @queerinai.com | previously MIT, CMU LTI | 🐀 rodent enthusiast | she/they 🌐 https://ryskina.github.io/
Reposted by Maria Ryskina
colmweb.org
Outstanding paper 🏆 1: Fast Controlled Generation from Language Models with Adaptive Weighted Rejection Sampling
openreview.net/forum?id=3Bm...
Reposted by Maria Ryskina
nsaphra.bsky.social
How can an imitative model like an LLM outperform the experts it is trained on? Our new COLM paper outlines three types of transcendence and shows that each one relies on a different aspect of data diversity. arxiv.org/abs/2508.17669
Reposted by Maria Ryskina
gretatuckute.bsky.social
Check out @mryskina.bsky.social's talk and poster at COLM on Tuesday—we present a method to identify 'semantically consistent' brain regions (responding to concepts across modalities) and show that more semantically consistent brain regions are better predicted by LLMs.
mryskina.bsky.social
Also, if you're at COLM, come to Gillian's keynote and find out what our lab is working on!
colmweb.org
Keynote spotlight #4: the second day of COLM will close with @ghadfield.bsky.social from JHU talking about alignment in human societies and the lessons it holds for AI alignment
mryskina.bsky.social
We also compare the representational geometries of the models and the brain using representational similarity analysis (RSA), and find significant alignment for all models. For VLMs, the alignment increases further when both text and image stimuli are used, especially in the ventral ROI:
Title: 5. Representational similarity analysis.

Left: RSA diagram. Stimuli (word clouds, pictures, and sentences for concepts Help, Bird, and Broken) are encoded first via LM hidden states and then via brain activation. Each set of representations is then converted into a distance matrix showing how dissimilar each pair of concepts is in this representational space. Finally, a correlation is computed between the distance matrices.

Right: Three bar charts of RSA correlations, labeled ROI 1, 2, and 3. Each chart includes two groups of bars labeled LMs and VLMs. Both groups include bars for Permuted baseline and for Result (text only). The VLMs group includes an additional bar for Result (text + images). All Result bars are significantly higher than the Permuted baseline bars. Result (text only) bars are comparable between LMs and VLMs. For VLMs, Result (text + images) is higher than Result (text only) in ROI 1 and especially ROI 3, but comparable in ROI 2.
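An RSA like the one in the diagram can be sketched in a few lines. The following is a minimal illustration, not the paper's analysis code: the correlation distance, the Spearman comparison, the permutation scheme, and all names are my assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_correlation(model_reps, brain_reps, n_permutations=1000, seed=0):
    """Compare the representational geometries of a model and the brain.

    model_reps: (n_concepts, n_dims) LM hidden states, one row per concept.
    brain_reps: (n_concepts, n_voxels) brain activations for the same concepts.
    Returns the RSA correlation and a permutation-test p-value.
    """
    # 1. Turn each set of representations into pairwise dissimilarities
    #    between concepts (the upper triangle of a distance matrix).
    model_rdm = pdist(model_reps, metric="correlation")
    brain_rdm = pdist(brain_reps, metric="correlation")

    # 2. Correlate the two dissimilarity structures.
    rho, _ = spearmanr(model_rdm, brain_rdm)

    # 3. Permuted baseline: shuffle concept labels on the model side and
    #    recompute the correlation to build a null distribution.
    rng = np.random.default_rng(seed)
    null = np.array([
        spearmanr(pdist(model_reps[rng.permutation(len(model_reps))],
                        metric="correlation"), brain_rdm)[0]
        for _ in range(n_permutations)
    ])
    p_value = (np.sum(null >= rho) + 1) / (n_permutations + 1)
    return rho, p_value
```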
mryskina.bsky.social
2. Within our ROIs, LM predictivity is correlated with semantic consistency, even where the response to meaningful language (a preference for sentences over non-words) is low:
Three rows and six columns of line charts. Rows are labeled ROI 1, 2, and 3; pairs of columns are labeled Sentence paradigm, Picture paradigm, or Word cloud paradigm. The X axis is Predictivity. Odd columns are labeled Controlled for language; their Y axis is Quartile (cons.) and their lines are red. The red lines all trend upwards. Even columns are labeled Controlled for consistency; their Y axis is Quartile (lang.) and their lines are blue. The blue lines show a moderate to strong upward trend for ROIs 1 and 3 in the Sentence and Word cloud paradigms but no clear trend elsewhere.
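A "controlled for" analysis like this can be set up in more than one way. One reasonable choice, sketched below under my own assumptions (not necessarily the paper's exact procedure), is to residualize the covariate out of the variable of interest before binning voxels into quartiles:

```python
import numpy as np

def residualize(y, covariate):
    """Remove the linear effect of `covariate` from `y` via least squares."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def predictivity_by_quartile(predictivity, variable, covariate):
    """Mean LM predictivity per quartile of `variable`, controlling for `covariate`.

    All inputs are 1-D arrays with one entry per voxel (or area). Controlling
    is done by residualizing `covariate` out of `variable` before binning.
    """
    controlled = residualize(variable, covariate)
    edges = np.quantile(controlled, [0.25, 0.5, 0.75])
    quartile = np.digitize(controlled, edges)  # 0..3, lowest to highest
    return [predictivity[quartile == q].mean() for q in range(4)]
```

Plotting the four means against quartile index gives one line of the grid above (e.g., consistency quartiles with the language response as the covariate).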
mryskina.bsky.social
In our brain encoding experiments (using LM representations to predict brain responses), we find that:

1. Across the whole brain, areas with higher semantic consistency – plausibly representing concepts across modalities – are better predicted by LMs:
Title: 4. LMs for brain encoding.

Left: Brain encoding diagram. Stimulus (He hit the ball out of the field) is encoded first via LM hidden states and then via brain activation. Then a regression model is used to predict the brain activation from LM hidden states.

Right: Scatter plots of brain encoding performance for the Sentence, Picture, and Word cloud paradigms. Y axis is Mean LM predictivity. X axis is Semantic consistency. Each point corresponds to a Glasser anatomical area, with areas within ROI 1, 2, or 3 specially marked. A line shows a positive trend in each plot. A cluster of points is highlighted above the line in the Picture paradigm plot.
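The encoding diagram corresponds to a standard pipeline: fit a regularized linear map from LM hidden states to voxel responses, then score held-out predictions. A hedged sketch follows; the ridge penalty, fold count, and per-voxel Pearson scoring are my assumptions, not necessarily the paper's settings.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def encoding_predictivity(lm_features, brain_responses, alpha=1.0, n_splits=5):
    """Cross-validated ridge regression from LM features to brain responses.

    lm_features: (n_stimuli, n_dims) LM representations of the stimuli.
    brain_responses: (n_stimuli, n_voxels) measured fMRI responses.
    Returns per-voxel predictivity: Pearson r between held-out predictions
    and observed responses, averaged over folds.
    """
    rs = np.zeros((n_splits, brain_responses.shape[1]))
    folds = KFold(n_splits, shuffle=True, random_state=0).split(lm_features)
    for i, (train, test) in enumerate(folds):
        model = Ridge(alpha=alpha).fit(lm_features[train], brain_responses[train])
        pred = model.predict(lm_features[test])
        # Pearson correlation per voxel between predicted and observed.
        pc = pred - pred.mean(axis=0)
        ac = brain_responses[test] - brain_responses[test].mean(axis=0)
        rs[i] = (pc * ac).sum(axis=0) / (
            np.linalg.norm(pc, axis=0) * np.linalg.norm(ac, axis=0))
    return rs.mean(axis=0)
```

Cross-validation keeps the score out-of-sample, which is what "predictivity" usually refers to in encoding work.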
mryskina.bsky.social
…we define semantic consistency – a measure of how consistently a brain area responds to the same concepts across paradigms – and use it to identify three regions of interest (ROIs) where semantically consistent voxels are found:
Left: figure titled 2. Semantic consistency metric. Subtitle: Correlation between brain responses to concepts across stimuli formats. Three columns of squares represent three vectors, one square per element, with top and bottom elements labeled Concept 1 and Concept 180 respectively. Vectors are labeled Beta with subscript S, P, or WC for paradigm. Each pair of vectors is connected by an arc labeled R for Pearson correlation. From each R, a line leads to a circle with C in it.

Right: figure titled: 3. Consistent brain areas. 
Top: heatmap of the left hemisphere titled Probabilistic map. The jet colormap ranges from blue for 0% to red for 30%. The heatmap shows higher values in the posterior temporal, inferior frontal, and ventrotemporal areas.
Bottom: a segmentation of the left hemisphere titled Regions of interest. Posterior temporal, inferior frontal, and ventrotemporal areas are colored in pink (labeled ROI 1), blue (ROI 2), and green (ROI 3) respectively.
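Read off the left panel, the metric averages the pairwise Pearson correlations between a brain area's concept-level response profiles in the three paradigms. A minimal sketch, with variable names of my own choosing:

```python
import numpy as np

def semantic_consistency(beta_s, beta_p, beta_wc):
    """Mean pairwise Pearson correlation between one brain area's responses
    to the 180 concepts under the Sentence, Picture, and Word cloud paradigms.

    Each argument is a length-180 vector of betas, one per concept.
    """
    pairs = [(beta_s, beta_p), (beta_s, beta_wc), (beta_p, beta_wc)]
    return np.mean([np.corrcoef(a, b)[0, 1] for a, b in pairs])
```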
mryskina.bsky.social
Using an fMRI dataset of brain responses to stimuli that encode concepts in different paradigms (sentences, pictures, word clouds) and modalities…
Figure title: 1. Brain responses to concepts (Pereira et al., 2018, Exp. 1). 

Left: an icon of a human brain, captioned n=17 for the number of participants, and an icon of an MRI scanner, captioned fMRI.

Right: three boxes connected with arrows, representing three experimental paradigms. Each box includes text: 180 concepts × 4–6.
The first box is yellow, titled Sentences and contains a sentence: He hit the ball out of the field. The word ball is highlighted in bold.
The second box is purple, titled Pictures and contains a picture of a brown snake coiled on the ground, with the caption Dangerous highlighted in bold. 
The third box is green, titled Word clouds and contains a word cloud with the word Bear in the middle highlighted in bold and surrounded by related words: Squirrel, Duck, Deer, Wolf, Raccoon.
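For concreteness, such a dataset might be organized as one beta vector per concept, per paradigm, per participant. A hypothetical layout (shapes and names are mine; the voxel count is purely illustrative):

```python
import numpy as np

n_participants, n_concepts, n_voxels = 17, 180, 50_000  # voxel count illustrative
paradigms = ["sentences", "pictures", "word_clouds"]
# betas[p][s, c] is participant s's voxel-wise response to concept c in paradigm p.
betas = {p: np.zeros((n_participants, n_concepts, n_voxels)) for p in paradigms}
```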
mryskina.bsky.social
Interested in language models, brains, and concepts? Check out our COLM 2025 🔦 Spotlight paper!

(And if you’re at COLM, come hear about it on Tuesday – sessions Spotlight 2 & Poster 2)!
Paper title: Language models align with brain regions that represent concepts across modalities.
Authors: Maria Ryskina, Greta Tuckute, Alexander Fung, Ashley Malkin, Evelina Fedorenko.
Affiliations: Maria is affiliated with the Vector Institute for AI, but the work was done at MIT. All other authors are affiliated with MIT.
Email address: maria.ryskina@vectorinstitute.ai.
Reposted by Maria Ryskina
queerinai.com
Attending COLM next week in Montreal? 🇨🇦 Join us on Thursday for a 2-part social! ✨ 5:30-6:30 at the conference venue and 7:00-10:00 offsite! 🌈 Sign up here: forms.gle/oiMK3TLP8ZZc...
Queer in AI @ COLM 2025. Thursday, October 9 5:30 to 10 pm Eastern Time. There is a QR code to sign up which is linked in the post.
mryskina.bsky.social
Queer in AI is planning a social, hope to see you there 💜
mryskina.bsky.social
I was too for a second there :/
mryskina.bsky.social
I know the news is unhinged these days, but isn't it a parody account?
Reposted by Maria Ryskina
pranav-nlp.bsky.social
Hi all, we are organizing this fun affinity workshop at EurIPS in Copenhagen. Feel free to present any of your previously published work, or submit abstracts / mixed-media materials!
queerinai.com
Hey everyone! 👋 We're so excited to be organizing a fun Queer in AI Workshop, co-located with EurIPS in Copenhagen on December 5!

We are seeking submissions, with a deadline of October 15. #NeurIPS2025
A promotional poster for the "Queer in AI Workshop". The title is in large, colorful, textured letters. Text below reads: "Co-located with EurIPS in Copenhagen (December 5). Deadline: October 15." A bulleted list of accepted submissions includes published works, upcoming works, mixed media like art and videos, and abstracts. The website queerinai.com/eurips-2025 is listed at the bottom.
Reposted by Maria Ryskina
lasha.bsky.social
Super super thrilled that HALoGEN, our study of LLM hallucinations and their potential origins in training data, received an ✨Outstanding Paper Award✨ at ACL!

Joint work w/ Shrusti Ghela*, David Wadden, and Yejin Choi

bsky.app/profile/lash...
lasha.bsky.social
We are launching HALoGEN💡, a way to systematically study *when* and *why* LLMs still hallucinate.

New work w/ Shrusti Ghela*, David Wadden, and Yejin Choi 💫

📝 Paper: arxiv.org/abs/2501.08292
🚀 Code/Data: github.com/AbhilashaRav...
🌐 Website: halogen-hallucinations.github.io 🧵 [1/n]
Reposted by Maria Ryskina
kanishka.bsky.social
Looking forward to attending #cogsci2025 (Jul 29 - Aug 3)! I’m especially excited to meet students who will be applying to PhD programs in Computational Ling/CogSci in the coming cycle.

Please reach out if you want to meet up and chat! Email is the best way, but DM also works if you must!

quick🧵:
Placeholders for 3 students (number arbitrarily chosen) and me - to signify my eventual group!
mryskina.bsky.social
Join Abhilasha's lab; she is an awesome researcher and mentor! I can attest that being her collaborator was great fun 🤩
lasha.bsky.social
📣 Life update: Thrilled to announce that I’ll be starting as faculty at the Max Planck Institute for Software Systems this Fall!

I’ll be recruiting PhD students in the upcoming cycle, as well as research interns throughout the year: lasharavichander.github.io/contact.html
Kaiserslautern, Germany
mryskina.bsky.social
ICML update: met The Transformer at the Bill Reid Gallery!
A sculpture named The Transformer by Christian White, displayed at the Bill Reid Gallery of Northwest Coast Art. The sculpture depicts Raven, a transformational being from Indigenous mythology, in its half-human, half-raven state.
mryskina.bsky.social
Come join us in Vancouver for our ICML social!

(I'll be at ICML July 15-18, any poster/talk recs welcome!)
queerinai.com
1/ 💻 Queer in AI is hosting a social at #ICML2025 in Vancouver on 📅 July 16, and you’re invited! Let’s network, enjoy food and drinks, and celebrate our community. Details below…