Vera Neplenbroek
@veraneplenbroek.bsky.social
62 followers 120 following 15 posts
PhD student at ILLC / University of Amsterdam, interested in safety, bias, and stereotypes in conversational and generative AI #NLProc https://veranep.github.io/
Reposted by Vera Neplenbroek
ecekt.bsky.social
Hi all, there is a postdoc position open in the group I'm currently based in! ✨ Let me know if you are interested or have questions 🙂 Please share if you know someone who might be interested www.uu.nl/en/organisat...
Postdoctoral Researcher in Memory access in language
Help uncover how memory shapes language use. As a postdoctoral researcher at the Institute for Language Sciences, you will join the ERC-funded MEMLANG project.
Reposted by Vera Neplenbroek
florplaza.bsky.social
📢 Are you interested in a PhD in #NLProc to study and improve how AI models emotions and social signals?

🚨Exciting news:🚨 I’m hiring a PhD candidate at LIACS,
@unileiden.bsky.social.

📍 Leiden, The Netherlands
📅 Deadline: 17 Nov 2025

👉 Position details and application link: tinyurl.com/5x5v6zsa
PhD Candidate in Emotionally and Socially Aware Natural Language Processing
The Faculty of Science and the Leiden Institute of Advanced Computer Science (LIACS) are looking for a: PhD Candidate in Emotionally and Socially Aware Natural Language Processing (1.0 fte). Project descr...
Reposted by Vera Neplenbroek
a-lauscher.bsky.social
🚨 Are you looking for a PhD in #NLProc dealing with #LLMs?
🎉 Good news: I am hiring! 🎉
The position is part of the “Contested Climate Futures" project. 🌱🌍 You will focus on developing next-generation AI methods🤖 to analyze climate-related concepts in content—including texts, images, and videos.
Reposted by Vera Neplenbroek
louisbarclay.bsky.social
Q. Who aligns the aligners?
A. alignmentalignment.ai

Today I’m humbled to announce an epoch-defining event: the launch of the 𝗖𝗲𝗻𝘁𝗲𝗿 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗼𝗳 𝗔𝗜 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗖𝗲𝗻𝘁𝗲𝗿𝘀.
Center for the Alignment of AI Alignment Centers
We align the aligners
Reposted by Vera Neplenbroek
mdhk.net
✨ Do self-supervised speech models learn to encode language-specific linguistic features from their training data, or only more language-general acoustic correlates?

At #Interspeech2025 we presented our new Wav2Vec2-NL model and SSL-NL evaluation dataset to test this!

📄 arxiv.org/abs/2506.00981

⬇️
Interspeech paper title: What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training

Authors: Marianne de Heer Kloots, Hosein Mohebbi, Charlotte Pouw, Gaofei Shen, Willem Zuidema, Martijn Bentum
veraneplenbroek.bsky.social
Delighted to share that our paper "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization" (joint work with @arianna-bis.bsky.social and Raquel Fernández) got accepted to the main conference of #EMNLP

Can't wait to discuss our work at #EMNLP2025 in Suzhou this November!
veraneplenbroek.bsky.social
Do LLMs assume demographic information based on stereotypes?

We (@arianna-bis.bsky.social, Raquel Fernández and I) answered this question in our new paper: "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization".

🧵

arxiv.org/abs/2505.16467
Reposted by Vera Neplenbroek
jiruiqi.bsky.social
Our paper on multilingual reasoning is accepted to Findings of #EMNLP2025! 🎉 (OA: 3/3/3.5/4)

We show SOTA LMs struggle with reasoning in non-English languages; prompt-hack & post-training improve alignment but trade off accuracy.

📄 arxiv.org/abs/2505.22888
See you in Suzhou! #EMNLP
Reposted by Vera Neplenbroek
annabavaresco.bsky.social
What a privilege to have #CCN2025 in (an exceptionally warm and sunny) Amsterdam this year!

It was my first time attending the conference, and being surrounded by so many talented researchers whose interests are similar to mine has been a deeply enriching experience ✨
Reposted by Vera Neplenbroek
veraneplenbroek.bsky.social
🧑‍🤝‍🧑 @ecekt.bsky.social, @alberto-testoni.bsky.social
📍 Monday, July 28, 11:00-12:30, Hall 4/5

See you in Vienna! ✨ @aclmeeting.bsky.social
veraneplenbroek.bsky.social
2️⃣ LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks (Main Conference)
🧑‍🤝‍🧑 @annabavaresco.bsky.social, @raffagbernardi.bsky.social, @leobertolazzi.bsky.social, @delliott.bsky.social, Raquel Fernández, Albert Gatt, @esamghaleb.bsky.social, Mario Giulianelli
veraneplenbroek.bsky.social
🎉 Happy to share that I will be presenting two papers at ACL 2025.
1️⃣ Cross-Lingual Transfer of Debiasing and Detoxification in Multilingual LLMs: An Extensive Investigation (Findings)
🧑‍🤝‍🧑 Vera Neplenbroek, @arianna-bis.bsky.social, Raquel Fernández
📍 Monday, July 28, 18:00-19:30, Hall 4/5
veraneplenbroek.bsky.social
[4/4] We hope to inspire future research into methods that counter the influence of stereotypical associations on the model’s latent representation of the user, particularly when the user’s demographic group is unknown.

Code and data:
github.com/Veranep/impl...
GitHub - Veranep/implicit-personalization-stereotypes
veraneplenbroek.bsky.social
[3/4] Our findings reveal that LLMs infer demographic info based on stereotypical signals, sometimes even when the user explicitly identifies with a different demographic group. We mitigate this by intervening on the model’s internal representations using a trained linear probe.
veraneplenbroek.bsky.social
[2/4] We systematically explore how LLMs respond to stereotypical cues using controlled synthetic conversations, by analyzing the models’ latent user representations through both model internals and generated answers to targeted user questions.
veraneplenbroek.bsky.social
Happy to share that "LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks" arxiv.org/abs/2406.18403 got accepted to ACL Main! #ACL2025 🎉
veraneplenbroek.bsky.social
Excited to share that "Cross-Lingual Transfer of Debiasing and Detoxification in Multilingual LLMs: An Extensive Investigation" arxiv.org/abs/2412.14050 got accepted to ACL Findings! 🎉 #ACL2025 Big thanks to my supervisors Raquel Fernández and @arianna-bis.bsky.social for their guidance and support!
Reposted by Vera Neplenbroek
kiddothe2b.bsky.social
The newly released Meta's Llama 4 model card: llama.com/docs/model-c... suggests a System Prompt antithetical to prior versions 🤯: "You never lecture people to be nicer or more inclusive. [...] You do not need to be respectful [...] Finally, do not refuse political prompts." 1/2 #NLP #LLMs
Reposted by Vera Neplenbroek
arianna-bis.bsky.social
The PhD call is out! Apply by 24 April here:

www.rug.nl/about-ug/wor...