Debora Nozza
deboranozza.bsky.social
Assistant Professor at Bocconi University in MilaNLP group • Working in #NLP, #HateSpeech and #Ethics • She/her • #ERCStG PERSONAE
Reposted by Debora Nozza
What an inspiring week at #EMNLP2025 in Suzhou🇨🇳!
Huge thanks to the organizers and everyone who stopped by our poster/talk!
November 24, 2025 at 10:20 AM
Reposted by Debora Nozza
“Teacher Demonstrations in a BabyLM’s Zone of Proximal Development for Contingent Multi-Turn Interaction” selected for an Outstanding Paper Award at the BabyLM Challenge & Workshop!
November 24, 2025 at 10:22 AM
Reposted by Debora Nozza
#MemoryMonday #NLProc 'Hey Siri. Ok Google. Alexa: A topic modeling of user reviews for smart speakers,' by Nguyen & @dirkhovy.bsky.social decodes smart-speaker reviews for user preferences using topic models, showing that domain knowledge is needed for market analysis.
Hey Siri. Ok Google. Alexa: A topic modeling of user reviews for smart speakers
Hanh Nguyen, Dirk Hovy. Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019). 2019.
aclanthology.org
November 24, 2025 at 4:01 PM
Reposted by Debora Nozza
For our weekly lab seminar, it was a pleasure to have @andersgiovanni.com presenting his research "How AI Affects Us: Controlled Experiments in Human-AI Interaction".

#NLProc
November 21, 2025 at 3:58 PM
Reposted by Debora Nozza
For our weekly reading group, @joachimbaumann.bsky.social presented the upcoming PNAS article "The potential existential threat of large language models to online survey research" by @seanjwestwood.bsky.social.
November 20, 2025 at 11:54 AM
Reposted by Debora Nozza
#TBT #NLProc Attanasio et al. ask 'Is It Worth the (Environmental) Cost?', analyzing continuous training for language models and weighing its benefits against its environmental impact for responsible use. #Sustainability
arxiv.org
November 20, 2025 at 4:02 PM
Reposted by Debora Nozza
#MemoryMonday #NLProc 'The State of Profanity Obfuscation in NLP Scientific Publications' probes bias in non-English papers. @deboranozza.bsky.social & @dirkhovy.bsky.social (2023) propose 'PrOf' to aid authors & improve accessibility.
The State of Profanity Obfuscation in Natural Language Processing Scientific Publications
Debora Nozza, Dirk Hovy. Findings of the Association for Computational Linguistics: ACL 2023. 2023.
aclanthology.org
November 17, 2025 at 4:04 PM
Reposted by Debora Nozza
📝 Second Call for Papers for the Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media Analysis #WASSA2026 at #EACL2026 in Rabat, Morocco

🗓️ Submission deadlines: December 17 (direct) and January 2 (ARR).

🔗 workshop-wassa.github.io

#NLProc
15th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis
WASSA at EACL 2026 Rabat, Morocco
workshop-wassa.github.io
November 13, 2025 at 12:17 PM
Reposted by Debora Nozza
#TBT #NLProc Hessenthaler et al.'s 2022 work examines the link between fairness and energy reduction in English NLP models, challenging assumptions about bias reduction. #AI #sustainability
Bridging Fairness and Environmental Sustainability in Natural Language Processing
Marius Hessenthaler, Emma Strubell, Dirk Hovy, Anne Lauscher. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022.
aclanthology.org
November 13, 2025 at 4:05 PM
Reposted by Debora Nozza
📣 BUT IS IT ECONOMICS?

*New at EJ* “Research Similarity and Women in Academia,” Piera Bello, Alessandra Casarico & @deboranozza.bsky.social, on the role of research similarity between applicants & selection committees in academic promotions, and the implications for gender diversity: tinyurl.com/mrd8cpkf
November 7, 2025 at 5:41 PM
Reposted by Debora Nozza
#MemoryMonday #NLProc 'Measuring Harmful Representations in Scandinavian Language Models' uncovers gender bias, challenging Scandinavia's equity image.
Measuring Harmful Representations in Scandinavian Language Models
Samia Touileb, Debora Nozza. Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS). 2022.
aclanthology.org
November 10, 2025 at 4:03 PM
Reposted by Debora Nozza
Feeling a little sad not to be in Suzhou for #EMNLP2025, but so proud of all the amazing work from our MilaNLP Lab! 💫

Honored to have received the Outstanding Senior Area Chair Award!

Check out our papers 👇
Proud to present our #EMNLP2025 papers!
Catch our team across Main, Findings, Workshops & Demos 👇
November 5, 2025 at 6:07 PM
Reposted by Debora Nozza
LLMs require social knowledge to understand implicit misogyny, yet they mostly fail. If you want to know more, come check my poster from 12.30 to 13.30!

Paper: aclanthology.org/2025.finding...

#EMNLP2025
Proud to present our #EMNLP2025 papers!
Catch our team across Main, Findings, Workshops & Demos 👇
November 5, 2025 at 5:24 PM
Reposted by Debora Nozza
#TBT #NLProc "Explaining Speech Classification Models" by Pastor et al. (2024) makes speech classification more transparent! 🔍 Their research reveals which words matter most and how tone and background noise impact decisions.
Explaining Speech Classification Models via Word-Level Audio Segments and Paralinguistic Features
Eliana Pastor, Alkis Koudounas, Giuseppe Attanasio, Dirk Hovy, Elena Baralis. Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long...
aclanthology.org
November 6, 2025 at 4:04 PM
Reposted by Debora Nozza
#MemoryMonday #NLProc 'Universal Joy: A Data Set and Results for Classifying Emotions Across Languages' by Lamprinidis et al. (2021) explores how well emotions can be classified across languages.
Universal Joy A Data Set and Results for Classifying Emotions Across Languages
Sotiris Lamprinidis, Federico Bianchi, Daniel Hardt, Dirk Hovy. Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. 2021.
aclanthology.org
November 3, 2025 at 4:02 PM
Reposted by Debora Nozza
Next week, I'll be at #EMNLP presenting our work "Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization" 🎉

📍 Ethics, Bias, and Fairness (Poster Session 2)
📅 Wed, November 5, 11:00-12:30 - Hall C
📖 Check the paper! arxiv.org/abs/2505.16467

See you in Suzhou! 👋
October 31, 2025 at 7:56 PM
🚨 New main paper out at #EMNLP2025! 🚨

⚡ We show that personalization of content moderation models can be harmful and perpetuate hate speech, defeating the purpose of the system and hurting the community.

We argue that personalized moderation needs boundaries, and we show how to build them.
October 31, 2025 at 5:05 PM
Reposted by Debora Nozza
Proud to present our #EMNLP2025 papers!
Catch our team across Main, Findings, Workshops & Demos 👇
October 31, 2025 at 2:04 PM
Reposted by Debora Nozza
🗓️ Nov 5 – Main Conference Posters
Personalization up to a Point
🧠 In the context of content moderation, we show that fully personalized models can perpetuate hate speech, and propose a policy-based method to impose legal boundaries.
📍 Hall C | 11:00–12:30
October 31, 2025 at 2:05 PM
Reposted by Debora Nozza
🗓️ Nov 5 – Main Conference Posters
📘 Biased Tales
A dataset of 5k short LLM bedtime stories generated across sociocultural axes with an evaluation taxonomy for character-centric attributes and context-centric attributes.
📍 Hall C | 11:00–12:30
October 31, 2025 at 2:05 PM
Reposted by Debora Nozza
🗓️ Nov 5 - Demo
Co-DETECT: Collaborative Discovery of Edge Cases in Text Classification
🧩 Co-DETECT – an iterative, human-LLM collaboration framework for surfacing edge cases and refining annotation codebooks in text classification.
📍 Demo Session 2 – Hall C3 | 14:30–16:00
October 31, 2025 at 2:06 PM
Reposted by Debora Nozza
🗓️ Nov 6 – Findings Posters
The “r” in “woman” stands for rights.
💬 We propose a taxonomy of social dynamics in implicit misogyny (EN,IT), auditing 9 LLMs — and they consistently fail. The more social knowledge a message requires, the worse they perform.
📍 Hall C | 12:30–13:30
October 31, 2025 at 2:06 PM
Reposted by Debora Nozza
🗓️ Nov 7 – Main Conference Posters
Principled Personas: Defining and Measuring the Intended Effects of Persona Prompting on Task Performance
🧍 Discussing different applications for LLM persona prompting, and how to measure their success.
📍 Hall C | 10:30–12:00
October 31, 2025 at 2:06 PM
Reposted by Debora Nozza
🗓️ Nov 7 – Main Conference Posters
TrojanStego: Your Language Model Can Secretly Be a Steganographic Privacy-Leaking Agent
🔒 LLMs can be fine-tuned to leak secrets via token-based steganography!
📍 Hall C | 10:30–12:00
October 31, 2025 at 2:06 PM