Huge thanks to the organizers and everyone who stopped by our poster/talk!
#NLProc
@seanjwestwood.bsky.social.
🗓️ Submission deadlines: December 17 (direct) and January 2 (ARR).
🔗 workshop-wassa.github.io
#NLProc
*New at EJ* “Research Similarity and Women in Academia,” Piera Bello, Alessandra Casarico & @deboranozza.bsky.social, on the role of research similarity between applicants and selection committees in academic promotions, and the implications for gender diversity: tinyurl.com/mrd8cpkf
Honored to have received the Outstanding Senior Area Chair Award!
Check out our papers 👇
Catch our team across Main, Findings, Workshops & Demos 👇
Paper: aclanthology.org/2025.finding...
#EMNLP2025
📍 Ethics, Bias, and Fairness (Poster Session 2)
📅 Wed, November 5, 11:00-12:30 - Hall C
📖 Check the paper! arxiv.org/abs/2505.16467
See you in Suzhou! 👋
⚡ We show that personalization of content moderation models can be harmful and perpetuate hate speech, defeating the purpose of the system and hurting the community.
We argue that personalized moderation needs boundaries, and we show how to build them.
Personalization up to a Point
🧠 In the context of content moderation, we show that fully personalized models can perpetuate hate speech, and propose a policy-based method to impose legal boundaries.
📍 Hall C | 11:00–12:30
📘 Biased Tales
A dataset of 5k short LLM-generated bedtime stories spanning sociocultural axes, with an evaluation taxonomy for character-centric and context-centric attributes.
📍 Hall C | 11:00–12:30
Co-DETECT: Collaborative Discovery of Edge Cases in Text Classification
🧩 Co-DETECT – an iterative, human-LLM collaboration framework for surfacing edge cases and refining annotation codebooks in text classification.
📍 Demo Session 2 – Hall C3 | 14:30–16:00
The “r” in “woman” stands for rights.
💬 We propose a taxonomy of social dynamics in implicit misogyny (EN, IT), auditing 9 LLMs: they consistently fail, and the more social knowledge a message requires, the worse they perform.
📍 Hall C | 12:30–13:30
Principled Personas: Defining and Measuring the Intended Effects of Persona Prompting on Task Performance
🧍 Discussing different applications for LLM persona prompting, and how to measure their success.
📍 Hall C | 10:30–12:00
TrojanStego: Your Language Model Can Secretly Be a Steganographic Privacy-Leaking Agent
🔒 LLMs can be fine-tuned to leak secrets via token-based steganography!
📍 Hall C | 10:30–12:00