UKP Lab
@ukplab.bsky.social
The Ubiquitous Knowledge Processing Lab researches Natural Language Processing (#NLProc) with a strong emphasis on Large Language Models, Conversational AI & Question Answering | @cs-tudarmstadt.bsky.social · @TUDa.bsky.social

https://www.ukp.tu-darmstadt
The briefing also features perspectives from:

👤 Prof. Dr. Hinrich Schütze, LMU München

👤 Prof. Dr. @dorotheakolossa.bsky.social, @tuberlin.bsky.social

👤 Dr. @paul-rottger.bsky.social, @oii.ox.ac.uk

👤 Dr. Jonas Geiping, Max-Planck-Institut für Intelligente Systeme

(4/🧵)
January 23, 2026 at 8:59 AM
Most strikingly, she emphasises that 𝗷𝘂𝘀𝘁 𝗮 𝗳𝗲𝘄 𝗲𝘅𝗮𝗺𝗽𝗹𝗲𝘀 𝗰𝗮𝗻 𝗰𝗮𝘂𝘀𝗲 𝗳𝗮𝗿-𝗿𝗲𝗮𝗰𝗵𝗶𝗻𝗴 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝘂𝗿𝗮𝗹 𝘀𝗵𝗶𝗳𝘁𝘀 𝗶𝗻 𝗟𝗟𝗠𝘀, potentially affecting current models as well. For practitioners, the takeaway is clear: 𝗰𝗮𝗿𝗲𝗳𝘂𝗹 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗱𝗮𝘁𝗮 𝗰𝘂𝗿𝗮𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝘁𝗵𝗼𝗿𝗼𝘂𝗴𝗵 𝘁𝗲𝘀𝘁𝗶𝗻𝗴 𝗮𝗳𝘁𝗲𝗿 𝗳𝗶𝗻𝗲-𝘁𝘂𝗻𝗶𝗻𝗴 are essential.

(3/🧵)
January 23, 2026 at 8:59 AM
In a new briefing by the @sciencemediacenter.de, Prof. Dr. @igurevych.bsky.social (@tuda.bsky.social) notes that the study’s methodology is well aligned with its claims: It extends earlier work by the same lab showing that fine-tuning can lead to broader misalignment.

(2/🧵)
January 23, 2026 at 8:59 AM
💬 We thank Prof. Koto for the insightful talk and the stimulating discussion with UKP members on trustworthy, controllable, and culturally grounded #NLP systems.

#UKPLab #NLProc #LLM #TrustworthyAI #GuestTalk #ResponsibleAI @tuda.bsky.social @cs-tudarmstadt.bsky.social

(5/5)
January 21, 2026 at 11:55 AM
🤖 He further discussed verifiable reasoning in high-stakes domains with FinChain and agentic approaches to LLM control, presenting AgentFly as a reinforcement-learning-based framework for scalable and controlled agent behavior.

(4/🧵)
January 21, 2026 at 11:55 AM
🌍 Prof. Koto presented work on role-aware access control for LLMs and introduced IndoSafety, a culturally grounded framework for evaluating LLM safety in Indonesian languages, highlighting the need to account for organizational roles as well as linguistic and sociocultural context.

(3/🧵)
January 21, 2026 at 11:55 AM
🧠 In his talk 𝘛𝘰𝘸𝘢𝘳𝘥𝘴 𝘛𝘳𝘶𝘴𝘵𝘸𝘰𝘳𝘵𝘩𝘺 𝘓𝘓𝘔𝘴: 𝘊𝘶𝘭𝘵𝘶𝘳𝘢𝘭 𝘚𝘢𝘧𝘦𝘵𝘺, 𝘙𝘰𝘭𝘦-𝘈𝘸𝘢𝘳𝘦 𝘊𝘰𝘯𝘵𝘳𝘰𝘭, 𝘢𝘯𝘥 𝘈𝘨𝘦𝘯𝘵𝘪𝘤 𝘋𝘪𝘳𝘦𝘤𝘵𝘪𝘰𝘯𝘴, Prof. Koto addressed challenges that arise as LLMs move from research prototypes into real-world deployment, focusing on safety, control, and trustworthiness.

(2/🧵)
January 21, 2026 at 11:55 AM
👏 Congratulations to all authors and collaborators. We look forward to presenting our work at EACL 2026 in #Rabat. More details to follow.

#NLProc #LLM #MachineLearning #UKPLab #Research #EACL2026
January 7, 2026 at 11:05 AM
9️⃣ 𝗔𝗜𝗖𝗗 𝗕𝗲𝗻𝗰𝗵: 𝗔 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗶𝗻𝗴 𝗕𝗲𝗻𝗰𝗵𝗺𝗮𝗿𝗸 𝗳𝗼𝗿 𝗔𝗜-𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝗱 𝗖𝗼𝗱𝗲 𝗗𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻
Daniil Orel, Dilshod Azizov, @ineil77.bsky.social, Yuxia Wang, @igurevych.bsky.social, Preslav Nakov
January 7, 2026 at 11:05 AM
8️⃣ 𝗟𝗟𝗠𝘀 𝗮𝘀 𝗖𝘂𝗹𝘁𝘂𝗿𝗮𝗹 𝗔𝗿𝗰𝗵𝗶𝘃𝗲𝘀: 𝗖𝘂𝗹𝘁𝘂𝗿𝗮𝗹 𝗖𝗼𝗺𝗺𝗼𝗻𝘀𝗲𝗻𝘀𝗲 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗚𝗿𝗮𝗽𝗵 𝗘𝘅𝘁𝗿𝗮𝗰𝘁𝗶𝗼𝗻
Junior Cedric Tonga, @ccliu.bsky.social, @igurevych.bsky.social, Fajri Koto
January 7, 2026 at 11:05 AM
7️⃣ 𝗗𝗲𝗰𝗶𝘀𝗶𝗼𝗻-𝗠𝗮𝗸𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗗𝗲𝗹𝗶𝗯𝗲𝗿𝗮𝘁𝗶𝗼𝗻: 𝗠𝗲𝘁𝗮-𝗿𝗲𝘃𝗶𝗲𝘄𝗶𝗻𝗴 𝗮𝘀 𝗮 𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁-𝗴𝗿𝗼𝘂𝗻𝗱𝗲𝗱 𝗗𝗶𝗮𝗹𝗼𝗴𝘂𝗲
@sukannya.bsky.social, @nilsdy.bsky.social, @a-lauscher.bsky.social, @igurevych.bsky.social
January 7, 2026 at 11:05 AM
6️⃣ 𝗔𝗕𝗖𝗗-𝗟𝗜𝗡𝗞: 𝗔𝗻𝗻𝗼𝘁𝗮𝘁𝗶𝗼𝗻 𝗕𝗼𝗼𝘁𝘀𝘁𝗿𝗮𝗽𝗽𝗶𝗻𝗴 𝗳𝗼𝗿 𝗖𝗿𝗼𝘀𝘀-𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁 𝗙𝗶𝗻𝗲-𝗚𝗿𝗮𝗶𝗻𝗲𝗱 𝗟𝗶𝗻𝗸𝘀
Serwar Basch, @ikuznetsov.bsky.social, @tomhope.bsky.social, @igurevych.bsky.social
January 7, 2026 at 11:05 AM
5️⃣ 𝗕𝗲𝘆𝗼𝗻𝗱 “𝗡𝗼𝘁 𝗡𝗼𝘃𝗲𝗹 𝗘𝗻𝗼𝘂𝗴𝗵”: 𝗘𝗻𝗿𝗶𝗰𝗵𝗶𝗻𝗴 𝗦𝗰𝗵𝗼𝗹𝗮𝗿𝗹𝘆 𝗖𝗿𝗶𝘁𝗶𝗾𝘂𝗲 𝘄𝗶𝘁𝗵 𝗟𝗟𝗠-𝗔𝘀𝘀𝗶𝘀𝘁𝗲𝗱 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸
Osama Mohammed Afzal, Preslav Nakov, @tomhope.bsky.social, @igurevych.bsky.social
January 7, 2026 at 11:05 AM
3️⃣ 𝗧𝗮𝗶𝗹𝗼𝗿𝗲𝗱 𝗘𝗺𝗼𝘁𝗶𝗼𝗻𝗮𝗹 𝗟𝗟𝗠-𝗦𝘂𝗽𝗽𝗼𝗿𝘁𝗲𝗿: 𝗘𝗻𝗵𝗮𝗻𝗰𝗶𝗻𝗴 𝗖𝘂𝗹𝘁𝘂𝗿𝗮𝗹 𝗦𝗲𝗻𝘀𝗶𝘁𝗶𝘃𝗶𝘁𝘆
@ccliu.bsky.social, Hiba Arnaout, Nils Kovacic, Dana Atzil-Slonim, @igurevych.bsky.social

4️⃣ 𝗔𝘂𝗱𝗶𝘁𝗶𝗻𝗴 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹 𝗨𝗻𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝘃𝗶𝗮 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻 𝗗𝗲𝗰𝗼𝗺𝗽𝗼𝘀𝗶𝘁𝗶𝗼𝗻
@anmolgoel.bsky.social, @alan-ritter.bsky.social, @igurevych.bsky.social
January 7, 2026 at 11:05 AM
1️⃣ 𝗚𝗥𝗜𝗧𝗛𝗼𝗽𝗽𝗲𝗿: 𝗗𝗲𝗰𝗼𝗺𝗽𝗼𝘀𝗶𝘁𝗶𝗼𝗻-𝗙𝗿𝗲𝗲 𝗠𝘂𝗹𝘁𝗶-𝗛𝗼𝗽 𝗗𝗲𝗻𝘀𝗲 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹
@justus-jonas.bsky.social, Nils Reimers, @igurevych.bsky.social

2️⃣ 𝗛𝗼𝘄 𝗤𝘂𝗮𝗻𝘁𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗦𝗵𝗮𝗽𝗲𝘀 𝗕𝗶𝗮𝘀 𝗶𝗻 𝗟𝗮𝗿𝗴𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹𝘀
@federicomarcuzzi.bsky.social, Xuefei Ning, @royschwartznlp.bsky.social, @igurevych.bsky.social
January 7, 2026 at 11:05 AM
🔬 The accepted papers span a broad range of current #NLP research, including 𝗱𝗲𝗻𝘀𝗲 𝗿𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹, 𝗯𝗶𝗮𝘀 𝗮𝗻𝗱 𝗾𝘂𝗮𝗻𝘁𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗶𝗻 𝗟𝗟𝗠𝘀, 𝗰𝘂𝗹𝘁𝘂𝗿𝗮𝗹 𝗮𝗻𝗱 𝗲𝗺𝗼𝘁𝗶𝗼𝗻𝗮𝗹 𝘀𝗲𝗻𝘀𝗶𝘁𝗶𝘃𝗶𝘁𝘆, 𝗺𝗼𝗱𝗲𝗹 𝘂𝗻𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴, 𝗽𝗲𝗲𝗿 𝗿𝗲𝘃𝗶𝗲𝘄 𝗮𝗻𝗱 𝘀𝗰𝗵𝗼𝗹𝗮𝗿𝗹𝘆 𝗳𝗲𝗲𝗱𝗯𝗮𝗰𝗸, and 𝗔𝗜-𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝗱 𝗰𝗼𝗱𝗲 𝗱𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻.
January 7, 2026 at 11:05 AM
🔗 You can find the event announcement in our earlier post:
bsky.app/profile/ukpl...

🔗 Link to the event:
www.athene-center.de/roadmap-to-i...

Photos: Catharina Frank

#UKPLab #ATHENE #Cybersecurity #InternetSecurity #AI #TrustworthyAI #NLP #NLProc @cs-tudarmstadt.bsky.social
December 19, 2025 at 9:30 AM
📸 Here are a few impressions from the ATHENE Center event 𝘊𝘺𝘣𝘦𝘳𝘯𝘢𝘵𝘪𝘰𝘯 𝘋𝘦𝘶𝘵𝘴𝘤𝘩𝘭𝘢𝘯𝘥: 𝘙𝘰𝘢𝘥𝘮𝘢𝘱 𝘵𝘰 𝘐𝘯𝘵𝘦𝘳𝘯𝘦𝘵 𝘚𝘦𝘤𝘶𝘳𝘪𝘵𝘺 in Frankfurt, where experts from academia, industry and policy came together to discuss how internet security can be strengthened in the age of AI 🤖.
December 19, 2025 at 9:30 AM
🔗 𝗥𝗲𝗹𝗮𝘁𝗲𝗱 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲𝘀
𝗔𝗥𝗥 𝗗𝗮𝘁𝗮 𝗖𝗼𝗹𝗹𝗲𝗰𝘁𝗶𝗼𝗻: arr-data.aclweb.org
𝗗𝗮𝗴𝘀𝘁𝘂𝗵𝗹 𝗦𝗲𝗺𝗶𝗻𝗮𝗿 𝗼𝗻 𝗣𝗲𝗲𝗿 𝗥𝗲𝘃𝗶𝗲𝘄:
www.dagstuhl.de/en/seminars/...

#NLP #NLProc
ACL Rolling Review Data Collection (ARR-DC)
Collecting and curating a large-scale dataset of peer reviews and associated metadata from the ACL community.
arr-data.aclweb.org
December 16, 2025 at 1:41 PM
🙌 𝗛𝘂𝗴𝗲 𝗧𝗵𝗮𝗻𝗸𝘀 𝘁𝗼 𝗢𝘂𝗿 𝗖𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗼𝗿𝘀!
This release wouldn’t be possible without the contributions of: Sheng Lu, Nils Dycke, Atnafu Lambebo Tonja, Thamar Solorio, Xiaodan Zhu, Koen Dercksen, Lizhen Qu, Margot Mieskes, Dirk Hovy and Iryna Gurevych.
December 16, 2025 at 1:41 PM