Lorenz Linhardt
@lorenzlinhardt.bsky.social
170 followers 260 following 5 posts
PhD Student at the TU Berlin ML group + BIFOLD Model robustness/correction 🤖🔧 Understanding representation spaces 🌌✨
Reposted by Lorenz Linhardt
lciernik.bsky.social
🎉 Presenting at #ICML2025 tomorrow!
Come and explore how representational similarities behave across datasets :)

📅 Thu Jul 17, 11 AM-1:30 PM PDT
📍 East Exhibition Hall A-B #E-2510

Huge thanks to @lorenzlinhardt.bsky.social, Marco Morik, Jonas Dippel, Simon Kornblith, and @lukasmut.bsky.social!
Reposted by Lorenz Linhardt
bifold.berlin
Join us — correction: Join us - we have four open positions for doctoral/postdoctoral researchers at BIFOLD, doing cutting-edge research in data management and machine learning, as well as their intersections. More information: www.bifold.berlin/about-us/opp...
lorenzlinhardt.bsky.social
🎉 Excited to have had the opportunity to present two posters at the ICLR2025 workshops! 🖼️🖼️

A big thanks to my coauthors and to everyone who dropped by to discuss!

Also, thanks to the Re-Align and DeLTa organizers for hosting such an inspiring workshop day. ✨

@bifold.berlin @tuberlin.bsky.social
Reposted by Lorenz Linhardt
bifold.berlin
CALL FOR PAPERS: #XAI2025, Special Track: Actionable explainable AI. Submit your paper and check the Submission deadlines: xaiworldconference.com/2025/importa...

Actionable Explainable AI
xaiworldconference.com/2025/actiona...
lorenzlinhardt.bsky.social
Thanks to everyone for the lively discussions on concept convexity and alignment in DL models at #NLDL (Northern Lights Deep Learning conference)! ❄️❄️

Had a great time in beautiful Tromsø connecting with fellow researchers 🤝

📜 Feel free to check out our preprint: arxiv.org/abs/2409.06362
lorenzlinhardt.bsky.social
Happy to co-chair this year's special track on "Actionable Explainable AI" with a great team! 📄🦾
Please consider submitting! (Abstract deadline: February 10th) ☝️
bifold.berlin
CALL FOR PAPERS: #XAI2025, Special Track: Actionable explainable AI. Submit your paper until February 15, 2025.

xaiworldconference.com/2025/actiona...

#XAI #LRP #counterfactuals #shapley #models #deeplearning #interpretability #decisionmaking @lorenzlinhardt.bsky.social @tuberlin.bsky.social
Following the success of Explainable AI in generating faithful and understandable explanations of complex ML models, there has been increasing attention on how the outcomes of Explainable AI can be systematically used to enable meaningful actions. These considerations are studied within the subfield of Actionable XAI. In particular, research questions relevant to this subfield include:

(1) what types of explanations are most helpful in enabling human experts to achieve more efficient and accurate decision-making,
(2) how one can systematically improve the robustness and generalization ability of ML models, or align them with human decision making and norms, based on human feedback on explanations,
(3) how to enable meaningful actioning of real-world systems via interpretable ML-based digital twins, and
(4) how to evaluate and improve the quality of actions derived from XAI in an objective and reproducible manner.

This special track will address both the technical and practical aspects of Actionable XAI. This includes the question of how to build highly informative explanations that form the basis for actionability, aiming for solutions that are interoperable with existing explanation techniques, such as Shapley values, LRP, or counterfactuals, and with existing ML models. This special track will also cover the exploration of real-world use cases where these actions lead to improved outcomes.
Reposted by Lorenz Linhardt
lukasmut.bsky.social
Excited that Re-Align will have its second iteration at ICLR 2025 in Singapore! More soon! 🧠🤖