Lukas Klein
@lukaskln.bsky.social
🦠🧬💻 AI in Life Sciences / PostDoc at EPFL / ETH Zürich PhD
Reposted by Lukas Klein
In case you missed the last heidelberg.ai talk by Prof. Yuki Asano (@yukimasano.bsky.social) on "Post-Pretraining in Vision, and Language Foundation Models", it has just been released on the heidelberg.ai YouTube channel: www.youtube.com/watch?v=5UTC...
June 2, 2025 at 12:23 PM
Reposted by Lukas Klein
We’re thrilled to welcome Yuki Asano, Professor at the University of Technology Nuremberg and head of the Fundamental AI (FunAI) Lab, to our heidelberg.ai / NCT Data Science Seminar series on May 13th at 5 pm in Heidelberg (INF280 Seminar Rooms K1+K2) for an in-person event.
April 27, 2025 at 9:44 AM
Reposted by Lukas Klein
✨Excited to share our work on “AI-powered virtual tissues from spatial proteomics for clinical diagnostics and biomedical discovery” (arxiv.org/pdf/2501.060...), building on our vision paper in @cellpress.bsky.social on multi-scale, multi-modal foundation models (shorturl.at/G2Dew).
January 16, 2025 at 1:01 PM
In her talk, Charlotte will share insights into the fields of Virtual Cells and Digital Twins, highlighting how AI is shaping personalized cancer therapies through advanced simulations of cellular behavior and patient-specific outcomes.
January 8, 2025 at 12:50 PM
If you're interested in AI for 🦠 Virtual Cells and 👥 Digital Twins in Oncology, join our Heidelberg AI talk by @bunnech.bsky.social on the 23rd, either in person or virtually!

More information: heidelberg.ai/2025/01/23/c...
January 8, 2025 at 12:50 PM
🤔 Curiously, the method that emerges as top-performing has not been examined in any of the relevant related studies.

Happy to discuss the results during the conference!

Paper: arxiv.org/abs/2409.16756
Benchmark: github.com/IML-DKFZ/latec
(3/3)
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Explainable AI (XAI) is a rapidly growing domain with a myriad of proposed methods as well as metrics aiming to evaluate their efficacy. However, current studies are often of limited scope, examining ...
December 3, 2024 at 1:08 PM
🚀 Through LATEC, we showcase how conflicting metrics can produce unreliable rankings and propose a more robust evaluation scheme. We critically evaluated 17 XAI methods across 20 metrics in 7,560 unique setups, spanning varied architectures and input modalities.
(2/3)
December 3, 2024 at 1:08 PM
Picking the right explainable AI method for your computer vision task? Wondering how reliably it can be evaluated?

🎯 Then you might be interested in our latest #neurips2024 publication on LATEC, a (meta-)evaluation benchmark for XAI methods and metrics!

📄 arxiv.org/abs/2409.16756
🧵(1/3)
December 3, 2024 at 1:08 PM