Laure Ciernik
@lciernik.bsky.social
120 followers 110 following 8 posts
PhD @ ML Group TU Berlin, BIFOLD, HFA, @ellis.eu | BSc & MSc @ethzurich.bsky.social
lciernik.bsky.social
🎉 Presenting at #ICML2025 tomorrow!
Come and explore how representational similarities behave across datasets :)

📅 Thu Jul 17, 11 AM-1:30 PM PDT
📍 East Exhibition Hall A-B #E-2510

Huge thanks to @lorenzlinhardt.bsky.social, Marco Morik, Jonas Dippel, Simon Kornblith, and @lukasmut.bsky.social!
lciernik.bsky.social
2nd key insight: The link between model similarity & behavior varies by dataset. Single-domain sets show strong correlations, while some multi-domain ones have high-performing, dissimilar models. Thus, the Platonic Representation Hypothesis may depend on the dataset's nature. 🧵 6/7
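For concreteness, a minimal Python sketch of one way to quantify this link on a single dataset; the function name, inputs, and the use of Spearman rank correlation are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch only: relate pairwise model similarity to pairwise
# behavioural agreement on one dataset. Names are hypothetical.
import numpy as np
from scipy.stats import spearmanr

def similarity_behavior_link(similarity: np.ndarray, agreement: np.ndarray) -> float:
    """Correlate two symmetric model-by-model matrices over unique model pairs.

    similarity[i, j]: representational similarity of models i and j (e.g. linear CKA)
    agreement[i, j]:  behavioural agreement of models i and j (e.g. prediction overlap)
    """
    iu = np.triu_indices_from(similarity, k=1)  # upper triangle = unique model pairs
    rho, _ = spearmanr(similarity[iu], agreement[iu])
    return rho
```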
lciernik.bsky.social
Key finding: The training objective is a crucial factor for similarity consistency! SSL models show remarkably consistent representations across stimulus sets, whereas image-text and supervised models vary strongly from dataset to dataset. 🧵 5/7
lciernik.bsky.social
We therefore propose a framework to systematically study whether the relative representational similarities between models remain consistent. We measure similarities between sets of models with different traits and correlate them across dataset pairs to assess stability across stimuli. 🧵 4/7
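A minimal sketch of that idea, assuming pre-computed model-by-model similarity matrices per dataset; the function name, input format, and the choice of Spearman rank correlation are illustrative assumptions rather than the paper's actual code.

```python
# Sketch of the consistency check: given one model-by-model similarity matrix
# per dataset, correlate the pairwise similarities for every pair of datasets.
# High correlations indicate stable relative model similarities across stimuli.
import itertools
import numpy as np
from scipy.stats import spearmanr

def consistency_across_datasets(sim_by_dataset: dict) -> dict:
    """Map dataset name -> symmetric similarity matrix; return per-dataset-pair correlations."""
    consistency = {}
    for (name_a, sim_a), (name_b, sim_b) in itertools.combinations(sim_by_dataset.items(), 2):
        iu = np.triu_indices_from(sim_a, k=1)      # unique model pairs only
        rho, _ = spearmanr(sim_a[iu], sim_b[iu])   # rank correlation between the two datasets
        consistency[(name_a, name_b)] = rho
    return consistency
```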
lciernik.bsky.social
First finding: Representational similarities do not transfer directly across datasets; both their range and their patterns vary considerably from dataset to dataset. 🧵 3/7
Representational similarity using linear CKA. Left to right: natural multi- and single-domain, and specialized datasets, followed by mean and standard deviation across all datasets. Models (rows and columns) are ordered by a hierarchical clustering of the mean matrix. Yellow and white boxes highlight regions with more stable similarity patterns across datasets, corresponding to some image-text (yellow) and self-supervised model pairs (white), while cyan boxes show higher variability for mainly supervised model pairs.
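The figure relies on linear CKA (introduced by Kornblith et al.); a generic reference implementation can be sketched as below. This is a standard formulation, not necessarily the exact code used in the paper.

```python
# Linear CKA between two representations of the same stimuli
# (generic formulation; not necessarily the paper's exact implementation).
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X: (n_stimuli, d1), Y: (n_stimuli, d2), rows aligned to the same stimuli."""
    X = X - X.mean(axis=0)                             # centre each feature dimension
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2    # ||Y^T X||_F^2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)
```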
lciernik.bsky.social
The Platonic Representation Hypothesis (@phillipisola.bsky.social et al.) suggests that foundation models converge to a shared representation space. Yet most studies measure representational similarity on a single dataset, so we wondered: does this convergence hold more broadly? 🧵 2/7
lciernik.bsky.social
If two models are more similar to each other than to a third on ImageNet, does that still hold for medical or satellite images?

Our #ICML2025 paper analyses how vision model similarities generalize across datasets, which factors influence them, and how they relate to downstream task behavior. 🧵 1/7
Reposted by Laure Ciernik
eberleoliver.bsky.social
📜 History repeats itself: We investigated how early modern communities embraced scholarly advancements, reshaping scientific views and exploring the roots of science amid a changing world.

www.science.org/doi/10.1126/...

@mpiwg.bsky.social @tuberlin.bsky.social @bifold.berlin @science.org
Reposted by Laure Ciernik
flobarkmann.bsky.social
📢 If you are interested in single-cell foundation models (scFMs), stop by our poster (West 109) at the AiDrugX Workshop at NeurIPS 2024. We will present CancerFoundation, an scFM tailored for studying cancer biology 🧬.
Preprint: biorxiv.org/content/10.1...
Reposted by Laure Ciernik
valboeva.bsky.social
🚀 New preprint from our lab, Ekaterina Krymova, and @fabiantheis.bsky.social: UniversalEPI, an attention-based method to predict enhancer-promoter interactions from DNA sequence and ATAC-seq 🌟 By @aayushgrover.bsky.social, L. Zhang & I.L. Ibarra. Read the full preprint: www.biorxiv.org/content/10.1...