Machine Learning in Science
@mackelab.bsky.social
2.3K followers · 200 following · 60 posts
We build probabilistic #MachineLearning and #AI tools for scientific discovery, especially in Neuroscience. Probably not posted by @jakhmack.bsky.social. 📍 @ml4science.bsky.social, Tübingen, Germany
mackelab.bsky.social
Congrats to Dr Michael Deistler @deismic.bsky.social, who defended his PhD!

Michael worked on "Machine Learning for Inference in Biophysical Neuroscience Simulations", focusing on simulation-based inference and differentiable simulation.

We wish him all the best for the next chapter! 👏🎓
mackelab.bsky.social
It goes without saying, but all posters are of course with @jakhmack.bsky.social as well!👨‍🔬 9/9
mackelab.bsky.social
IV-21. @byoungsookim.bsky.social will present: Seeing in 3D: Compound eye integration in connectome-constrained models of the fruit fly (joint work with @srinituraga.bsky.social) 8/9
mackelab.bsky.social
IV-14. Zinovia Stefanidi will present: Progress on building connectome-constrained models of the whole fly optic lobe (joint work with @srinituraga.bsky.social) 7/9
mackelab.bsky.social
Session 4 (Wednesday 14:00):
IV-9. @stewah.bsky.social will present: A new perspective on LLM-based model discovery with applications in neuroscience (joint work with @danielged.bsky.social) 6/9
mackelab.bsky.social
III-9. @raeesmk.bsky.social will present: Modeling Spatial Hearing with Cochlear Implants Using Deep Neural Networks (joint work with @stefanieliebe.bsky.social) 5/9
mackelab.bsky.social
Session 3 (Wednesday 12:30):
III-6. @matthijspals.bsky.social will present: Sequence memory in distinct subspaces in data-constrained RNNs of human working memory (joint work with @stefanieliebe.bsky.social) 4/9
mackelab.bsky.social
II-9. @lulmer.bsky.social will present: Integrating neural activity measurements into connectome-constrained models (joint work with @srinituraga.bsky.social) 3/9
mackelab.bsky.social
Session 2 (Tuesday 18:00):
II-4. Isaac Omolayo will present: Contrastive Learning for Predicting Neural Activity in Connectome-Constrained Deep Mechanistic Networks 2/9
mackelab.bsky.social
The Macke lab is well-represented at the @bernsteinneuro.bsky.social conference in Frankfurt this year! We have lots of exciting new work to present with 7 posters (details👇) 1/9
Reposted by Machine Learning in Science
sbi-devs.bsky.social
From hackathon to release: sbi v0.25 is here! 🎉

What happens when dozens of SBI researchers and practitioners collaborate for a week? New inference methods, new documentation, lots of new embedding networks, a bridge to Pyro and a bridge between flow matching and score-based methods 🤯

1/7 🧵
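For readers who want to try the release, here is a minimal sketch of the standard sbi workflow. The prior and toy simulator are made up purely for illustration; the interface shown (NPE, BoxUniform) is the one current sbi versions document, though names may shift between releases.

```python
# Minimal sketch of the standard sbi workflow (NPE replaced the older SNPE
# name in recent releases). The prior and toy simulator below are made up
# purely for illustration.
import torch
from sbi.inference import NPE
from sbi.utils import BoxUniform

prior = BoxUniform(low=-2.0 * torch.ones(3), high=2.0 * torch.ones(3))

def simulator(theta: torch.Tensor) -> torch.Tensor:
    # Toy simulator: noisy identity mapping from parameters to data.
    return theta + 0.1 * torch.randn_like(theta)

theta = prior.sample((1000,))
x = simulator(theta)

inference = NPE(prior=prior)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()

x_obs = torch.zeros(3)
samples = posterior.sample((500,), x=x_obs)  # approximate p(theta | x_obs)
```

The flow-matching and score-based methods mentioned in the release follow the same append_simulations/train/build_posterior pattern, so switching method is largely a one-line change.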
mackelab.bsky.social
💡 Takeaway:
By leveraging foundation models like TabPFN, we can make SBI training-free, simulation-efficient, and easy to use.
This work is another step toward user-friendly Bayesian inference for a broader science and engineering community.
mackelab.bsky.social
But does it scale to complex real-world problems? We tested it on two challenging Hodgkin-Huxley-type models:
🧠 single-compartment neuron
🦀 31-parameter crab pyloric network

NPE-PF delivers tight posteriors & accurate predictions with far fewer simulations than previous methods.
Results on the pyloric simulator.
(a) Voltage traces from the experimental measurement (top) and a posterior predictive simulation using the posterior mean from TSNPE-PF as the parameter (bottom).
(b) Average distance (energy scoring rule) to the observation and percentage of valid simulations from posterior samples, compared to experimental results obtained in Glaser et al.
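For context, a single-compartment Hodgkin-Huxley model maps a handful of maximal conductances to a voltage trace, and those conductances are the kind of parameters SBI infers. A minimal, generic sketch with textbook HH kinetics (not the paper's exact model or parameter set):

```python
# Generic single-compartment Hodgkin-Huxley simulator, for illustration only;
# the paper's models (and the 31-parameter pyloric network) are richer.
import numpy as np

def hh_simulate(g_na=120.0, g_k=36.0, g_leak=0.3, i_inj=10.0,
                t_max=50.0, dt=0.01):
    """Forward-Euler integration of the classic HH equations (mV, ms, uA/cm^2)."""
    e_na, e_k, e_leak, c_m = 50.0, -77.0, -54.4, 1.0
    # Standard rate functions for the gating variables m, h, n.
    a_m = lambda v: 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = lambda v: 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = lambda v: 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = lambda v: 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = lambda v: 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = lambda v: 0.125 * np.exp(-(v + 65.0) / 80.0)

    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = np.empty(int(round(t_max / dt)))
    for i in range(len(trace)):
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_leak * (v - e_leak))
        v += dt * (i_inj - i_ion) / c_m
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        trace[i] = v
    return trace
```

Even this stripped-down version shows why simulation efficiency matters: every posterior evaluation costs a full numerical integration.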
mackelab.bsky.social
What you get with NPE-PF:
🚫 No need to train inference nets or tune hyperparameters.
🌟 Competitive or superior performance vs. standard SBI methods.
🚀 Especially strong performance for smaller simulation budgets.
🔄 Filtering to handle large datasets + support for sequential inference.
SBI benchmark results for amortized and sequential NPE-PF. 
C2ST for NPE, NLE, and NPE-PF across ten reference posteriors (lower is better); dots indicate averages and bars show 95% confidence intervals over five independent runs.
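For readers unfamiliar with the metric in this figure: C2ST (classifier two-sample test) trains a classifier to tell approximate posterior samples apart from reference samples and reports its accuracy, so 0.5 means indistinguishable (best) and 1.0 means trivially separable. A self-contained sketch using scikit-learn, as a generic setup rather than the paper's exact protocol:

```python
# Generic C2ST sketch: accuracy near 0.5 means the approximate posterior
# samples are indistinguishable from the reference samples.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def c2st(samples_approx, samples_ref, seed=0):
    """samples_approx, samples_ref: arrays of shape (num_samples, dim)."""
    X = np.vstack([samples_approx, samples_ref])
    y = np.concatenate([np.zeros(len(samples_approx)),
                        np.ones(len(samples_ref))])
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                        random_state=seed)
    # Cross-validated classification accuracy as the test statistic.
    return cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
```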
mackelab.bsky.social
The key idea:
TabPFN, originally trained for tabular regression and classification, can estimate posteriors by autoregressively modeling one parameter dimension after the other.
It’s remarkably effective, even though TabPFN was not designed for SBI.
Illustration of NPE and NPE-PF: Both approaches use simulations sampled from the prior and simulator. In (standard) NPE, a neural density estimator is trained to obtain the posterior. In NPE-PF, the posterior is evaluated by autoregressively passing the simulation dataset and observations to TabPFN.
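A rough sketch of that autoregressive idea: factorize p(θ | x) = ∏_d p(θ_d | θ_1, …, θ_{d-1}, x) and let TabPFN supply each one-dimensional conditional in-context. This is illustrative, not the authors' implementation; sample_posterior is our own hypothetical helper, and the quantile-output arguments of the tabpfn package's TabPFNRegressor may differ across versions.

```python
# Sketch of NPE-PF's autoregressive factorization (not the authors' code):
# p(theta | x) = prod_d p(theta_d | theta_<d, x), each 1-D conditional
# estimated in-context by TabPFN. Quantile API is an assumption and may
# differ across tabpfn versions.
import numpy as np
from tabpfn import TabPFNRegressor

def sample_posterior(theta_sim, x_sim, x_obs, num_samples=100, seed=0):
    """theta_sim: (N, D) simulated parameters; x_sim: (N, K) simulator outputs."""
    rng = np.random.default_rng(seed)
    _, n_dims = theta_sim.shape
    samples = np.zeros((num_samples, n_dims))
    for d in range(n_dims):
        # In-context set: predict theta_d from (x, theta_1, ..., theta_{d-1}).
        feats = np.hstack([x_sim, theta_sim[:, :d]])
        reg = TabPFNRegressor()
        reg.fit(feats, theta_sim[:, d])  # "fit" just stores the context.
        for s in range(num_samples):
            query = np.concatenate([x_obs, samples[s, :d]])[None, :]
            # Inverse-CDF sampling: read off a uniformly random quantile of
            # the predictive distribution (assumed API; slow but clear).
            u = float(rng.uniform(0.01, 0.99))
            pred = reg.predict(query, output_type="quantiles", quantiles=[u])
            samples[s, d] = float(np.asarray(pred).ravel()[0])
    return samples
```

Because TabPFN does inference in-context, "fit" only stores the simulations; no gradient training happens, which is where the training-free claim comes from.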
mackelab.bsky.social
SBI usually relies on training neural nets on simulated data to approximate posteriors. But:
⚠️ Simulators can be expensive
⚠️ Training & tuning neural nets can be tedious
Our method NPE-PF repurposes TabPFN as an in-context density estimator for training-free, simulation-efficient Bayesian inference.
mackelab.bsky.social
New preprint: SBI with foundation models!
Tired of training or tuning your inference network, or waiting for your simulations to finish? Our method NPE-PF can help: It provides training-free simulation-based inference, achieving competitive performance with orders of magnitude fewer simulations! ⚡️
Reposted by Machine Learning in Science
mackelab.bsky.social
Many people in our lab use Scholar Inbox regularly -- highly recommended!
mackelab.bsky.social
This work was enabled and funded by an innovation project of @ml4science.bsky.social
mackelab.bsky.social
Congrats to @gmoss13.bsky.social, @coschroeder.bsky.social, @jakhmack.bsky.social, together with our great collaborators Vjeran Višnjević, Olaf Eisen, @oraschewski.bsky.social, Reinhard Drews. Thank you to @tuebingen-ai.bsky.social and @awi.de for making this work possible.
mackelab.bsky.social
If you’re interested in learning more, check out the paper and code, or get in touch with @gmoss13.bsky.social

Code: github.com/mackelab/sbi...
Paper: www.cambridge.org/core/journal...
mackelab.bsky.social
We obtain posterior distributions over ice accumulation and melting rates for the Ekström Ice Shelf over the past several hundred years. This allows us to make quantitative statements about the history of atmospheric and oceanic conditions.