Christoph Molnar
@christophmolnar.bsky.social
6K followers · 970 following · 140 posts
Author of Interpretable Machine Learning and other books Newsletter: https://mindfulmodeler.substack.com/ Website: https://christophmolnar.com/
Pinned
christophmolnar.bsky.social
Interested in machine learning in science?

Timo and I recently published a book, and even if you are not a scientist, you'll find useful overviews of topics like causality and robustness.

The best part is that you can read it for free: ml-science-book.com
christophmolnar.bsky.social
Using feature importance to interpret your models?

This paper might be of interest to you. Papers by @gunnark.bsky.social are always worth checking out.
gunnark.bsky.social
In many XAI applications, it is crucial to determine whether features contribute individually or only when combined. However, existing methods fail to reveal cooperations since they entangle individual contributions with those made via interactions and dependencies. We show how to disentangle them!
christophmolnar.bsky.social
My stock portfolio is deep in the red, and tariffs by the Trump admin might be the cause. Could an LLM have been used to calculate them? This made me rethink how LLMs shape decisions, from big, global-economy-wrecking ones to everyday choices.
Who’s Really Making the Decisions?
LLMs, tariffs, and the silent takeover of decisions
mindfulmodeler.substack.com
christophmolnar.bsky.social
SHAP interpretations depend on background data — change the data, change the explanation. A critical but often overlooked issue in model interpretability.

Read more:
SHAP Interpretations Depend on Background Data — Here’s Why
Or why height doesn't matter in the NBA
mindfulmodeler.substack.com
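Not part of the post itself, but a minimal sketch of the effect, assuming a fitted model `model`, a feature frame `X`, and a hypothetical `height_cm` column (echoing the NBA example from the article): the same observation gets different attributions depending on which background data the explainer sees.

```python
# Sketch only: `model`, `X`, and the `height_cm` column are assumptions, not from the post.
import shap

background_typical = X.sample(200, random_state=0)                      # broad reference
background_tall = X[X["height_cm"] > 195].sample(50, random_state=0)    # narrow reference

explainer_typical = shap.Explainer(model.predict, background_typical)
explainer_tall = shap.Explainer(model.predict, background_tall)

# Same observation, two different explanations:
# attributions are always relative to the chosen reference data.
x = X.iloc[[0]]
print(explainer_typical(x).values)
print(explainer_tall(x).values)
```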
christophmolnar.bsky.social
The 3rd edition of Interpretable Machine Learning is out! 🎉 Major cleanup, better examples, and new chapters on Data & Models, Interpretability Goals, Ceteris Paribus, and LOFO Importance.

The book remains free to read for everyone, but you can also buy the ebook or paperback.
christophmolnar.bsky.social
Has anyone seen Counterfactual Explanations for machine learning models somewhere in the wild?

They are often discussed in research papers, but I have yet to see them used in an actual process or product.
christophmolnar.bsky.social
It's still hard for me to predict when it fails. For example, I told it to simply check the placement of citations in a markdown file, which should be doable with a regex, and Claude failed. Yet a similar task worked out the other day.
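For context, a check like that could look roughly like the following; this is a sketch under the assumption that the citations are Pandoc/Quarto-style keys such as [@molnar2022], and the filename is made up.

```python
# Hypothetical sketch: assumes Pandoc/Quarto-style citation keys and a made-up filename.
import re
from pathlib import Path

citation = re.compile(r"\[@[A-Za-z0-9_:.\-]+(?:;\s*@[A-Za-z0-9_:.\-]+)*\]")

for lineno, line in enumerate(Path("chapter.md").read_text(encoding="utf-8").splitlines(), start=1):
    for match in citation.finditer(line):
        print(f"line {lineno}: {match.group(0)}")
```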
christophmolnar.bsky.social
Trying Claude Code for some tasks. Paradoxically, it's most expensive when it doesn't work: it fails, then retries a few times, burning through tokens.

So sometimes it's 20 cents for saving you 20 minutes of work.

Other times it's $1 for wasting 10 minutes.
christophmolnar.bsky.social
Only waiting for the print proof, but if it looks good, I'll publish the third edition of Interpretable Machine Learning next week.

As always, it was more work than anticipated—especially moving the entire book project from Bookdown to Quarto, which took a bit of effort.
christophmolnar.bsky.social
Can an office game outperform machine learning?

My most recent post on Mindful Modeler dives into the wisdom of the crowds and prediction markets.

Read the full story here:
Can an office game outperform machine learning?
Wisdom of the crowds, prediction markets, and more fun in the workplace.
buff.ly
christophmolnar.bsky.social
5/ It was stressful, but I don’t regret it. I learned a lot and definitely feel validated in my skills again.

Full story & solution details: https://buff.ly/4gHZYHD
How to win an ML competition beyond predictive performance
A dive into the challenges and winning solution
mindfulmodeler.substack.com
christophmolnar.bsky.social
4/ Writing Supervised ML for Science at the same time was a huge plus—competition & book writing fed into each other (e.g., uncertainty quantification).
christophmolnar.bsky.social
3/ One key insight: SHAP's reference data matters! I used historical forecasts as the reference data for interpretability. I also combined SHAP with ceteris paribus profiles for sensitivity analysis.
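A rough idea of a ceteris paribus profile, in case it's unfamiliar: hold one observation fixed, sweep a single feature over a grid, and record the prediction. Everything named below (`model`, `X`, the "snowpack" feature) is a placeholder, not the actual competition code.

```python
# Placeholder sketch of a ceteris paribus profile, not the competition code.
import numpy as np
import pandas as pd

def ceteris_paribus(model, x_row: pd.Series, feature: str, grid: np.ndarray) -> pd.DataFrame:
    # Repeat the observation once per grid value, then vary only `feature`.
    profile = pd.DataFrame([x_row.values] * len(grid), columns=x_row.index)
    profile[feature] = grid
    profile["prediction"] = model.predict(profile[x_row.index])
    return profile[[feature, "prediction"]]

grid = np.linspace(X["snowpack"].min(), X["snowpack"].max(), num=50)
cp = ceteris_paribus(model, X.iloc[0], "snowpack", grid)
```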
christophmolnar.bsky.social
2/ My approach:
✅ XGBoost ensemble, quantile loss
✅ SHAP for explainability + custom waterfall plots + ceteris paribus plots
✅ Conformal prediction to fix interval coverage (see sketch below)
✅ Auto-generated reports with Quarto
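A rough sketch of how the quantile-loss and conformal-prediction pieces fit together (conformalized quantile regression). Scikit-learn's gradient boosting stands in for the XGBoost ensemble here, and all data variables (X_train, y_train, X_test, ...) are placeholders, not the competition code.

```python
# Conformalized quantile regression sketch; sklearn stands in for the XGBoost quantile ensemble.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

alpha = 0.2  # target 80% prediction intervals
X_fit, X_cal, y_fit, y_cal = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_fit, y_fit)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_fit, y_fit)

# Conformity scores: how far calibration targets fall outside the raw quantile interval.
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
n = len(scores)
q_level = min(np.ceil((1 - alpha) * (n + 1)) / n, 1.0)
q = np.quantile(scores, q_level, method="higher")

# Calibrated interval: widen (or shrink) both ends by the score quantile.
lower = lo.predict(X_test) - q
upper = hi.predict(X_test) + q
```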
christophmolnar.bsky.social
1/ Years ago, I went full-time into writing & cut back on ML practice. At some point, I felt like an impostor, writing about ML but no longer practicing. This water supply forecasting competition on DrivenData (500k prize pool) was a way back in.
christophmolnar.bsky.social
A year ago, I took a risk & spent quite some time on an ML competition. It paid off: I won 4th place overall & 1st in explainability!

Here's a summary of the journey, challenges, & key insights from my winning solution (water supply forecasting)
christophmolnar.bsky.social
"Deprecated" was maybe the wrong word. It's no longer the default in the shap package; there are faster alternatives.
christophmolnar.bsky.social
The connection between SHAP and LIME only holds when we represent features differently for LIME and use a different weighting function.
My take is that, while interesting, the connection can be misleading, since SHAP and the original LIME are very different, as you also say.
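For anyone curious what that different weighting function looks like: the key adaptation in KernelSHAP is to swap LIME's locality kernel for the Shapley kernel, so the weighted linear regression recovers Shapley values. A small sketch of the kernel itself, not the shap package's implementation:

```python
# Shapley kernel from the KernelSHAP formulation; sketch, not the shap package's code.
from math import comb

def shapley_kernel_weight(num_features: int, coalition_size: int) -> float:
    """Weight pi(z') = (M - 1) / (C(M, k) * k * (M - k)) for coalitions with 0 < k < M."""
    m, k = num_features, coalition_size
    if k == 0 or k == m:
        # Empty and full coalitions get infinite weight; in practice they are
        # enforced as constraints on the regression instead.
        return float("inf")
    return (m - 1) / (comb(m, k) * k * (m - k))
```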
christophmolnar.bsky.social
The original SHAP paper has been cited over 30k times.

The paper showed that attribution methods, like LIME and LRP, compute Shapley values (with some adaptations).

The paper also introduced estimation methods for Shapley values, like KernelSHAP, which today is deprecated.
christophmolnar.bsky.social
To this day, the Interpretable Machine Learning book is still my most impactful project. But as time went on, I dreaded working on it. Fortunately, I found the motivation again and I'm working on the 3rd edition. 😁

Read more here:
Why I almost stopped working on Interpretable Machine Learning
7 years ago I started writing the book Interpretable Machine Learning.
buff.ly