@vahidbalazadeh.bsky.social
Worried about reliability?

CausalPFN has built-in calibration and can produce reliable estimates even for datasets that fall outside its pretraining prior.

Try it using: pip install causalpfn

Made with ❤️ for better causal inference
[7/7]

#CausalInference #ICML2025
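
Here's a minimal usage sketch. Fair warning: the import and call names below are illustrative assumptions, not the package's confirmed API; check the GitHub README for the real interface.

```python
# Illustrative sketch only: the causalpfn entry point and method names are
# assumptions, not the confirmed API -- see the repo README.
import numpy as np
from causalpfn import CausalPFN  # hypothetical import

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                 # covariates
T = rng.binomial(1, 0.5, size=500)            # randomized binary treatment
Y = X[:, 0] + 2.0 * T + rng.normal(size=500)  # outcome; true ATE = 2

model = CausalPFN()                           # pretrained; no fitting step
cate = model.estimate_cate(X, T, Y)           # hypothetical call: in-context estimation
print(cate.mean())                            # should land near 2
```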
When does it work?

Our theory shows that the posterior distribution of causal effects is consistent if and only if the pretraining data includes only identifiable causal structures.

👉 We show how to design the prior carefully, one of the key differences between our work and predictive PFNs. [6/7]
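
To make "identifiable by construction" concrete, here's a toy sketch of such a prior (an illustrative data-generating process, not the paper's exact one): treatment is assigned from a propensity that depends only on observed covariates, so the effect is recoverable from (X, T, Y).

```python
# Toy prior that only emits identifiable structures: ignorability holds by
# construction because the propensity uses only OBSERVED covariates.
# The DGP below is illustrative, not the paper's actual prior.
import numpy as np

def sample_identifiable_dataset(rng, n=256, d=4):
    X = rng.normal(size=(n, d))
    propensity = 1 / (1 + np.exp(-X @ rng.normal(size=d)))  # observed X only
    T = rng.binomial(1, propensity)
    tau = np.tanh(X @ rng.normal(size=d))         # random CATE function
    Y = X @ rng.normal(size=d) + tau * T + rng.normal(size=n)
    return X, T, Y, tau                           # tau = supervision target

rng = np.random.default_rng(0)
X, T, Y, tau = sample_identifiable_dataset(rng)
```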
Real-world uplift modelling:

CausalPFN works out of the box on real-world data. On 5 real RCTs in marketing (Hillstrom, Criteo, Lenta, etc.), it outperforms baselines like X-/S-/DA-Learners on policy evaluation (Qini score). [5/7]
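
For reference, the Qini coefficient itself is easy to compute; this is the standard definition (not CausalPFN-specific):

```python
# Standard Qini computation: rank units by predicted uplift and accumulate
# the incremental outcome of targeting the top-k.
import numpy as np

def qini_curve(uplift_pred, y, t):
    order = np.argsort(-uplift_pred)              # highest predicted uplift first
    y, t = y[order].astype(float), t[order].astype(float)
    n_t, n_c = np.cumsum(t), np.cumsum(1 - t)     # treated/control counts in top-k
    y_t, y_c = np.cumsum(y * t), np.cumsum(y * (1 - t))
    ratio = np.divide(n_t, n_c, out=np.zeros_like(n_t), where=n_c > 0)
    return y_t - y_c * ratio                      # incremental gain at each cutoff

def qini_coefficient(uplift_pred, y, t):
    q = qini_curve(uplift_pred, y, t)
    baseline = np.linspace(0.0, q[-1], len(q))    # targeting at random
    return (q - baseline).mean()                  # area between curve and baseline
```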
Benchmarks:

On IHDP, ACIC, Lalonde:
– Best avg. rank across many tasks
– Faster than all baselines
– No tuning needed, unlike the baselines (which were tuned via cross-validation)
[4/7]
Why does it matter?

Causal inference traditionally needs domain expertise + hyperparameter tuning across dozens of estimators. CausalPFN flips this paradigm: we pay the cost once (at pretraining), then it’s ready to use out-of-the-box! [3/7]
What is it?

CausalPFN turns causal effect estimation into a supervised learning problem: a transformer trained on millions of simulated datasets learns to map directly from data to treatment-effect distributions. At test time, no fine-tuning or manual estimator selection is required. [2/7]
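
For intuition, here's a minimal toy version of that recipe in PyTorch (the architecture and simulator are illustrative stand-ins, not the paper's): each training example is a whole simulated dataset, and the model is supervised with the simulator's ground-truth effects.

```python
import torch
import torch.nn as nn

class TinyCausalPFN(nn.Module):
    """Toy in-context effect estimator: reads (x, t, y) rows, outputs per-row CATE."""
    def __init__(self, d_in, d_model=64):
        super().__init__()
        self.embed = nn.Linear(d_in + 2, d_model)   # row = [x, t, y]
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, X, T, Y):
        rows = torch.cat([X, T.unsqueeze(-1), Y.unsqueeze(-1)], dim=-1)
        return self.head(self.encoder(self.embed(rows))).squeeze(-1)

model = TinyCausalPFN(d_in=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):                             # millions of datasets in practice
    X = torch.randn(8, 256, 4)                      # batch of simulated datasets
    tau = torch.tanh(X.sum(-1))                     # ground-truth CATE (toy DGP)
    T = torch.bernoulli(torch.full((8, 256), 0.5))
    Y = X.sum(-1) + tau * T + torch.randn(8, 256)
    loss = ((model(X, T, Y) - tau) ** 2).mean()     # supervised on true effects
    opt.zero_grad(); loss.backward(); opt.step()
```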
🚨 Introducing CausalPFN, a foundation model trained on simulated data for in-context causal effect estimation, based on prior-data fitted networks (PFNs). Joint work with Hamid Kamkari, Layer6AI & @rahulgk.bsky.social 🧵[1/7]

📝 arxiv.org/abs/2506.07918
🔗 github.com/vdblm/Causal...
🗣️Oral@ICML SIM workshop
To do so, we consider all prior distributions over the unobserved factors (e.g., the distribution over each arm's mean reward) that align with the expert data. We then choose the maximum-entropy (least informative) prior and apply posterior sampling to guide exploration. (4/5)
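
Here's a toy sketch of that recipe (it also folds in the restriction to expert-covered arms from the next post below). Gaussian arms, known noise, and moment-matched Gaussians as the max-entropy choice are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
K, K_expert = 1_000, 10                 # stand-in for a huge bandit; expert played 10 arms
true_means = rng.normal(size=K)
expert_data = {a: true_means[a] + rng.normal(scale=0.5, size=20)
               for a in range(K_expert)}

# Max-entropy prior consistent with the expert data: for each arm the expert
# touched, the Gaussian matching its observed mean/std (max entropy given
# those moments). Untouched arms are excluded from exploration entirely.
post_mu = np.array([expert_data[a].mean() for a in range(K_expert)])
post_s = np.array([expert_data[a].std() + 1e-3 for a in range(K_expert)])

for step in range(200):                              # posterior (Thompson) sampling
    a = int(np.argmax(rng.normal(post_mu, post_s)))  # sample a model, act greedily
    r = true_means[a] + rng.normal(scale=0.5)
    prec = 1 / post_s[a] ** 2 + 1 / 0.5 ** 2         # conjugate update, known noise 0.5
    post_mu[a] = (post_mu[a] / post_s[a] ** 2 + r / 0.5 ** 2) / prec
    post_s[a] = np.sqrt(1 / prec)
```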
Online exploration can eventually identify the unobserved factors, but only through trial and error. Instead, we use expert data to limit the exploration space: in a billion-armed bandit where the expert data spans only the first ten actions, the learner should explore only those ten arms. (3/5)
Unobserved confounding factors affect the expert policy in ways the learner cannot observe; an important example is experts acting on privileged information. Naive imitation collapses the expert's behaviour into a single aggregated policy per observed state and fails to generalize. (2/5)
How can we use offline expert data with unobserved confounding to guide exploration in RL? Our approach: learn prior distributions from the expert data and follow posterior sampling.

Come to our poster #NeurIPS2024 today to learn more!

🗓️ Thu 12 Dec 4:30 - 7 pm PST
📍 West Ballroom A-D #6708

(1/5)