Ali Tajer
@atajer.bsky.social
Professor @RPI
Statistical inference, ML, Info theory

web: https://www.isg-rpi.com
And finally this super cool work — intervention-based learning of mixtures of DAGs, which are more powerful in abstracting the complexities of causal systems. We establish the usual suspects in the results: necessary & sufficient conditions for mixture recovery and an algorithm.
3) 𝗖𝗮𝘂𝘀𝗮𝗹 𝗱𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝘆 𝗳𝗿𝗼𝗺 𝗺𝗶𝘅𝘁𝘂𝗿𝗲𝘀 is fundamentally more challenging than learning a single DAG. We look into using interventions for causal discovery from a mixture of DAGs.

Paper: arxiv.org/abs/2406.08666
Poster: West Ballroom A-D #5006 Thu 4.30pm
Interventional Causal Discovery in a Mixture of DAGs
Causal interactions among a group of variables are often modeled by a single causal graph. In some domains, however, these interactions are best described by multiple co-existing causal graphs, e.g., ...
arxiv.org
December 10, 2024 at 2:59 PM
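For intuition about the setting, a minimal sketch under my own toy assumptions (not code from the paper): each sample is drawn from one of several co-existing linear-SEM DAGs chosen at random, and the mixture label is hidden from the learner, which is what makes discovery harder than the single-DAG case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two co-existing causal models over (X0, X1, X2), each a linear SEM
# X = B^T X + noise, where B[i, j] is the weight of edge Xi -> Xj.
# DAG 1: X0 -> X1 -> X2        DAG 2: X0 -> X2 <- X1
B1 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 0.8],
               [0.0, 0.0, 0.0]])
B2 = np.array([[0.0, 0.0, 0.5],
               [0.0, 0.0, -1.2],
               [0.0, 0.0, 0.0]])

def sample_linear_sem(B, n):
    """Draw n samples (rows) from X = B^T X + eps for an acyclic weight matrix B."""
    d = B.shape[0]
    eps = rng.normal(size=(n, d))
    # Row form: x = x B + eps  =>  x = eps (I - B)^{-1}
    return eps @ np.linalg.inv(np.eye(d) - B)

def sample_mixture(n, p=0.5):
    """Each observation comes from DAG 1 with probability p, otherwise from DAG 2."""
    from_dag1 = rng.random(n) < p
    X = np.where(from_dag1[:, None],
                 sample_linear_sem(B1, n),
                 sample_linear_sem(B2, n))
    return X, from_dag1  # the learner never observes `from_dag1`

X, _ = sample_mixture(10_000)
print(np.round(np.corrcoef(X, rowvar=False), 2))  # no single DAG explains these correlations
```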
In addition to Burak’s work, Zirui will also present how to drop the assumption of knowing the graph topology altogether — a major challenge in causal bandits that had been understood only for do interventions. Zirui’s work addresses the general soft-interventions setting #causality #bandits
4) 𝗖𝗮𝘂𝘀𝗮𝗹 𝗯𝗮𝗻𝗱𝗶𝘁𝘀: I'll also present our JMLR paper on causal bandits for linear SEMs (thanks to journal-to-conference track!)

Paper: arxiv.org/abs/2208.12764
Poster: West Ballroom A-D #5000 Wed 4.30pm

Joint work w/ @atajer.bsky.social, Karthikeyan Shanmugam, and Prasanna Sattigeri
Causal Bandits for Linear Structural Equation Models
This paper studies the problem of designing an optimal sequence of interventions in a causal graphical model to minimize cumulative regret with respect to the best intervention in hindsight. This is, ...
arxiv.org
December 10, 2024 at 2:58 PM
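To make the regret objective concrete, here is a minimal sketch under toy assumptions (made-up weights, and not the paper's algorithm): each arm is a soft intervention that reweights edges of a linear SEM, the reward is a downstream node, and cumulative regret is measured against the best intervention in hindsight. Arm selection here is plain UCB1; the paper instead exploits the SEM structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear SEM over (X0, X1, X2): X = B^T X + eps, reward = X2.
B_obs = np.array([[0.0, 0.6, 0.0],
                  [0.0, 0.0, 0.9],
                  [0.0, 0.0, 0.0]])

def pull(B):
    """One round under weight matrix B: sample the SEM once, return the reward node X2."""
    eps = 1.0 + rng.normal(size=3)           # nonzero-mean noise so arms differ in mean reward
    x = np.linalg.solve(np.eye(3) - B.T, eps)
    return x[2]

# Arms = candidate soft interventions, i.e., alternative edge weights
# (a hard/do intervention would instead clamp a node to a constant).
arms = [
    B_obs,                                                                   # observational "arm"
    B_obs * np.array([[1.0, 2.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]),   # boost X0 -> X1
    B_obs * np.array([[1.0, 1.0, 1.0], [1.0, 1.0, 1.5], [1.0, 1.0, 1.0]]),   # boost X1 -> X2
]

T = 2000
counts, sums, rewards = np.zeros(len(arms)), np.zeros(len(arms)), []
for t in range(T):
    # Plain UCB1 over the interventions; a structured causal-bandit method would
    # instead use the SEM to share information across arms.
    safe = np.maximum(counts, 1)
    ucb = np.where(counts == 0, np.inf, sums / safe + np.sqrt(2 * np.log(t + 1) / safe))
    a = int(np.argmax(ucb))
    r = pull(arms[a])
    counts[a] += 1; sums[a] += r; rewards.append(r)

# Cumulative regret vs. the best intervention in hindsight (Monte Carlo estimate).
means = [np.mean([pull(B) for _ in range(5000)]) for B in arms]
print("cumulative regret ≈", T * max(means) - sum(rewards))
```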
Burak & Emre will be presenting two papers on causal representation learning: (1) finite-sample guarantees (identifiability & algorithmic) and (2) unknown, multi-target interventions. Further evidence that our score-based method is highly versatile and apt for diverse settings.

I'll be at #NeurIPS2024 (Dec. 10–15) to present our papers on causal representation learning, causal discovery, and causal bandits (see thread). Looking to meet old and new friends and chat about causality and representation learning.
December 10, 2024 at 1:33 AM
Rolling rebuttals are highly inefficient and ineffective, given how minimal the eventual rating changes are!
Let's fix peer review by locking all authors and reviewers in a 24/7 live Discord channel where they discuss details until the One True Outcome™ for each paper is discovered and they all join hands to praise the glory that is fully objective scientific progress and we end up with a 25% accept rate.
November 26, 2024 at 3:20 PM