Mahdi Haghifam
@mahdihaghifam.bsky.social
100 followers 130 following 8 posts
Researcher in ML and Privacy. PhD @UofT & @VectorInst. previously Research Intern @Google and @ServiceNowRSRCH https://mhaghifam.github.io/mahdihaghifam/
Reposted by Mahdi Haghifam
stein.ke
I'm excited to share this paper.

It answers a question that has bugged me for a long time: Can sample-and-aggregate be made more data-efficient? The answer is yes, but at a steep price in computational efficiency. See 🧵 for more details.

Also, it was a fun opportunity to add a new coauthor. 😁
differentialprivacy.org
Privately Estimating Black-Box Statistics

Günter F. Steinke, Thomas Steinke

http://arxiv.org/abs/2510.00322

Standard techniques for differentially private estimation, such as Laplace or
Gaussian noise addition, require guaranteed bounds on the sensitivity of the
estimator in question. But such sensitivity bounds are often large or simply
unknown. Thus we seek differentially private methods that can be applied to
arbitrary black-box functions. A handful of such techniques exist, but all are
either inefficient in their use of data or require evaluating the function on
exponentially many inputs. In this work we present a scheme that trades off
between statistical efficiency (i.e., how much data is needed) and oracle
efficiency (i.e., the number of evaluations). We also present lower bounds
showing the near-optimality of our scheme.
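For context on the baseline the abstract contrasts with: noise-addition mechanisms need a known sensitivity bound, which is typically enforced by clipping. A minimal sketch of the standard Laplace mechanism for a bounded mean (the function name and parameters here are illustrative, not from the paper):

```python
import math
import random

def laplace_mean(data, lo, hi, epsilon):
    """Hypothetical eps-DP mean: clip to [lo, hi], then add Laplace noise.

    Clipping supplies the sensitivity bound the abstract refers to:
    changing one of the n records moves the clipped mean by at most
    (hi - lo) / n.
    """
    n = len(data)
    clipped = [min(max(x, lo), hi) for x in data]
    sensitivity = (hi - lo) / n
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise via the inverse-CDF method
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(clipped) / n + noise
```

When no such sensitivity bound is available (or clipping would bias the estimate), this approach breaks down, which is exactly the black-box setting the paper targets.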
Reposted by Mahdi Haghifam
pierrealquier.bsky.social
The 3rd chapter of the "post-Bayes" seminar, focused on PAC-Bayes bounds, started yesterday. I gave a very introductory talk, which is already on YouTube.

www.youtube.com/watch?v=hT-d...

There will be 5 more talks in this chapter, see the full schedule there: postbayes.github.io/seminar/
Pierre Alquier (ESSEC) - PAC Bayes: introduction and overview
YouTube video by Post-Bayes seminar
www.youtube.com
Reposted by Mahdi Haghifam
nicolaspapernot.bsky.social
Thank you to Samsung for the AI Researcher of 2025 award! I'm privileged to collaborate with many talented students & postdoctoral fellows @utoronto.ca @vectorinstitute.ai . This would not have been possible without them!

It was a great honour to receive the award from @yoshuabengio.bsky.social !
mahdihaghifam.bsky.social
You will find chapter 2 of this book interesting

Introduction to Matrix Analytic Methods in Stochastic Modeling
Reposted by Mahdi Haghifam
thejonullman.bsky.social
🚨 I am co-chairing ALT 2026 this year with Matus Telgarsky. The submission server is open so please submit your best work!

Deadline: Oct 2, 2025 AoE
Conference: Feb 23-26, 2026 in Toronto!
Website: algorithmiclearningtheory.org/alt2026/
ALT 2026 | ALT 2026 Homepage
The 37th International Conference on Algorithmic Learning Theory
algorithmiclearningtheory.org
Reposted by Mahdi Haghifam
ccanonne.github.io
It took me a while, but I (finally) wrote a "short" (erm) note on the "polynomial+moments method" to prove testing or indistinguishability sample complexity lower bounds. Including the infamous Ω(k/log k) tolerant uniformity testing one.

Comments and feedback welcome!

📝 github.com/ccanonne/pro...
One framed box from the note, which states that matching enough moments of a pair of suitable univariate random variables implies a sample complexity lower bound for the yes and no instances they correspond to.
Reposted by Mahdi Haghifam
gautamkamath.com
I wrote a post on how to connect with people (i.e., make friends) at CS conferences. These events can be intimidating, so here are some suggestions on how to navigate them

I'm late for #ICLR2025 #NAACL2025, but in time for #AISTATS2025 #ICML2025! 1/3
kamathematics.wordpress.com/2025/05/01/t...
Tips on How to Connect at Academic Conferences
I was a kinda awkward teenager. If you are a CS researcher reading this post, then chances are, you were too. How to navigate social situations and make friends is not always intuitive, and has to …
kamathematics.wordpress.com
Reposted by Mahdi Haghifam
stein.ke
Taking α→1 gives a triangle inequality for KL divergence. This can also be proved using my favourite lemma. 😁
Full LaTeX source: https://pastebin.com/mA6KjUJs

    \begin{proposition}[Triangle-like inequality for KL divergence]\label{prop:kl-triangle}
        Let $P$, $R$, and $Q$ be probability distributions with $P$ being absolutely continuous with respect to $R$ and $R$ being absolutely continuous with respect to $Q$.
        Let $\kappa \in (1,\infty)$.
        Then
        \[
            \dr{\text{KL}}{P}{Q} \le \frac{\kappa}{\kappa-1} \dr{\text{KL}}{P}{R} + \dr{\kappa}{R}{Q},
        \]
        where $\dr{\text{KL}}{P}{Q} := \ex{X \gets P}{\log(P(X)/Q(X))}$ denotes the KL divergence and\\$\dr{\kappa}{R}{Q} = \frac{1}{\kappa-1} \log \ex{X \gets R}{(R(X)/Q(X))^{\kappa-1}}$ denotes the R\'enyi divergence of order $\kappa$.
    \end{proposition}
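A quick numeric sanity check of the proposition on small discrete distributions (the distributions P, R, Q below are arbitrary examples chosen for illustration, not from the post):

```python
import math

# Arbitrary discrete distributions on {0, 1, 2}, named as in the proposition
P = [0.5, 0.3, 0.2]
R = [0.4, 0.4, 0.2]
Q = [0.3, 0.3, 0.4]

def kl(a, b):
    """KL divergence D_KL(a || b) for discrete distributions."""
    return sum(p * math.log(p / q) for p, q in zip(a, b) if p > 0)

def renyi(a, b, kappa):
    """Renyi divergence of order kappa:
    (1/(kappa-1)) * log E_{X~a}[(a(X)/b(X))^(kappa-1)]."""
    return math.log(sum(p * (p / q) ** (kappa - 1)
                        for p, q in zip(a, b))) / (kappa - 1)

# Check D_KL(P||Q) <= kappa/(kappa-1) * D_KL(P||R) + D_kappa(R||Q)
for kappa in (1.5, 2.0, 4.0):
    lhs = kl(P, Q)
    rhs = kappa / (kappa - 1) * kl(P, R) + renyi(R, Q, kappa)
    assert lhs <= rhs + 1e-12, (kappa, lhs, rhs)
```

As κ → ∞ the coefficient κ/(κ−1) → 1, recovering a genuine triangle-like bound with the max-divergence on the second term.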
Reposted by Mahdi Haghifam
stein.ke
Excellent post from (my former😢 colleague) Nicholas Carlini on the differences between copyright law & privacy research.

In particular, from a privacy perspective, "was training data memorized?" is a yes/no question; we aren't trying to quantify how much data was memorized beyond "some" vs "none".
What my privacy papers (don't) have to say about copyright and generative AI
My work on privacy-preserving machine learning is often cited by lawyers arguing for or against how generative AI models violate copyright. This maybe isn't the right work to be citing.
nicholas.carlini.com
mahdihaghifam.bsky.social
Nice! Could you share an example of their application in probability-related problems, or in other areas?
Reposted by Mahdi Haghifam
mraginsky.bsky.social
It’s finally out — and I got to blurb it!
Reposted by Mahdi Haghifam
gautamkamath.com
Happy new year! Guest post on my blog by Abhradeep Thakurta, featuring his perspective on interviewing/hiring for faculty and industry research positions in CS/ML. I add some of my own comments at the end. Comments and perspectives welcome!

kamathematics.wordpress.com/2025/01/02/g...
mahdihaghifam.bsky.social
the reviewer wants to write a summary without reading the paper and you made their job “very” difficult by not having a conclusion section.
mahdihaghifam.bsky.social
Had a great time presenting my work at this fantastic workshop! Thanks to the organizers 🙌
jasonhartline.bsky.social
The 2024 Junior Theorists Workshop is Thursday (at Northwestern) and Friday (at TTIC), Dec 5-6. There is a stellar lineup of outstanding junior computer science theorists who are making the future happen. Come!!

theory.cs.northwestern.edu/junior-theor...
Junior Theorists Workshop 2024 – Northwestern CS Theory Group
theory.cs.northwestern.edu
mahdihaghifam.bsky.social
I’ll be at #NeurIPS2024 this week! Looking forward to presenting my joint work with Thomas Steinke (@stein.ke) and Jon Ullman (@thejonullman.bsky.social)

NeurIPS page with video: neurips.cc/virtual/2024...

Link to arxiv: arxiv.org/abs/2406.07407
mahdihaghifam.bsky.social
Awesome 🎩
still haven’t found a good flight, will msg you if I can be there on Wednesday🙌
mahdihaghifam.bsky.social
it seems the paper is behind a paywall :(
Can you tell me a bit about the results?