Anshuman Suri
@iamgroot42.bsky.social
Postdoc @ Khoury | Previously Ph.D. @ UVA (David Evans) | IIITD Alum | Interested in machine learning privacy & security.
iamgroot42.bsky.social
5/ But the broader message? It's time to give 'parameter access' another serious look in privacy research 🔬

Find more details in the paper (accepted to TMLR), w/ Xiao & Dave

📜 openreview.net/pdf?id=fmKJf...
💻 github.com/iamgroot42/a...
iamgroot42.bsky.social
4/ The big open question remains: how close are optimal black-box attacks to this theoretical optimum? The gap might be negligible, suggesting black-box methods suffice—or significant, showing parameter access offers better empirical upper bounds 🤔
iamgroot42.bsky.social
3/ Our work challenges this assumption head-on. By carefully analyzing SGD dynamics, we prove that optimal membership inference requires white-box access to model parameters. Our Inverse Hessian Attack (IHA) serves as a proof of concept that parameter access helps!
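For intuition, here is a rough PyTorch sketch of the ingredients parameter access gives an attacker: the per-example loss, the per-example gradient, and an inverse-Hessian-vector product over the model parameters (approximated here with damped conjugate gradients). The way these pieces are combined at the end is a placeholder rather than the exact IHA score from the paper, and helper names like whitebox_membership_score and ihvp_cg are made up for this sketch.

# Rough sketch only: per-example loss, per-example gradient, and an
# inverse-Hessian-vector product (iHVP) over model parameters.
# The final combination is a placeholder, not the exact IHA score.
import torch
from torch.autograd import grad
from torch.nn.utils import parameters_to_vector


def flat_grad(loss, params, create_graph=False):
    # Gradient of `loss` w.r.t. all parameters, flattened into one vector.
    g = grad(loss, params, create_graph=create_graph)
    return torch.cat([gi.reshape(-1) for gi in g])


def hvp(ref_loss_fn, params, v, damping=1e-3):
    # (H + damping * I) @ v via double backprop on a freshly computed reference loss.
    g = flat_grad(ref_loss_fn(), params, create_graph=True)
    Hv = flat_grad(g @ v, params)
    return Hv + damping * v


def ihvp_cg(ref_loss_fn, params, b, damping=1e-3, iters=50, tol=1e-6):
    # Approximate (H + damping * I)^{-1} b with conjugate gradients,
    # using only Hessian-vector products (H is never formed explicitly).
    x = torch.zeros_like(b)
    r = b.clone()
    p = r.clone()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(ref_loss_fn, params, p, damping)
        alpha = rs / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x


def whitebox_membership_score(model, criterion, x, y, ref_batch, damping=1e-3):
    # Placeholder white-box score combining the loss, the parameters, and an iHVP
    # of the candidate point's gradient. Illustrative only; see the paper for IHA.
    params = [p for p in model.parameters() if p.requires_grad]
    theta = parameters_to_vector(params).detach()

    loss_z = criterion(model(x), y)                  # per-example loss
    g_z = flat_grad(loss_z, params).detach()         # per-example gradient

    xb, yb = ref_batch
    ref_loss_fn = lambda: criterion(model(xb), yb)   # loss defining the Hessian

    correction = theta @ ihvp_cg(ref_loss_fn, params, g_z, damping=damping)
    return (loss_z.detach() - correction).item()

Note that the Hessian is never materialized: conjugate gradients only needs Hessian-vector products, so each query costs a handful of extra backward passes rather than anything quadratic in the parameter count.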
iamgroot42.bsky.social
2/ Prior work (e.g., proceedings.mlr.press/v97/sablayro...) suggests black-box access is optimal for membership inference, assuming SGLD as the learning algorithm. But these assumptions break down for models trained with SGD.
iamgroot42.bsky.social
1/ Most membership inference attacks (MIAs) have seemingly converged to black-box settings, driven by empirical evidence and theoretical folklore suggesting black-box access was optimal. But what if this assumption missed something critical? 😨

tl;dr? It did 🧵
iamgroot42.bsky.social
Temporally shifted data splits in membership inference evaluations can be misleading ⚠️ Be cautious when interpreting benchmarks built on such splits!
pratyushmaini.bsky.social
1/6 A lot of us are grappling with peer review these days, but its worst manifestation is when prestigious conference awards overlook critical flaws.

Case in point: #EMNLP2024 ’s Best Paper Award.

@iamgroot42.bsky.social and I wrote a blog post on what went wrong: www.anshumansuri.com/blog/2024/ca... 🧵