Nicolas Papernot
@nicolaspapernot.bsky.social
590 followers 230 following 24 posts
Security and Privacy of Machine Learning at UofT, Vector Institute, and Google 🇨🇦🇫🇷🇪🇺 Co-Director of Canadian AI Safety Institute (CAISI) Research Program at CIFAR. Opinions mine
nicolaspapernot.bsky.social
Congratulations to my fellow awardees Rose Yu (UCSD) and Lerrel Pinto (NYU)!

I enjoyed learning about the work of Yoshua, Rose, and Lerrel at the Samsung AI Forum earlier this week.

news.samsung.com/global/samsu...
nicolaspapernot.bsky.social
Thank you to Samsung for the AI Researcher of 2025 award! I'm privileged to collaborate with many talented students & postdoctoral fellows at @utoronto.ca and @vectorinstitute.ai. This would not have been possible without them!

It was a great honour to receive the award from @yoshuabengio.bsky.social !
Reposted by Nicolas Papernot
rieck.mlsec.org
Three weeks to go until the SaTML 2026 deadline! ⏰ We look forward to your work on security, privacy, and fairness in AI.

🗓️ Deadline: Sept 24, 2025

We have also updated our Call for Papers with a statement on LLM usage, check it out:

👉 satml.org/call-for-pap...

@satml.org
IEEE Conference on Secure and Trustworthy Machine Learning
Technical University of Munich, Germany
March 23–25, 2026
nicolaspapernot.bsky.social
Congratulations Maksym, this is a great place to start your research group! Looking forward to following your work
nicolaspapernot.bsky.social
Thank you to @schmidtsciences.bsky.social for funding our lab's work on cryptographic approaches for verifiable guarantees in ML systems and for connecting us to other groups working on these questions!
schmidtsciences.bsky.social
How can we build AI systems the world can trust?

AI2050 Early Career Fellow Nicolas Papernot explores how cryptographic audits and verifiability can make machine learning more transparent, accountable, and aligned with societal values.

Read the full perspective: buff.ly/JjTnRjm
Community Perspective - Nicolas Papernot - AI2050
As legislation and policy for AI is developed, we are making decisions about the societal values AI systems should uphold. But how do we trust that AI systems are abiding by human values such as…
ai2050.schmidtsciences.org
Reposted by Nicolas Papernot
stvrb.bsky.social
📄 Selective Prediction Via Training Dynamics
Paper ➡️ arxiv.org/abs/2205.13532
Workshop ➡️ 3rd Workshop on High-dimensional Learning Dynamics (HiLD)
Poster ➡️ West Meeting Room 118-120 on Sat 19 Jul 10:15 a.m. — 11:15 a.m. & 4:45 p.m. — 5:30 p.m.
Reposted by Nicolas Papernot
stvrb.bsky.social
📄 Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings (✨ oral paper ✨)
Paper ➡️ arxiv.org/abs/2505.22356
Poster ➡️ E-504 on Thu 17 Jul 4:30 p.m. — 7 p.m.
Oral Presentation ➡️ West Ballroom C on Thu 17 Jul 4:15 p.m. — 4:30 p.m.
Reposted by Nicolas Papernot
stvrb.bsky.social
📄 Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
TL;DR ➡️ We show that a model owner can artificially introduce uncertainty, and we provide a detection mechanism.
Paper ➡️ arxiv.org/abs/2505.23968
Poster ➡️ E-1002 on Wed 16 Jul 11 a.m. — 1:30 p.m.
nicolaspapernot.bsky.social
Congratulations Yoshua!! It is more than deserved.
Reposted by Nicolas Papernot
eprint.ing.bot
Secure Noise Sampling for Differentially Private Collaborative Learning (Olive Franzese, Congyu Fang, Radhika Garg, Somesh Jha, Nicolas Papernot, Xiao Wang, Adam Dziedzic) ia.cr/2025/1025
Abstract. Differentially private stochastic gradient descent (DP-SGD) trains machine learning (ML) models with formal privacy guarantees for the training set by adding random noise to gradient updates. In collaborative learning (CL), where multiple parties jointly train a model, noise addition occurs either (i) before or (ii) during secure gradient aggregation. The first option is deployed in distributed DP methods, which require greater amounts of total noise to achieve security, resulting in degraded model utility. The second approach preserves model utility but requires a secure multiparty computation (MPC) protocol. Existing methods for MPC noise generation require tens to hundreds of seconds of runtime per noise sample because of the number of parties involved. This makes them impractical for collaborative learning, which often requires thousands or more samples of noise in each training step.

We present a novel protocol for MPC noise sampling tailored to the collaborative learning setting. It works by constructing an approximation of the distribution of interest, which can be efficiently sampled by a series of table lookups. Our method achieves significant runtime improvements and requires much less communication compared to previous work, especially at higher numbers of parties. It is also highly flexible: while previous MPC sampling methods tend to be optimized for specific distributions, we prove that our method can generically sample noise from statistically close approximations of arbitrary discrete distributions. This makes it compatible with a wide variety of DP mechanisms. Our experiments demonstrate the efficiency and utility of our method applied to a discrete Gaussian mechanism for differentially private collaborative learning. For 16 parties, we achieve a runtime of 0.06 seconds and 11.59 MB total communication per sample, a 230× runtime improvement and 3× less communication compared to the prior state-of-the-art for sampling from the discrete Gaussian distribution in MPC.
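
A minimal, single-party Python sketch of the table-lookup idea described above: tabulate the CDF of a truncated discrete Gaussian once, then draw each noise sample with one uniform draw and one table lookup. The function names, sigma, and truncation bound are illustrative assumptions; the paper's protocol performs these lookups inside an MPC protocol across parties, which this sketch does not attempt.

```python
# Illustrative sketch only (assumed parameters, not the paper's MPC protocol):
# approximate a discrete Gaussian by a truncated probability table, then
# sample from it via inverse-CDF table lookups.
import bisect
import math
import random

def build_discrete_gaussian_table(sigma=10.0, bound=None):
    """Tabulate the CDF of a discrete Gaussian truncated to [-bound, bound]."""
    if bound is None:
        bound = int(math.ceil(10 * sigma))  # assumed truncation; tail mass beyond is negligible
    support = list(range(-bound, bound + 1))
    weights = [math.exp(-(z * z) / (2.0 * sigma * sigma)) for z in support]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return support, cdf

def sample(support, cdf, rng=random):
    """One sample = one uniform draw plus one table lookup (binary search)."""
    u = rng.random()
    i = min(bisect.bisect_left(cdf, u), len(support) - 1)  # clamp for float round-off
    return support[i]

if __name__ == "__main__":
    support, cdf = build_discrete_gaussian_table(sigma=10.0)
    # Integer noise values, e.g. what would be added to clipped gradient sums in DP-SGD.
    print([sample(support, cdf) for _ in range(5)])
```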
nicolaspapernot.bsky.social
Excited to share the first batch of research projects funded through the Canadian AI Safety Institute's research program at CIFAR!

The projects will tackle topics ranging from misinformation to safety in AI applications to scientific discovery.

Learn more: cifar.ca/cifarnews/20...
Reposted by Nicolas Papernot
stvrb.bsky.social
📢 New ICML 2025 paper!

Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention

🤔 Think model uncertainty can be trusted?
We show that it can be misused—and how to stop it!
Meet Mirage (our attack💥) & Confidential Guardian (our defense🛡️).

🧵1/10
nicolaspapernot.bsky.social
If you are submitting to @ieeessp.bsky.social
this year, a friendly reminder that there is an abstract submission deadline this Thursday May 29 (AoE).

More details: sp2026.ieee-security.org/cfpapers.html
Reposted by Nicolas Papernot
asiabiega.bsky.social
As part of the theme Societal Aspects of Securing the Digital Society, I will be hiring PhD students and postdocs at #MPI-SP, focusing in particular on the computational and sociotechnical aspects of technology regulations and the governance of emerging tech. Get in touch if interested.
Reposted by Nicolas Papernot
nicolaspapernot.bsky.social
Congrats on what looks like an amazing event, Konrad!
nicolaspapernot.bsky.social
Very exciting! Congratulations to the organizing team on what looks like an amazing event!
Reposted by Nicolas Papernot
satml.org
👋 Welcome to #SaTML25! Kicking things off with opening remarks --- excited for a packed schedule of keynotes, talks and competitions on secure and trustworthy machine learning.
Reposted by Nicolas Papernot
utoronto.ca
Karina Vold says the rapid development of AI systems has left both philosophers & computer scientists grappling with difficult questions. #UofT 💻 uoft.me/bsp
nicolaspapernot.bsky.social
Congratulations again, Stephan, on this brilliant next step! Looking forward to what you will accomplish with @randomwalker.bsky.social & @msalganik.bsky.social!
stvrb.bsky.social
Starting off this account with a banger: In September 2025, I will be joining @princetoncitp.bsky.social at Princeton University as a Postdoc working with @randomwalker.bsky.social & @msalganik.bsky.social! I am very excited about this opportunity to continue my work on trustworthy/reliable ML! 🥳
nicolaspapernot.bsky.social
The Canadian AI Safety Institute (CAISI) Research Program at CIFAR is now accepting Expressions of Interest for Solution Networks in AI Safety under two themes:

* Mitigating the Safety Risks of Synthetic Content
* AI Safety in the Global South.

cifar.ca/ai/ai-and-so...
nicolaspapernot.bsky.social
I think the talk is being streamed but only internally within the MPI. I'm not sure if you still have access from your time there?
nicolaspapernot.bsky.social
I will be giving a talk at the MPI-IS @maxplanckcampus.bsky.social in Tübingen next week (March 12 @ 11am). The talk will cover my group's overall approach to trust in ML, with a focus on our work on unlearning and how to obtain verifiable guarantees of trust.

Details: is.mpg.de/events/speci...
nicolaspapernot.bsky.social
Great news! Congrats Xiao!