Konrad Rieck 🌈
@rieck.mlsec.org
Machine Learning and Security, Professor of Computer Science at TU Berlin
Our attack injects tiny perturbations into the measurements, causing GenCast, currently Google's best AI weather model, to predict false extreme events. The required changes are so small that they fall within the natural noise of observations and are hard to detect.

3/4
October 15, 2025 at 6:06 PM
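A toy sketch of the constraint described in the post above — touching only ~0.1% of observations while staying within the natural measurement noise. All names, shapes, and the noise level are illustrative assumptions; this is not the actual attack, which optimizes the perturbation against the model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a flat vector of satellite observations and an
# assumed standard deviation of their natural measurement noise.
obs = rng.normal(size=100_000).astype(np.float32)
sigma = 0.1  # assumed observation-noise level (illustrative)

# Constraint from the post: modify only ~0.1% of observations, and
# keep every change bounded by the natural noise of the measurements.
budget = int(0.001 * obs.size)                   # 0.1% of all observations
idx = rng.choice(obs.size, budget, replace=False)
delta = np.zeros_like(obs)
delta[idx] = rng.uniform(-sigma, sigma, budget)  # bounded perturbation

perturbed = obs + delta
assert np.count_nonzero(delta) <= budget         # sparsity constraint holds
assert np.abs(delta).max() <= np.float32(sigma)  # noise-level constraint holds
```

In the real attack, the nonzero entries of `delta` would be chosen by optimization to steer the forecast, rather than drawn at random.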
Some background: Current weather forecasts largely rely on observations from satellites 🛰️. Around 100 of them orbit Earth, operated by different countries. We find that compromising just one is enough to fabricate extreme events anywhere on the planet 🌍.

2/4
October 15, 2025 at 6:06 PM
AI predicts rain. We predict trouble!

Today, Erik presents a novel attack on Google's latest AI weather model at #CCS2025. By changing only 0.1% of the observations, the attack can fabricate or suppress the prediction of extreme events, from hurricanes 🌀 to heat waves 🔥.

1/4 @bifold.berlin
October 15, 2025 at 6:06 PM
Did AI folks not value your security insights or vice versa? Maybe you’re submitting your papers to the wrong conference.

@satml.org has you covered! We are eager to read your work on the security, privacy, and fairness of AI.

👉 satml.org/call-for-pap...
⏰ Deadline: Sep 24
September 19, 2025 at 9:01 AM
Got some hot research cooking? 🔥

The @satml.org paper deadline is just 9 days away. We are looking forward to your work on security, privacy, and fairness in machine learning.

👉 satml.org/call-for-pap...
⏰ Sep 24
September 15, 2025 at 8:50 AM
Three weeks to go until the SaTML 2026 deadline! ⏰ We look forward to your work on security, privacy, and fairness in AI.

🗓️ Deadline: Sept 24, 2025

We have also updated our Call for Papers with a statement on LLM usage; check it out:

👉 satml.org/call-for-pap...

@satml.org
September 3, 2025 at 1:42 PM
🚨 Got a great idea for an AI + Security competition?

@satml.org is now accepting proposals for its Competition Track! Showcase your challenge and engage the community.

👉 satml.org/call-for-com...
🗓️ Deadline: Aug 6
July 30, 2025 at 2:05 PM
Technically, we build on the non-associativity of floating-point arithmetic. When computing convolutions or matrix multiplications, the backends split data into blocks and process them in different orders, introducing slight deviations and exposing an attack surface.

2/4
July 17, 2025 at 7:55 AM
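The numerical effect behind this is easy to reproduce. A minimal NumPy sketch (values chosen for illustration, not taken from the paper):

```python
import numpy as np

# Floating-point addition is not associative: (a + b) + c and
# a + (b + c) can round differently.
a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(1.0)
left = (a + b) + c    # -> 1.0
right = a + (b + c)   # -> 0.0 (c is absorbed by the large magnitudes)

# The same effect appears when a backend splits a reduction into
# blocks: summing in a different order can yield a slightly
# different result, which a crafted input can push across a
# decision boundary.
x = np.random.default_rng(0).standard_normal(10_000).astype(np.float32)
seq = np.float32(0)
for v in x:                                      # strictly sequential order
    seq += v
blocked = x.reshape(100, 100).sum(axis=1).sum()  # block-wise order
print(left, right, seq, blocked)
```

The per-element deviations are tiny, but they are exactly the attack surface the post describes: an input optimized to sit on such a rounding boundary flips its prediction depending on the backend's reduction order.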
Today, Jonas presents a new type of adversarial example at @icmlconf.bsky.social!

We exploit subtle numerical differences between linear algebra backends and craft inputs that yield different predictions from the same model depending on the backend used 🤯 mlsec.org/docs/2025-ic...

1/4
July 17, 2025 at 7:55 AM
We’re happy to announce the Call for Competitions for
@satml.org

The competition track has been a highlight of SaTML, featuring exciting topics and strong participation. If you’d like to host one for SaTML 2026, visit:

👉 satml.org/call-for-com...
⏰ Deadline: Aug 6
July 7, 2025 at 10:00 AM
We're excited to announce the Call for Papers for SaTML 2026, the premier conference on secure and trustworthy machine learning @satml.org

We seek papers on secure, private, and fair learning algorithms and systems.

👉 satml.org/call-for-pap...
⏰ Deadline: Sept 24
July 1, 2025 at 1:18 PM
Great to be at @satml.org with several members of my team from @bifold.berlin and @tuberlin.bsky.social. We are having a blast with exciting discussions and talks on trustworthy AI! #SaTML25
April 10, 2025 at 9:32 AM
Full house at #SaTML25! Great to see so many from the secure and trustworthy machine learning community gathered in Copenhagen. @satml.org
April 9, 2025 at 8:26 AM
No plans for April 9–11 yet? Why not spend an amazing week in beautiful Copenhagen 🇩🇰, exploring cutting-edge research on trustworthy machine learning?

Join us at SaTML 2025, the premier conference on AI security, AI privacy, and AI fairness!

👉 satml.org/attend

@satml.org
March 3, 2025 at 3:03 PM
This work is an unusual collaboration of folks from adversarial learning and hardware security. It took some effort to design a dormant backdoor small enough to fit into an FPGA accelerator. In the end, just 30 parameter changes—0.069% of the model—were enough for success.

2/3
December 13, 2024 at 11:16 AM
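As a quick sanity check on the numbers above — the implied total parameter count is an inference from the post, not stated in it:

```python
# 30 changed parameters at 0.069% of the model implies the model size:
changed = 30
fraction = 0.069 / 100            # 0.069% as a fraction
implied_total = changed / fraction
print(round(implied_total))       # ~43,478 parameters in total
```

A model of a few tens of thousands of parameters is plausible for one deployed on an FPGA accelerator, where memory and logic are scarce.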
Is your GPU trustworthy? 🤔

Today, Julian presents our work on implanting machine learning backdoors in hardware at @acsacconf.bsky.social. Our backdoors reside within a hardware ML accelerator, manipulating models on the fly while remaining invisible from the outside.

mlsec.org/docs/2024-ac...

1/3
December 13, 2024 at 11:16 AM
No plans for April 9–11 yet? Why not spend a fantastic week in beautiful Copenhagen, exploring top research on trustworthy machine learning?

Registration for IEEE SaTML is now open: satml.org

We are also offering travel scholarships: satml.org/scholarships/
November 27, 2024 at 2:50 PM
🚨 We’re thrilled to announce the keynote speakers for SaTML 2025: Michael Veale (@michae.lv), Kamalika Chaudhuri (UCSD), and Matt Turek (DARPA).

👉 satml.org/keynotes/

Don’t miss out on #SaTML2025 in Copenhagen 🇩🇰, April 2025!
November 8, 2024 at 9:44 AM
Got some hot research cooking? 🔥

Two weeks until the SaTML paper deadline! We’re eager to see your work on secure, private, and fair machine learning, as well as any other aspect of machine learning system security.

👉 satml.org/participate-...
⏰ Deadline: Sep 18
September 5, 2024 at 2:51 PM
We’re excited to announce this year’s competitions for SaTML 2025! 🎉 Get ready for four fun challenges tackling prompt injection, data leakage, membership inference, and malware detection.

You can find all competitions and their websites here:
satml.org/competitions/
September 4, 2024 at 7:42 AM
Mark your calendars. We’re thrilled to announce the Call for Papers for the 3rd IEEE Conference on Secure and Trustworthy Machine Learning.

We seek papers on secure, private, and fair machine learning algorithms and systems.

👉 satml.org/participate-...
⏰ Deadline: Sep 18
August 5, 2024 at 9:30 AM