Maksym Andriushchenko
@maksym-andr.bsky.social
360 followers 270 following 23 posts
Faculty at ‪the ELLIS Institute Tübingen and Max Planck Institute for Intelligent Systems. Leading the AI Safety and Alignment group. PhD from EPFL supported by Google & OpenPhil PhD fellowships. More details: https://www.andriushchenko.me/
Posts Media Videos Starter Packs
Reposted by Maksym Andriushchenko
maksym-andr.bsky.social
Thank you so much, Nicolas! :)
maksym-andr.bsky.social
We believe getting this—some may call it "AGI"—right is one of the most important challenges of our time.

Join us on this journey!

11/11
maksym-andr.bsky.social
Taking this into account, we are only interested in studying methods that are general and scale with intelligence and compute. Everything that helps to advance their safety and alignment with societal values is relevant to us.

10/n
maksym-andr.bsky.social
Broader vision. Current machine learning methods are fundamentally different from what they used to be pre-2022. The Bitter Lesson summarized and predicted this shift very well back in 2019: "general methods that leverage computation are ultimately the most effective".

9/n
maksym-andr.bsky.social
... —literally anything that can be genuinely useful for other researchers and the general public.

8/n
maksym-andr.bsky.social
Research style. We are not necessarily interested in getting X papers accepted at NeurIPS/ICML/ICLR. We are interested in making an impact: this can be papers (and NeurIPS/ICML/ICLR are great venues), but also open-source repositories, benchmarks, blog posts, even social media posts ...

7/n
maksym-andr.bsky.social
For more information about research topics relevant to our group, please check the following documents:
- International AI Safety Report,
- An Approach to Technical AGI Safety and Security by DeepMind,
- Open Philanthropy’s 2025 RFP for Technical AI Safety Research.

6/n
maksym-andr.bsky.social
We're also interested in rigorous AI evaluations and informing the public about the risks and capabilities of frontier AI models. Additionally, we aim to advance our understanding of how AI models generalize, which is crucial for ensuring their steerability and reducing associated risks.

5/n
maksym-andr.bsky.social
Research group. We will focus on developing algorithmic solutions to reduce harms from advanced general-purpose AI models. We're particularly interested in alignment of autonomous LLM agents, which are becoming increasingly capable and pose a variety of emerging risks.

4/n
maksym-andr.bsky.social
Hiring. I'm looking for multiple PhD students: both those able to start in Fall 2025 and through centralized programs like CLS, IMPRS, and ELLIS (the deadlines are in November) to start in Spring–Fall 2026. I'm also searching for postdocs, master's thesis students, and research interns.

2/n
maksym-andr.bsky.social
🚨 Incredibly excited to share that I'm starting my research group focusing on AI safety and alignment at the ELLIS Institute Tübingen and Max Planck Institute for Intelligent Systems in September 2025! 🚨

1/n
maksym-andr.bsky.social
This is joint work with amazing collaborators: Thomas Kuntz, Agatha Duzan, Hao Zhao, Francesco Croce, Zico Kolter, and Nicolas Flammarion.

It will be presented as an oral at the WCUA workshop at ICML 2025!

Paper: arxiv.org/abs/2506.14866
Code: github.com/tml-epfl/os-...
GitHub - tml-epfl/os-harm: OS-Harm: A Benchmark for Measuring Safety of Computer Use Agents
OS-Harm: A Benchmark for Measuring Safety of Computer Use Agents - tml-epfl/os-harm
github.com
maksym-andr.bsky.social
Main findings based on frontier LLMs:
- They directly comply with _many_ deliberate misuse queries
- They are relatively vulnerable even to _static_ prompt injections
- They occasionally perform unsafe actions
maksym-andr.bsky.social
🚨Excited to release OS-Harm! 🚨

The safety of computer use agents has been largely overlooked.

We created a new safety benchmark based on OSWorld for measuring 3 broad categories of harm:
1. deliberate user misuse,
2. prompt injections,
3. model misbehavior.
maksym-andr.bsky.social
Paper link: josh-freeman.github.io/resources/ny....

This is joint work with amazing collaborators: Joshua Freeman, Chloe Rippe, and Edoardo Debenedetti.

🧵3/n
josh-freeman.github.io
maksym-andr.bsky.social
3. However, the memorized articles cited in the NYT lawsuit were clearly cherry-picked—random NYT articles have not been memorized.

4. We also provide a legal analysis of this case in light of our findings.

We will present this work at the Safe Gen AI Workshop at NeurIPS 2024 on Sunday.

🧵2/n
maksym-andr.bsky.social
🚨Excited to share our new work!

1. Not only GPT-4 but also other frontier LLMs have memorized the same set of NYT articles from the lawsuit.

2. Very large models, particularly with >100B parameters, have memorized significantly more.

🧵1/n
maksym-andr.bsky.social
📢 I'll be at NeurIPS 🇨🇦 from Tuesday to Sunday!

Let me know if you're also coming and want to meet. Would love to discuss anything related to AI safety/generalization.

Also, I'm on the academic job market, so would be happy to discuss that as well! My application package: andriushchenko.me.

🧵1/4
Maksym Andriushchenko
I'm a PhD student in computer science at EPFL advised by Nicolas Flammarion. I'm interested in understanding why machine learning works and why it fails.
andriushchenko.me
Reposted by Maksym Andriushchenko
marcelsalathe.bsky.social
Mindblowing: EPFL PhD student @maksym-andr.bsky.social, winner of best CS thesis award, showed that leading hashtag#AI models are not robust to simple adaptive jailbreaking attacks. Indeed, he managed to jailbraik all models with a 100% success rate 🤯

Jailbraking paper: arxiv.org/abs/2404.02151
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
We show that even the most recent safety-aligned LLMs are not robust to simple adaptive jailbreaking attacks. First, we demonstrate how to successfully leverage access to logprobs for jailbreaking: we...
lnkd.in
maksym-andr.bsky.social
really feels like Twitter circa 2018. good old days... 😀