Mirco Mutti
@mircomutti.bsky.social
1.4K followers 300 following 31 posts
Reinforcement learning, but without rewards. Postdoc at the Technion. PhD from Politecnico di Milano. https://muttimirco.github.io
Reposted by Mirco Mutti
ewrl18.bsky.social
📣Registration for EWRL is now open📣
Register now 👇 and join us in Tübingen for 3 days (17th-19th September) full of inspiring talks, posters and many social activities to push the boundaries of the RL community!
PheedLoop: Hybrid, In-Person & Virtual Event Software
site.pheedloop.com
mircomutti.bsky.social
Looks interesting, but cannot access the url or find the report anywhere
mircomutti.bsky.social
That’s my little #ICML2025 convex RL roundup!

If you know of other cool work in this space (or are working on one), feel free to reply and share.

Hope to see even more work on convex RL variations 🚀

n/n
mircomutti.bsky.social
📄Flow density control – @desariky.bsky.social et al

Bridging convex RL with generative models: How to steer diffusion/flow models to optimize non-linear, user-specified utilities (beyond just entropy-regularized fine-tuning)?
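Roughly (my shorthand, not necessarily the paper's exact formulation): instead of the usual fine-tuning objective $\mathbb{E}_{x \sim p_\theta}[r(x)] - \beta \, D_{\mathrm{KL}}(p_\theta \,\|\, p_{\mathrm{ref}})$, one targets a general non-linear utility
\begin{align*}
\max_\theta \ F(p_\theta),
\end{align*}
with the model's output distribution $p_\theta$ playing the role that the occupancy measure plays in convex RL.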

📍 EXAIT workshop
🔗 openreview.net/pdf?id=zOgAx...

7/n
mircomutti.bsky.social
📄Towards unsupervised multi-agent RL – @ricczamboni.bsky.social et al (yours truly!)

Still in the convex Markov games space—this work explores more tractable objectives for the learning setting.

📍EXAIT workshop
🔗https://openreview.net/pdf?id=A1518D1Pp9

6/n
mircomutti.bsky.social
📄Convex Markov games – Ian Gemp et al

If you can 'convexify' MDPs, you can do the same for Markov games.
These two papers lay out a general framework + algorithms for the zero-sum version.
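Loosely (my sketch, not the papers' exact formulation): utilities become non-linear functions of the occupancy measure induced by the joint policy, so the zero-sum case reads
\begin{align*}
\max_{\pi^1} \min_{\pi^2} \ F\big(d^{\pi^1, \pi^2}\big),
\end{align*}
which recovers a standard zero-sum Markov game when $F$ is linear.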

🔗https://openreview.net/pdf?id=yIfCq03hsM
🔗https://openreview.net/pdf?id=dSJo5X56KQ

5/n
mircomutti.bsky.social
📄The number of trials matters in infinite-horizon MDPs – @pedrosantospps.bsky.social et al

A deeper look at how the number of realizations used to compute F affects the convex RL problem in infinite-horizon settings.
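For context (rough notation, not necessarily the paper's): with a finite number of realizations, the objective is evaluated on the empirical occupancy rather than the expected one,
\begin{align*}
\text{finite trials:} \quad & \max_\pi \ \mathbb{E}\big[F(\hat d^{\pi}_n)\big], \qquad \hat d^{\pi}_n \ \text{the empirical occupancy over } n \text{ realizations},\\
\text{infinite trials:} \quad & \max_\pi \ F\big(\mathbb{E}[\hat d^{\pi}_n]\big) = \max_\pi \ F(d^\pi),
\end{align*}
and the two can differ (Jensen) whenever F is non-linear.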

🔗https://openreview.net/pdf?id=I4jNAbqHnM

4/n
mircomutti.bsky.social
📄Online episodic convex RL – Bianca Marni Moreno et al

Regret bounds for online convex RL, where F^t is adversarial and revealed only after each episode (or just evaluated on the given trajectory in a bandit feedback variation)
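Roughly (my notation, written for maximization, not necessarily the paper's exact setup): the learner commits to $\pi^t$, the adversary reveals $F^t$ only after the episode, and performance is measured as regret against the best fixed policy in hindsight,
\begin{align*}
\mathrm{Regret}_T = \max_{\pi} \sum_{t=1}^{T} F^t(d^{\pi}) - \sum_{t=1}^{T} F^t(d^{\pi^t}).
\end{align*}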

🔗https://openreview.net/pdf?id=d8xnwqslqq

3/n
mircomutti.bsky.social
🔍 Convex RL

Standard RL optimizes a linear objective: ⟨d^π, r⟩.
Convex RL generalizes this to any F(d^π), where F is non-linear (originally assumed convex—hence the name).

This framework subsumes:
• Imitation
• Risk sensitivity
• State coverage
• RLHF
...and more.
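In symbols (my shorthand, with $d^\pi$ the occupancy measure induced by policy $\pi$):
\begin{align*}
\text{standard RL:} \quad & \max_{\pi} \ \langle d^\pi, r \rangle = \max_{\pi} \ \mathbb{E}_{s \sim d^\pi}[r(s)],\\
\text{convex RL:} \quad & \max_{\pi} \ F(d^\pi),
\end{align*}
with, e.g., $F(d) = -\sum_s d(s) \log d(s)$ for state coverage and $F(d) = -D_{\mathrm{KL}}(d \,\|\, d^E)$ for imitating an expert occupancy $d^E$.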

2/n
mircomutti.bsky.social
Walking around posters at @icmlconf.bsky.social, I was happy to see some buzz around convex RL—a topic I’ve worked on and strongly believe in.

Thought I’d share a few ICML papers on this direction. Let’s dive in👇

But first… what is convex RL?

🧵

1/n
mircomutti.bsky.social
This is how we got "A classification view on meta learning bandits", a joint work with awesome collaborators Jeongyeol, Shie, and @aviv-tamar.bsky.social

7/n
mircomutti.bsky.social
The regret bounds depend on an instance-dependent "classification coefficient", which suggests classification really captures the complexity of the problem rather than being a mere implementation tool

6/n
mircomutti.bsky.social
For the latter, we show exploration is *interpretable*, as it is implemented by a shallow decision tree of simple constant-action policies, and *efficient*, with upper/lower bounds on the regret

5/n
mircomutti.bsky.social
Yes, apparently!
A simple algorithm that classifies the latent (condition) with a decision tree (img above right) and then exploits the best action for the classified latent does the job
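For intuition, a toy sketch of that "classify, then exploit" loop in Python. Everything here (the tree, the action names, env.step) is made up for illustration; it is not the paper's implementation.

# Toy "classify-then-exploit" sketch (illustrative only, not the paper's code).
# Assumes an `env` whose step(action) returns the observed outcome of a test,
# a hand-built decision tree over test actions, and a known best action per latent.

decision_tree = {  # node -> {outcome -> next node or latent label}
    "test_A": {"positive": "latent_1", "negative": "test_B"},
    "test_B": {"positive": "latent_2", "negative": "latent_3"},
}
best_action = {"latent_1": "treatment_X", "latent_2": "treatment_Y", "latent_3": "treatment_Z"}

def run_episode(env, root="test_A", horizon=100):
    node = root
    # Exploration: walk a shallow decision tree of simple test actions
    # until a leaf identifies the latent condition.
    while node in decision_tree:
        outcome = env.step(node)
        node = decision_tree[node][outcome]
    # Exploitation: commit to the best action for the classified latent.
    for _ in range(horizon):
        env.step(best_action[node])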

4/n
mircomutti.bsky.social
Humans typically develop a standard strategy prescribing a sequence of tests to diagnose the condition before committing to the best treatment (see img left). Can we design a bandit algorithm that learns a similarly interpretable exploration but is also provably efficient?

3/n
mircomutti.bsky.social
Think about a setting in which we aim to converge on the best treatment (action) for a given patient (context) with some unknown condition (latent). The difference between how humans and bandits approach this same problem is striking:

2/n
mircomutti.bsky.social
Would you trust a bandit algorithm to make decisions on your health or investments? Common exploration mechanisms are efficient but scary.

In our latest work at @icmlconf.bsky.social, we reimagine bandit algorithms to get *efficient* and *interpretable* exploration.

A 🧵 below

1/n
mircomutti.bsky.social
Here we have an original take on how to make the most of parallel data collection for RL. Don't miss the poster at ICML, we're curious to hear what y'all think!

Kudos to the awesome students Vincenzo and @ricczamboni.bsky.social for their work under the wise supervision of Marcello.
ricczamboni.bsky.social
🌟🌟Good news for the explorers🗺️!
Next week we will present our paper “Enhancing Diversity in Parallel Agents: A Maximum Exploration Story” with V. De Paola, @mircomutti.bsky.social and M. Restelli at @icmlconf.bsky.social!
(1/N)
Reposted by Mirco Mutti
sologen.bsky.social
What do we talk about when we talk about the Bellman Optimality Equation?

If we think carefully, we are (implicitly) making three claims.

#FoundationsOfReinforcementLearning #sneakpeek
First, we claim that there exists a unique value function $V^\star$ that satisfies the following equation: For any $x \in \mathcal{X}$, we have
\begin{align*}
	V^\star(x) =
	\max_{a \in \mathcal{A}} \left\{ r(x,a) + \gamma \int \mathcal{P}(\mathrm{d}x' \mid x, a)\, V^\star(x') \right\}.
\end{align*}
This claim alone, however, does not show that this $V^\star$ is the same as $V^{\pi^\star}$.

The second claim is that $V^\star$ is indeed the same as $V^{\pi^\star}$, the optimal value function when $\pi$ is restricted to be within the space of stationary policies.
This claim alone, however, does not preclude the possibility that we can find an ever more performant policy by going beyond the space of stationary policies.

The third claim is that for discounted continuing MDPs, we can always find a stationary policy that is optimal within the space of all stationary and non-stationary policies.

These three claims together show that the Bellman optimality equation reveals the recursive structure of the optimal value function $V^\star = V^{\pi^\star}$. There is no policy, stationary or non-stationary, with a value function better than $V^\star$, for the class of discounted continuing MDPs.
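Not from the post, but a quick numerical illustration of the first claim: on a finite MDP the Bellman optimality operator is a $\gamma$-contraction, so iterating it converges to a unique fixed point. The toy transition kernel and rewards below are random placeholders.

import numpy as np

# Toy tabular MDP (random placeholder quantities, not from the post).
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[x, a, x'] = P(x' | x, a)
r = rng.random((n_states, n_actions))                             # r(x, a)

# Value iteration: repeatedly apply the Bellman optimality operator
#   (T V)(x) = max_a { r(x, a) + gamma * sum_x' P(x' | x, a) V(x') }.
V = np.zeros(n_states)
for _ in range(1_000):
    V_new = (r + gamma * P @ V).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:  # contraction => convergence to the unique V*
        break
    V = V_new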
Reposted by Mirco Mutti
gautamkamath.com
System is so broken:
- researchers write papers no one reads
- reviewers don't have time to review, shamed to coauthors, use LLMs instead of reading
- authors try to fool said LLMs with prompt injection
- evaling researchers based on # of papers (no time to read)

Dystopic.
mircomutti.bsky.social
Congratulations, well deserved!
Reposted by Mirco Mutti
ewrl18.bsky.social
Mark your calendars, EWRL is coming to Tübingen! 📅
When? September 17-19, 2025.
More news to come soon, stay tuned!
Reposted by Mirco Mutti
timvanerven.nl
Just enjoyed @mircomutti.bsky.social's seminar talk about interpretable meta-learning of contextual bandit types.

The recording is available in case you missed it: youtu.be/pNos7AHGMXw