Onno Eberhard
@onnoeberhard.com
520 followers 280 following 50 posts
PhD Student in Tübingen (MPI-IS & Uni Tü), interested in reinforcement learning. Freedom is a pure idea. https://onnoeberhard.com/
Reposted by Onno Eberhard
michelapetriconi.bsky.social
I had such a great time helping organize EWRL 2025 with an amazing team 🎉
Loved being part of it and meeting so many passionate reinforcement learning enthusiasts!
@ewrl18.bsky.social
Reposted by Onno Eberhard
maxplanck.de
Truly chuffed for our fearless food physicists @mpipks.bsky.social + collabs from AT @istaresearch.bsky.social, IT & ES who won this year’s Ig Nobel - the #NobelPrize of hearts ❤️ for cracking the science of perfect pasta! 🍝 Kudos to all for intrepidly consuming lots of cheese in the name of science! 😋
The Secret to a Smooth Pasta Sauce Wins Ig Nobel Prize
Italian researchers studied how the ingredients of the traditional Roman dish cacio e pepe emulsify into a creamy sauce, winning the 2025 Physics Ig Nobel Prize.
www.the-scientist.com
onnoeberhard.com
I wrote a short post on our newest ICML paper, aimed at people who are not experts in machine learning. Check it out!
aihub.org
In our latest blog post, @onnoeberhard.com writes about work presented at #ICML2025 on partially observable reinforcement learning, which introduces an alternative memory framework: “memory traces”.
aihub.org/2025/09/12/m...
Memory traces in reinforcement learning - AIhub
aihub.org
onnoeberhard.com
A cute little animation: a critically damped harmonic oscillator becomes unstable under integral control if the gain is too high. Here, at K_i = 2, a Hopf bifurcation occurs: two poles of the closed-loop transfer function cross into the right half of the s-plane and the system becomes unstable.
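A minimal sketch of the pole calculation behind the animation; the specific plant (unit natural frequency, G(s) = 1/(s² + 2s + 1)) and the pure integral controller K_i/s are my assumptions, not stated in the post:

```python
# Closed-loop poles of a critically damped oscillator under integral control.
# Assumed plant (my choice): G(s) = 1 / (s^2 + 2s + 1), controller C(s) = K_i / s.
# Closed-loop characteristic polynomial: s^3 + 2 s^2 + s + K_i.
import numpy as np

for K_i in [0.5, 1.0, 2.0, 3.0]:
    poles = np.roots([1, 2, 1, K_i])
    stable = bool(np.all(poles.real < 0))
    print(f"K_i = {K_i}: poles = {np.round(poles, 3)}, stable = {stable}")

# At K_i = 2 a complex pole pair sits exactly on the imaginary axis (±1j);
# for K_i > 2 it crosses into the right half-plane, matching the animation.
```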
Reposted by Onno Eberhard
ewrl18.bsky.social
📣Registration for EWRL is now open📣
Register now 👇 and join us in Tübingen for 3 days (17th-19th September) full of inspiring talks, posters and many social activities to push the boundaries of the RL community!
PheedLoop
PheedLoop: Hybrid, In-Person & Virtual Event Software
site.pheedloop.com
Reposted by Onno Eberhard
gmartius.bsky.social
I am going to present the poster during the next poster session. 11am Wed.
Poster W #707
cansusancaktar.bsky.social
✨Introducing SENSEI✨ We bring semantically meaningful exploration to model-based RL using VLMs.

With intrinsic rewards for novel yet useful behaviors, SENSEI showcases strong exploration in MiniHack, Pokémon Red & Robodesk.

Accepted at ICML 2025🎉

Joint work with @cgumbsch.bsky.social
🧵
Reposted by Onno Eberhard
eugenevinitsky.bsky.social
I really, really like this paper and as an open question, would love to see it tested on more memory benchmarks
onnoeberhard.com
I am in Vancouver at ICML, and tomorrow I will present our newest paper "Partially Observable Reinforcement Learning with Memory Traces". We argue that eligibility traces are more effective than sliding windows as a memory mechanism for RL in POMDPs. 🧵
Reposted by Onno Eberhard
claireve.bsky.social
Onno and I will be presenting our poster at W-1005 tomorrow (Wed) morning.
He made a great thread about it, come chat with us about POMDP theory :)
onnoeberhard.com
I am in Vancouver at ICML, and tomorrow I will present our newest paper "Partially Observable Reinforcement Learning with Memory Traces". We argue that eligibility traces are more effective than sliding windows as a memory mechanism for RL in POMDPs. 🧵
onnoeberhard.com
This is joint work with @claireve.bsky.social and Michael Muehlebach. If you are at ICML, please come to our poster tomorrow morning (W-1005, Tuesday, 11am-1:30pm). Paper, code, and more can be found at onnoeberhard.com/memory-traces.
Partially Observable Reinforcement Learning with Memory Traces · Onno Eberhard
ML & Mathematics
onnoeberhard.com
onnoeberhard.com
Memory traces are trivial to implement, and our experiments demonstrate that they are an effective drop-in replacement for sliding windows ("frame stacking") in deep reinforcement learning.
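A rough sketch of the drop-in-replacement idea (my illustration, not the paper's code), written against the Gymnasium env API in the style of a frame-stacking wrapper:

```python
# Memory-trace observation wrapper, analogous to frame stacking.
import numpy as np
import gymnasium as gym

class MemoryTraceWrapper(gym.Wrapper):
    """Replace the raw observation y_t with the trace z_t = lam * z_{t-1} + (1 - lam) * y_t."""

    def __init__(self, env, lam=0.8):
        super().__init__(env)
        self.lam = lam
        self.z = None

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.z = (1 - self.lam) * np.asarray(obs, dtype=np.float32)  # trace started from z = 0
        return self.z, info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self.z = self.lam * self.z + (1 - self.lam) * np.asarray(obs, dtype=np.float32)
        return self.z, reward, terminated, truncated, info
```

(For a real agent one would also adjust observation_space; omitted here for brevity.)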
onnoeberhard.com
However, if we allow larger values of 𝜆, then we do find environments where memory traces are considerably more powerful than sliding windows!
onnoeberhard.com
Our second result goes the other way: when 𝜆 < 1/2, there is no environment where memory traces are more efficient than sliding windows. Combined with the first result, this means that for 𝜆 < 1/2, learning with sliding windows and learning with memory traces is equivalent!
onnoeberhard.com
Using this result, we can finally compare learning with sliding windows to learning with memory traces! Our first result shows that there is no environment where sliding windows are generally more efficient than memory traces (even when restricting to 𝜆 < 1/2).
onnoeberhard.com
The "resolution" of a function class is given by its Lipschitz constant. We thus consider the function class ℱ = {𝑓 ∘ 𝑧 ∣ 𝑓 : 𝒵 → ℝ, 𝑓 is 𝐿-Lipschitz}. This allows us to bound the metric entropy. (The constant 𝑑_λ is the Minkowski dimension of 𝒵 if 𝜆 < 1/2.)
onnoeberhard.com
Without forgetting, learning is intractable: it is equivalent to keeping the complete history. However, to distinguish histories that differ only far in the past, we need to "zoom in" a lot, as shown here.
onnoeberhard.com
What about memory traces? Here, I am visualizing the space 𝒵 of all possible memory traces for the case where there are only 3 possible (one-hot) observations, 𝒴 = {a, b, c}. We can show that, if 𝜆 < 1/2, then memory traces preserve all information of the complete history! Nothing is forgotten!
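My own sketch of how such a picture can be generated: each one-hot observation applies the contraction z ↦ 𝜆z + (1 − 𝜆)eᵢ, so enumerating all length-n histories traces out the set 𝒵.

```python
# Enumerate memory traces reachable after n steps for Y = {a, b, c} (one-hot).
# For lam < 1/2 the three contractions z -> lam*z + (1 - lam)*e_i have disjoint
# images, so distinct histories give distinct traces ("nothing is forgotten").
import itertools
import numpy as np

def traces_after(n, lam):
    e = np.eye(3)
    traces = []
    for history in itertools.product(range(3), repeat=n):
        z = np.zeros(3)
        for i in history:                    # oldest observation first
            z = lam * z + (1 - lam) * e[i]
        traces.append(z)
    return np.array(traces)

points = traces_after(n=6, lam=0.4)          # 3**6 = 729 points in the simplex
print(points.shape, len(np.unique(np.round(points, 8), axis=0)))  # all 729 distinct
```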
onnoeberhard.com
We are interested in efficiently learning an accurate value estimate. Statistical learning theory tells us that learning is more efficient if the *metric entropy* 𝐻(ℱ) of the function class is small. For window memory, the function class ℱ is ℱₘ ≐ {𝑓 ∘ winₘ ∣ 𝑓: 𝒴ᵐ → ℝ}, and the metric entropy is 𝐻(ℱₘ) ∈ Θ(|𝒴|ᵐ).
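A back-of-envelope for the window bound (my own reasoning, not quoted from the paper): with one-hot observations there are |𝒴|ᵐ distinct windows, so 𝑓 is just a table of |𝒴|ᵐ bounded values, and covering each entry to accuracy 𝜀 gives

```latex
% Values assumed bounded by B; N is the epsilon-covering number of F_m.
N(\varepsilon, \mathcal{F}_m) \le \Big(\tfrac{B}{\varepsilon}\Big)^{|\mathcal{Y}|^m}
\quad\Longrightarrow\quad
H(\mathcal{F}_m) = \log N(\varepsilon, \mathcal{F}_m)
  = |\mathcal{Y}|^m \log\tfrac{B}{\varepsilon} \;\in\; \Theta\big(|\mathcal{Y}|^m\big).
```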
onnoeberhard.com
We focus on the problem of policy evaluation with offline data, where the environment ℰ is a hidden Markov model, and we assume that the observations in 𝒴 are one-hot vectors. Thus, given a function class ℱ, our goal is to find the function 𝑓 ∈ ℱ that minimizes the return error.
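For concreteness, the return error presumably takes a least-squares form roughly like the following (the exact error measure is my assumption, not a quote from the paper):

```latex
% x_t denotes the memory features (a sliding window or a memory trace) at time t,
% and G_t the discounted return observed after time t in the offline data.
\min_{f \in \mathcal{F}} \; \mathbb{E}\Big[ \big( f(x_t) - G_t \big)^2 \Big],
\qquad G_t = \sum_{k \ge 0} \gamma^k r_{t+k}.
```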
onnoeberhard.com
While most theoretical work on memory in RL focuses on sliding windows of observations, winₘ(𝑦ₜ, 𝑦ₜ₋₁, … ) ≐ (𝑦ₜ, 𝑦ₜ₋₁, …, 𝑦ₜ₋ₘ₊₁), we analyze the effectiveness of *memory traces*, exponential moving averages of observations: 𝑧(𝑦ₜ, 𝑦ₜ₋₁, … ) = 𝜆𝑧(𝑦ₜ₋₁, 𝑦ₜ₋₂, … ) + (1 − 𝜆)𝑦ₜ.
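A small side-by-side sketch of the two mechanisms defined in this post (my own illustration, not the paper's code):

```python
# Sliding-window memory vs. memory trace over a stream of one-hot observations.
import numpy as np

def sliding_window(history, m):
    """Concatenate the last m observations (most recent first), zero-padded."""
    dim = history[0].shape[0]
    recent = history[::-1] + [np.zeros(dim)] * m
    return np.concatenate(recent[:m])

def memory_trace(history, lam):
    """z_t = lam * z_{t-1} + (1 - lam) * y_t, starting from z = 0."""
    z = np.zeros_like(history[0], dtype=float)
    for y in history:                        # oldest to newest
        z = lam * z + (1 - lam) * y
    return z

a, b, c = np.eye(3)                          # Y = {a, b, c}, one-hot
history = [a, b, b, c]
print(sliding_window(history, m=2))          # only (y_t, y_{t-1}) = (c, b) survives
print(memory_trace(history, lam=0.4))        # a weighted summary of the whole history
```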
onnoeberhard.com
I am in Vancouver at ICML, and tomorrow I will present our newest paper "Partially Observable Reinforcement Learning with Memory Traces". We argue that eligibility traces are more effective than sliding windows as a memory mechanism for RL in POMDPs. 🧵
onnoeberhard.com
This result should thus also transfer to approximate memory traces. However, the connection between memory traces and truncated histories only applies if the forgetting factor 𝜆 is less than 1/2. The case 𝜆 > 1/2 is more interesting, but the connection to AIS is much less clear to me.
onnoeberhard.com
I believe that this case is indeed closely related to AIS. Our analysis describes a close connection between approximate memory traces and truncated histories. Under some conditions (e.g. gamma-observability), truncated histories constitute approximate information states (if I understand correctly).
onnoeberhard.com
I am not sure if there is a way to relate the case where these conditions are not met to AIS. However, we study the behavior of Lipschitz continuous functions of memory traces, which is closely related to quantizing the space of memory traces.