Marlos C. Machado
@marloscmachado.bsky.social
820 followers 220 following 47 posts
Assistant Professor at the University of Alberta. Amii Fellow, Canada CIFAR AI chair. Machine learning researcher. All things reinforcement learning. 📍 Edmonton, Canada 🇨🇦 🔗 https://webdocs.cs.ualberta.ca/~machado/ 🗓️ Joined November, 2024
marloscmachado.bsky.social
This paper has now been accepted at @neuripsconf.bsky.social!

Huge congratulations, Hon Tik (Rick) Tse and Siddarth Chandrasekar.
marloscmachado.bsky.social
📢 I'm happy to share the preprint: _Reward-Aware Proto-Representations in Reinforcement Learning_ ‼️

My PhD student, Hon Tik Tse, led this work, and my MSc student, Siddarth Chandrasekar, contributed to it.

arxiv.org/abs/2505.16217

Basically, it's the successor representation (SR) with rewards built in. See below 👇
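For context: the SR is built from the policy's transition dynamics alone, while the default representation (DR) this line of work builds on folds the reward into the representation itself. A rough sketch in the notation of the linearly solvable MDP literature (the paper's exact formulation may differ):

\Psi^\pi = \sum_{t \ge 0} (\gamma P^\pi)^t = (I - \gamma P^\pi)^{-1} \quad \text{(SR: no reward term anywhere)}
D = \left( \operatorname{diag}(e^{-r/\lambda}) - P \right)^{-1} \quad \text{(DR: the reward } r \text{ enters the matrix)}

With the SR, values are recovered linearly afterwards as V^\pi = \Psi^\pi r; with the DR, changing the reward changes the representation itself.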
marloscmachado.bsky.social
2/2: “Conquerors live in dread of the day when they are shown to be, not superior, but simply lucky.”

― N.K. Jemisin, The Stone Sky
marloscmachado.bsky.social
1/2: “But there are none so frightened, or so strange in their fear, as conquerors. They conjure phantoms endlessly, terrified that their victims will someday do back what was done to them—even if, in truth, their victims couldn’t care less about such pettiness and have moved on.”
Reposted by Marlos C. Machado
rl-conference.bsky.social
Excited to announce the RLC best paper awards! Like last year, we wanted to highlight the many excellent ways you can do research.
rl-conference.cc/RLC2025Award...
marloscmachado.bsky.social
*RLC Journal to Conference Track:*
(Originally published at TMLR)

- Deep RL track (Thu): AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning by S. Pramanik
marloscmachado.bsky.social
*RLC Full Papers:*
(These are great papers!)

- Deep RL track (Thu): Deep Reinforcement Learning with Gradient Eligibility Traces by E. Elelimy
- Foundations track (Fri): An Analysis of Action-Value Temporal-Difference Methods That Learn State Values by B. Daley and P. Nagarajan
marloscmachado.bsky.social
*RLC Workshop Papers (2/2):*
Inductive Biases in RL
sites.google.com/view/ibrl-wo...

- A Study of Value-Aware Eigenoptions by H. Kotamreddy
marloscmachado.bsky.social
*RLC Workshop Papers (1/2):*
RL Beyond Rewards
rlbrew2-workshop.github.io

- Tue 11:59 (spotlight talk): Towards An Option Basis To Optimize All Rewards by S. Chandrasekar
- The World Is Bigger: A Computationally-Embedded Perspective on the Big World Hypothesis by A. Lewandowski
marloscmachado.bsky.social
Here's what our group will be presenting at RLC'25.

*Invited Talks at Workshops:*
Tue 10:00: The Causal RL Workshop sites.google.com/uci.edu/crlw...
Tue 14:30: Inductive Biases in RL (IBRL) Workshop
sites.google.com/view/ibrl-wo...
Tue 15:00: Panel Discussion at IBRL Workshop
marloscmachado.bsky.social
RLC starts tomorrow, and I couldn't be more excited! It has a fantastic roster of speakers, great papers, and workshops. And this time, it is in Edmonton 😁

@rl-conference.bsky.social is my favourite conference, and no, it is not because I am one of its organizers this year.
marloscmachado.bsky.social
This was a great long-term effort from @martinklissarov.bsky.social, Akhil Bagaria, and @ray-luo.bsky.social, and it led to an excellent overview of the ideas behind leveraging temporal abstractions in AI.

In any case, I think this is a very useful resource for anyone interested in this field!
martinklissarov.bsky.social
As AI agents face increasingly long and complex tasks, decomposing them into subtasks becomes increasingly appealing.

But how do we discover such temporal structure?

Hierarchical RL provides a natural formalism, yet many questions remain open.

Here's our overview of the field🧵
Reposted by Marlos C. Machado
rl-conference.bsky.social
To align better with workshop acceptance dates, RLC is extending its early registration deadline to June 23rd!
marloscmachado.bsky.social
9/9: I genuinely think AgarCL might unlock new research avenues in CRL, including loss of plasticity, exploration, representation learning, and more. I do hope you consider using it.

Repo: github.com/machado-rese...
Website: agarcl.github.io
Preprint: arxiv.org/abs/2505.18347
marloscmachado.bsky.social
8/9: Well, if you are still interested, you should probably read the paper, but it is telling that most of the agents we considered reached human-level performance only in the most benign settings. And we did use a lot of compute here!
marloscmachado.bsky.social
7/9: Through mini-games, we tried to quantify and isolate some of the challenges AgarCL poses, including partial observability, non-stationarity, exploration, hyperparameter tuning, and the non-episodic nature of the environment (so easy to forget!). Where do our agents "break"?
marloscmachado.bsky.social
6/9: Importantly, this is a challenge problem that forces us to deal with many problems we often avoid, such as hyperparameter sweeps and exploration in CRL.

It is perhaps no surprise that the classic algorithms we considered couldn't really make much progress in the full game.
marloscmachado.bsky.social
5/9: Over time, even the agent's observation will change, as the camera needs to zoom out to accommodate more agents; not to mention that there are other agents in the environment. I'm very excited about AgarCL because I think it allows us to ask questions we couldn't before.
marloscmachado.bsky.social
4/9: AgarCL is an adaptation of agar.io, a game with simple mechanics that lead to complex interactions. It's non-episodic, and a key aspect is that the agent's dynamics change as it accumulates mass: it becomes slower, gains new affordances, sheds more mass, etc.
marloscmachado.bsky.social
3/9: AgarCL is our attempt at an environment with the complexity of a "big world" but in a smooth way, where the "laws of physics" don't change. It has complex dynamics, partial observability, non-stationarity, pixel-based observations, and a hybrid action space.
marloscmachado.bsky.social
2/9: CRL is often motivated by the idea that the world is bigger than the agent, requiring the agent to keep tracking rather than converge once and for all. We usually simulate this with non-stationarity by cycling through classic episodic problems. I've written papers like this, but it feels too artificial.

arxiv.org/abs/2303.07507
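For concreteness, this cycling setup is usually just a wrapper that swaps the underlying episodic task on a fixed schedule. Below is a minimal sketch assuming the Gymnasium API; the class name, environment IDs, and cycle length are illustrative, not taken from any paper:

import itertools
import gymnasium as gym

class TaskCycler:
    """Simulate non-stationarity by cycling through episodic tasks."""

    def __init__(self, env_ids, episodes_per_task=100):
        self.env_ids = itertools.cycle(env_ids)
        self.episodes_per_task = episodes_per_task
        self.episode_count = 0
        self.env = gym.make(next(self.env_ids))

    def reset(self, **kwargs):
        # On schedule, swap in the next task; a continual agent must
        # track this change rather than converge once and for all.
        if self.episode_count and self.episode_count % self.episodes_per_task == 0:
            self.env.close()
            self.env = gym.make(next(self.env_ids))
        self.episode_count += 1
        return self.env.reset(**kwargs)

    def step(self, action):
        return self.env.step(action)

# env = TaskCycler(["CartPole-v1", "Acrobot-v1"])
# In practice one cycles variants of a single task so observation and
# action spaces stay compatible across switches.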
marloscmachado.bsky.social
📢 I'm very excited to release AgarCL, a new evaluation platform for research in continual reinforcement learning‼️

Repo: github.com/machado-rese...
Website: agarcl.github.io
Preprint: arxiv.org/abs/2505.18347

Details below 👇
marloscmachado.bsky.social
This is great, thanks for sharing! We will read your paper carefully.
marloscmachado.bsky.social
7/7: We just scratched the surface here, but I think this could be the beginning of something interesting that might be relevant to research questions ranging from safety in RL all the way to the cognitive sciences.

Again, here's the preprint by Tse et al.: arxiv.org/abs/2505.16217
marloscmachado.bsky.social
6/7: We also show that, when compared to the SR, the DR gives rise to qualitatively different behavior in all sorts of tasks, such as reward shaping, exploration, & option discovery. Similar to what we did w/ STOMP, sometimes there's value in being aware of the reward function 😁
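To make "qualitatively different" concrete, here is a toy tabular sketch, assuming the linearly solvable MDP form of the DR (the paper's exact object may differ): the SR never sees the reward, while the DR changes whenever the reward does.

import numpy as np

# Random-walk policy on a 4-state chain.
n = 4
P = np.zeros((n, n))
for s in range(n):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5

gamma, lam = 0.95, 1.0
r = np.array([-0.1, -0.1, -0.1, 1.0])  # per-state rewards (illustrative)

# SR: discounted expected state occupancies; the reward appears nowhere.
SR = np.linalg.inv(np.eye(n) - gamma * P)
V = SR @ r  # values recovered linearly afterwards

# DR (LMDP form): the reward is baked into the matrix itself.
DR = np.linalg.inv(np.diag(np.exp(-r / lam)) - P)

# Move the reward: the DR changes, the SR is untouched.
r2 = np.array([1.0, -0.1, -0.1, -0.1])
DR2 = np.linalg.inv(np.diag(np.exp(-r2 / lam)) - P)
print(np.allclose(DR, DR2))  # False: the DR is reward-aware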