Lukas Schäfer
@lukaschaefer.bsky.social
280 followers 160 following 50 posts
www.lukaschaefer.com Researcher @msftresearch.bsky.social; working on autonomous agents in video games; PhD Univ of Edinburgh; ex Huawei Noah’s Ark Lab, Dematic; Young Researcher HLF 2022
Pinned
lukaschaefer.bsky.social
Our textbook “Multi-Agent Reinforcement Learning: Foundations and Modern Approaches” has sold out! Another print run with minor corrections is in production with @mitpress.bsky.social and coming soon 👏

An errata list with these corrections and the updated book PDF can already be found at www.marl-book.com
lukaschaefer.bsky.social
📚🧵1/7 It is finally here!! Only one more week until the print release of our textbook “Multi-Agent Reinforcement Learning: Foundations and Modern Approaches” with @mitpress.bsky.social!

What you get, why you should be interested and more, all below in a short 🧵👇
lukaschaefer.bsky.social
Fingers crossed it’ll actually live up to the hype accumulated over all these years 🙏
lukaschaefer.bsky.social
Thanks for sharing, hadn’t seen this before and definitely plan to catch up!
Reposted by Lukas Schäfer
kale-ab.bsky.social
🇨🇦 Heading to @rl-conference.bsky.social next week to present HyperMARL (@cocomarl-workshop.bsky.social) and Remember Markov (Finding The Frame Workshop).

If you are around, hmu, happy to chat about Multi-Agent Systems (MARL, agentic systems), open-endedness, environments, or anything related! 🎉
Remembering Markov poster.
Reposted by Lukas Schäfer
eugenevinitsky.bsky.social
Will be at ICML and looking to hire a postdoc to help us scale up and deploy RL in self-driving. So, hit me up to chat.
lukaschaefer.bsky.social
The Edinburgh RL Reading group is back with a fresh new website 👏
Anyone is welcome to attend!
rl-agents-rg.bsky.social
Hello world! This is the RL & Agents Reading Group

We organise regular meetings to discuss recent papers in Reinforcement Learning (RL), Multi-Agent RL and related areas (open-ended learning, LLM agents, robotics, etc).

Meetings take place online and are open to everyone 😊
lukaschaefer.bsky.social
Love these curated and shareable feeds on here. Such a good feature for giving users and the community more control to shape the experience they want, rather than leaving it entirely up to the platform!
lukaschaefer.bsky.social
Eugene is awesome! If you are interested in autonomous driving and RL, and New York sounds like an exciting place for a postdoc, then this is an amazing opportunity! 👇
eugenevinitsky.bsky.social
Hiring a postdoc to scale up and deploy RL-based planning onto some self-driving cars! We'll be building on arxiv.org/abs/2502.03349 and learning what the limits and challenges of RL planning are. Shoot me a message if interested, and please help spread the word!

Full posting to come in a bit.
Robust Autonomy Emerges from Self-Play
Self-play has powered breakthroughs in two-player and multi-player games. Here we show that self-play is a surprisingly effective strategy in another domain. We show that robust and naturalistic drivi...
arxiv.org
Reposted by Lukas Schäfer
eugenevinitsky.bsky.social
Hiring a postdoc to scale up and deploy RL-based planning onto some self-driving cars! We'll be building on arxiv.org/abs/2502.03349 and learning what the limits and challenges of RL planning are. Shoot me a message if interested, and please help spread the word!

Full posting to come in a bit.
Robust Autonomy Emerges from Self-Play
Self-play has powered breakthroughs in two-player and multi-player games. Here we show that self-play is a surprisingly effective strategy in another domain. We show that robust and naturalistic drivi...
arxiv.org
lukaschaefer.bsky.social
We also updated our code exercises, which we had originally built for a summer school, and moved them to the marl-book GitHub page so they can easily be found at github.com/marl-book/ma...

Thanks @sacha2.bsky.social for the reminder on that! They had previously been hidden inside my GitHub account 😅
GitHub - marl-book/marl-book-exercises: Code exercises for the MARL Textbook
Code exercises for the MARL Textbook. Contribute to marl-book/marl-book-exercises development by creating an account on GitHub.
github.com
lukaschaefer.bsky.social
Our textbook “Multi-Agent Reinforcement Learning: Foundations and Modern Approaches” has sold out! Another print run with minor corrections is in production with @mitpress.bsky.social and coming soon 👏

An errata list with these corrections and the updated book PDF can already be found at www.marl-book.com
lukaschaefer.bsky.social
📚🧵1/7 It is finally here!! Only one more week until the print release of our textbook “Multi-Agent Reinforcement Learning: Foundations and Modern Approaches” with @mitpress.bsky.social!

What you get, why you should be interested and more, all below in a short 🧵👇
lukaschaefer.bsky.social
Awesome to see the slides publicly available; I wasn’t aware they were!

And yes actually, the code exercises are here: github.com/LukasSchaefe...

Good point though, I wanted to migrate them to the book GitHub project. I’ll have a stab at that later!
GitHub - LukasSchaefer/marl-book-exercises: Code exercises for the MARL Textbook
Code exercises for the MARL Textbook. Contribute to LukasSchaefer/marl-book-exercises development by creating an account on GitHub.
github.com
Reposted by Lukas Schäfer
kale-ab.bsky.social
📜🤖 Can a shared multi-agent RL policy support both specialised & homogeneous team behaviours -- without changing the learning objective, requiring preset diversity levels or sequential updates? Our preprint “HyperMARL: Adaptive Hypernetworks for Multi-Agent RL” explores this!
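(For anyone curious what a hypernetwork-conditioned shared policy looks like in general, here is a minimal, hypothetical PyTorch sketch. The agent embedding, layer sizes, and single-layer policy head are my own illustrative assumptions, not the HyperMARL architecture from the preprint.)

```python
# Illustrative only: a shared hypernetwork generates per-agent policy-head
# parameters from a learned agent embedding, so one set of shared weights can
# still yield specialised or near-identical per-agent behaviours.
import torch
import torch.nn as nn

class HyperPolicy(nn.Module):
    def __init__(self, n_agents, obs_dim, n_actions, embed_dim=16, hidden_dim=64):
        super().__init__()
        self.n_actions, self.hidden_dim = n_actions, hidden_dim
        self.agent_embed = nn.Embedding(n_agents, embed_dim)
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        # Hypernetwork output = flattened weights + biases of the policy head.
        self.hyper = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, hidden_dim * n_actions + n_actions),
        )

    def forward(self, obs, agent_id):
        h = self.encoder(obs)                            # shared observation features
        params = self.hyper(self.agent_embed(agent_id))  # agent-conditioned parameters
        w = params[:, : self.hidden_dim * self.n_actions]
        b = params[:, self.hidden_dim * self.n_actions :]
        w = w.view(-1, self.n_actions, self.hidden_dim)
        logits = torch.bmm(w, h.unsqueeze(-1)).squeeze(-1) + b
        return torch.distributions.Categorical(logits=logits)

# Toy usage: 4 agents, 8-dim observations, 3 actions.
policy = HyperPolicy(n_agents=4, obs_dim=8, n_actions=3)
dist = policy(torch.randn(2, 8), torch.tensor([0, 3]))
print(dist.sample())
```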
lukaschaefer.bsky.social
Thanks, will check it out!
lukaschaefer.bsky.social
That sounds exciting! Are there any recordings or slides available to check this out? 👀
lukaschaefer.bsky.social
Today, I’ll be presenting our work on exploration in MARL using ensembles here! 👇

Multiagent Learning Session
Where: Ambassador Ballroom 1 & 2
When: 14:00 - 14:13

I’ll also present the poster later at the Learn track of the poster session at 15:45 - 16:30
lukaschaefer.bsky.social
At the main conference, I'll be presenting our work on using ensembles of value functions for multi-agent exploration!

I'll be presenting the oral at the Multi-agent Learning 1 session on Wednesday (2:00 - 3:45pm), and the poster after 3:45pm!

Paper: arxiv.org/abs/2302.03439
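(Quick illustration of what ensembles of value functions can buy you for exploration: the toy NumPy sketch below acts optimistically where the ensemble disagrees. This is my simplified rendering of the general recipe, not necessarily the exact rule used in the paper.)

```python
# Toy sketch of ensemble-based exploration (simplified; not the paper's algorithm):
# an agent keeps several value estimates and acts optimistically where they disagree.
import numpy as np

def select_action(q_ensemble, ucb_coef=1.0):
    """q_ensemble: (n_models, n_actions) array, one row per ensemble member."""
    mean_q = q_ensemble.mean(axis=0)   # agreement: average value estimate
    std_q = q_ensemble.std(axis=0)     # disagreement: proxy for epistemic uncertainty
    return int(np.argmax(mean_q + ucb_coef * std_q))  # UCB-style optimistic choice

# Example: 5 ensemble members, 4 actions.
rng = np.random.default_rng(0)
print(select_action(rng.normal(size=(5, 4))))
```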
lukaschaefer.bsky.social
Thanks!

We wanted a game with more realistic visuals. We decided on CS:GO mainly because an open dataset already existed, which let us train on it and compare our results to prior methods that used the same dataset.

Also, we actually finished our initial experiments on the same day CS2 was released 😅
lukaschaefer.bsky.social
Thanks for the encouragement Marc — I’ll look out for you!
lukaschaefer.bsky.social
Thanks to all the co-authors and collaborators!
Logan Jones, Anssi Kanervisto, Yuhan Cao, Tabish Rashid, Raluca Georgescu, David Bignell, Siddhartha Sen, Andrea Treviño Gavito, and first and foremost Sam Devlin

It's been an absolute joy working with this group of kind folks 👏
lukaschaefer.bsky.social
At the Adaptive and Learning Agents Workshop, I'll be presenting our comprehensive study on the efficacy of different visual encoders for imitation learning in modern video games.

I'll be presenting the work as a short talk and poster at the ALA workshop on Monday!
lukaschaefer.bsky.social
This has been a long time coming, thanks a lot to my collaborators for all their help! Oliver Slumbers, Stephen McAleer, Yali Du, Stefano V Albrecht, and David Mguni

It's actually going to be my first ever oral presentation, so excited (and nervous) about that 👀
lukaschaefer.bsky.social
At the main conference, I'll be presenting our work on using ensembles of value functions for multi-agent exploration!

I'll be presenting the oral at the Multi-agent Learning 1 session on Wednesday (2:00 - 3:45pm), and the poster after 3:45pm!

Paper: arxiv.org/abs/2302.03439
lukaschaefer.bsky.social
On my way to Detroit for @aamasconf.bsky.social! Looking forward to presenting the last work from my PhD at the main conference, and work from @msftresearch.bsky.social at the Adaptive and Learning Agents Workshop. More info 👇

If you'd like to chat, feel free to DM me!