Luke Marris
@lukemarris.bsky.social
680 followers 170 following 24 posts
Research Engineer at Google DeepMind. Interests in game theory, reinforcement learning, and deep learning. Website: https://www.lukemarris.info/ Google Scholar: https://scholar.google.com/citations?user=dvTeSX4AAAAJ
Reposted by Luke Marris
sharky6000.bsky.social
Hello everyone 👋 Good news!

🚨 Our Game Theory & Multiagent Systems team at Google DeepMind is hiring! 🚨

.. and we have not one, but two open positions! One Research Scientist role and one Research Engineer role. 😁

Please repost and tell anyone who might be interested!

Details in thread below 👇
Reposted by Luke Marris
liusiqi.bsky.social
Frontier models are often compared on crowdsourced user prompts - but such prompts can be low-quality, biased, and redundant, making "performance on average" hard to trust.

Come find us at #ICLR2025 to discuss game-theoretic evaluation (shorturl.at/0QtBj)! See you in Singapore!
Re-evaluating Open-Ended Evaluation of Large Language Models
A case study using the livebench.ai leaderboard.
shorturl.at
lukemarris.bsky.social
😅😂 Called out!
lukemarris.bsky.social
[🧵8/N] Come see our poster on 2025/04/24 in Hall 3 + Hall 2B, poster #440.
iclr.cc/virtual/2025... #IRL
lukemarris.bsky.social
[🧵7/N] Big thanks to the team @GoogleDeepMind! Siqi Liu (@liusiqi.bsky.social), Ian Gemp (@drimgemp.bsky.social), Luke Marris, Georgios Piliouras, Nicolas Heess, Marc Lanctot (@sharky6000.bsky.social)
lukemarris.bsky.social
[🧵6/N] In summary: Current open-ended LLM evals risk being brittle. Our game-theoretic framework w/ affinity entropy provides more robust, intuitive, and interpretable rankings, crucial for guiding real progress! 🧠 Check it out & let us know your thoughts! 🙏
arxiv.org/abs/2502.20170
lukemarris.bsky.social
[🧵5/N] Does it work? YES! ✅ On real data (arena-hard-v0.1), our method provides intuitive rankings robust to redundancy. We added 500 adversarial prompts targeting the top model – Elo rankings tanked, ours stayed stable! (See Fig 3 👇). Scales & gives interpretable insights!
lukemarris.bsky.social
[🧵4/N] But game theory isn't magic - standard methods often yield multiple equilibria & aren't robust to redundancy. Key innovation: We introduce novel solution concepts + 'Affinity Entropy' to find unique, CLONE-INVARIANT equilibria! ✨ (No more rank shifts just because you added copies!)
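For intuition on why a bespoke entropy is needed (a toy sketch, not from the paper): plain Shannon entropy grows when a strategy's mass is split across clones, so an equilibrium selection based on maximum Shannon entropy can be swayed simply by duplicating strategies. Affinity entropy is designed so that cloning leaves the value unchanged.

```python
# Toy sketch (illustrative, not from the paper): Shannon entropy is not
# clone-invariant, which is the gap affinity entropy is built to close.
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Equilibrium with mass 1/2 on strategy A and 1/2 on strategy B.
original = np.array([0.5, 0.5])
# Clone B into B1/B2 and split its mass: strategically nothing changed ...
cloned = np.array([0.5, 0.25, 0.25])

print(shannon_entropy(original))  # ~0.693
print(shannon_entropy(cloned))    # ~1.040 -> entropy rose just from cloning
```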
lukemarris.bsky.social
[🧵3/N] So, what's our fix? GAME THEORY! 🎲 We reframe LLM evaluation as a 3-player game: a 'King' model 👑 vs. a 'Rebel' model 😈, with a 'Prompt' player selecting tasks that best differentiate them. This shifts focus from 'average' performance to strategic interaction. #GameTheory #Evaluation
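One plausible way to assemble the payoff tensors for such a game, sketched with hypothetical per-prompt win rates (the `prompt_payoff` choice below is an assumption for illustration, not necessarily the paper's exact definition):

```python
# Illustrative sketch: build 3-player payoff tensors from per-prompt win rates.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_prompts = 4, 10

# win[p, i, j]: probability that model i beats model j on prompt p (fake data).
win = rng.uniform(size=(n_prompts, n_models, n_models))
win = 0.5 * (win + 1.0 - win.transpose(0, 2, 1))  # enforce win[p,i,j] = 1 - win[p,j,i]

king_payoff = win                  # 'King' 👑 wants to beat the 'Rebel'
rebel_payoff = 1.0 - win           # 'Rebel' 😈 wants the opposite (zero-sum pair)
prompt_payoff = np.abs(win - 0.5)  # one way to reward prompts that separate the models
```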
lukemarris.bsky.social
[🧵2/N] Why the concern? Elo averages performance. If prompt sets are biased or redundant (intentionally or not!), rankings can be skewed. 😟 Our simulations show this can even reinforce biases, pushing models to specialize narrowly instead of improving broadly (see skill entropy drop!). 📉 #EloRating
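A toy numeric example (not from the paper) of the redundancy problem: duplicating prompts that favour one model flips an average-based ranking.

```python
# Toy illustration: average scores are not clone-invariant.
import numpy as np

# Rows: models A, B. Columns: per-prompt scores in [0, 1].
scores = np.array([
    [0.9, 0.2, 0.2],   # model A: great on prompt 0, weak elsewhere
    [0.4, 0.6, 0.6],   # model B: solid across the board
])
print(scores.mean(axis=1))  # A: 0.433, B: 0.533 -> B ranked first

# Add 5 near-copies of prompt 0 (intentionally or not!) ...
redundant = np.hstack([scores] + [scores[:, :1]] * 5)
print(redundant.mean(axis=1))  # A: 0.725, B: 0.450 -> ranking flips to A
```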
lukemarris.bsky.social
[🧵1/N] Thrilled to share our work "Re-evaluating Open-Ended Evaluation of Large Language Models"! 🚀 Popular LLM leaderboards (think Elo/Chatbot Arena) are useful, but are they telling the whole story? We find issues w/ redundancy & bias. 🤔
Paper @ ICLR 2025: arxiv.org/abs/2502.20170 #LLM #ICLR2025
Reposted by Luke Marris
sharky6000.bsky.social
Working at the intersection of social choice and learning algorithms?

Check out the 2nd Workshop on Social Choice and Learning Algorithms (SCaLA) at @ijcai.bsky.social this summer.

Submission deadline: May 9th.

I attended last year at AAMAS and loved it! 👍

sites.google.com/corp/view/sc...
SCaLA-25
A workshop connecting research topics in social choice and learning algorithms.
sites.google.com
Reposted by Luke Marris
jeffdean.bsky.social
🥁Introducing Gemini 2.5, our most intelligent model with impressive capabilities in advanced reasoning and coding.

Now integrating thinking capabilities, 2.5 Pro Experimental is our most performant Gemini model yet. It’s #1 on the LM Arena leaderboard. 🥇
Reposted by Luke Marris
sharky6000.bsky.social
Looking for a principled evaluation method for ranking of *general* agents or models, i.e. that get evaluated across a myriad of different tasks?

I’m delighted to tell you about our new paper, Soft Condorcet Optimization (SCO) for Ranking of General Agents, to be presented at AAMAS 2025! 🧵 1/N
lukemarris.bsky.social
[🧵13/N] It is also possible to plot each task's contribution to the deviation rating, making it easy to see the trade-offs between the models at a glance. Negative bars mean worse than equilibrium on that task. So Sonnet is relatively weaker at "summarize" and Llama is relatively weaker at "LCB generation".
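A rough sketch of that kind of plot, with made-up contribution values (the numbers and the two-model comparison are hypothetical, not the paper's):

```python
# Hedged sketch: per-task contributions to each model's deviation rating.
import matplotlib.pyplot as plt

tasks = ["summarize", "LCB generation", "math", "coding"]
contrib = {                       # hypothetical contribution values
    "Sonnet": [-0.08, 0.05, 0.02, 0.04],
    "Llama":  [0.03, -0.07, 0.01, 0.02],
}

fig, ax = plt.subplots()
x = range(len(tasks))
ax.bar([i - 0.2 for i in x], contrib["Sonnet"], width=0.4, label="Sonnet")
ax.bar([i + 0.2 for i in x], contrib["Llama"], width=0.4, label="Llama")
ax.axhline(0.0, color="black", linewidth=0.8)  # below 0: worse than equilibrium
ax.set_xticks(list(x))
ax.set_xticklabels(tasks)
ax.set_ylabel("contribution to deviation rating")
ax.legend()
plt.show()
```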
lukemarris.bsky.social
[🧵12/N] We are convinced this is a better approach than Elo or simple averaging. Please read the paper for more details! 🤓
lukemarris.bsky.social
[🧵11/N] Our work proposes the first rating method, “Deviation Ratings”, that is both dominant- and clone-invariant in fully general N-player, general-sum interactions, allowing us to evaluate general models in a theoretically grounded way. 👏
lukemarris.bsky.social
[🧵10/N] An improved formulation is a three-player game: two symmetric model players try to beat each other (by playing strong models) on a task chosen by a task player incentivised to separate the models. 👍 However, Nash Averaging is only defined for two-player zero-sum games. 😭
lukemarris.bsky.social
[🧵9/N] Unfortunately, a two-player zero-sum interaction is limiting. For example, if no model can solve a task, the task player would only play that impossible task, resulting in uninteresting ratings. 🙁
lukemarris.bsky.social
[🧵8/N] This is hugely powerful for two reasons. 1) When including tasks in the evaluation set, one can be maximally inclusive: redundancies are ignored by construction, which simplifies curation. 2) Salient strategies are automatically reweighted according to their significance. 💪
lukemarris.bsky.social
[🧵7/N] This approach is provably clone- and dominant-invariant: adding copies of tasks and models, or adding dominated tasks and models, does not influence the rating *at all*. The rating is invariant to two types of redundancy! 🤩 Notably, neither an average nor Elo has these properties.
lukemarris.bsky.social
[🧵6/N] A previous approach, called Nash Averaging (arxiv.org/abs/1806.02643), formulated the problem as a two-player zero-sum game where a model player maximizes performance on tasks by playing strong models and a task player minimizes performance by selecting difficult tasks. ♟️
Re-evaluating Evaluation
Progress in machine learning is measured by careful evaluation on problems of outstanding common interest. However, the proliferation of benchmark suites and environments, adversarial attacks, and oth...
arxiv.org
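A minimal sketch of the spirit of Nash Averaging, under stated assumptions: solve the two-player zero-sum models-vs-tasks game with a linear program and score each model against the task player's equilibrium mix. (The original method additionally selects the maximum-entropy equilibrium; this sketch omits that step, so any optimal vertex may be returned.)

```python
# Sketch: Nash-Averaging-style scores via the task player's minimax LP.
import numpy as np
from scipy.optimize import linprog

# M[i, j]: score of model i on task j (hypothetical data).
M = np.array([
    [0.9, 0.1, 0.5],
    [0.4, 0.6, 0.5],
    [0.3, 0.3, 0.8],
])
n_models, n_tasks = M.shape

# Task player picks a distribution y over tasks to minimize the best model's
# expected score u:  min u  s.t.  M @ y <= u,  sum(y) = 1,  y >= 0.
c = np.r_[np.zeros(n_tasks), 1.0]                   # variables: (y, u)
A_ub = np.hstack([M, -np.ones((n_models, 1))])      # M @ y - u <= 0
b_ub = np.zeros(n_models)
A_eq = np.r_[np.ones(n_tasks), 0.0].reshape(1, -1)  # sum(y) = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n_tasks + [(None, None)])

y = res.x[:n_tasks]
print("equilibrium task weights:", y.round(3))
print("Nash-averaged model scores:", (M @ y).round(3))
```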
lukemarris.bsky.social
[🧵5/N] Therefore, there is a strategic decision about which tasks are important and which model is best. Wherever there is strategic interaction, it can be modeled as a game! Model players select models, and task players select tasks. Players may play distributions (mixed strategies) to avoid being exploited.
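A toy illustration of that last point (not from the paper): if the task player commits to a single task, some model exploits the choice; mixing removes the incentive.

```python
# Toy sketch: pure strategies are exploitable, so equilibrium play is mixed.
import numpy as np

# score[i, j]: score of model i on task j.
score = np.array([
    [1.0, 0.0],   # model 0 aces task 0
    [0.0, 1.0],   # model 1 aces task 1
])

for j in (0, 1):
    print(f"task player commits to task {j}: model {score[:, j].argmax()} exploits it")

# Mixing 50/50 over tasks gives every model the same expected score (0.5),
# leaving nothing to exploit.
print(score @ np.array([0.5, 0.5]))
```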