Eshwar Ram Arunachaleswaran
@epsilonrational.bsky.social
96 followers 350 following 11 posts
Studies algorithmic game theory and online learning. University of Pennsylvania / Simons Institute. https://www.seas.upenn.edu/~eshwar/
epsilonrational.bsky.social
A big congratulations and thank you to all my co-authors, especially @ncollina.bsky.social, with whom I've worked in this area for over 4 years, and Jon, who took us under his wing and reoriented us toward a geometric view of algorithms.
epsilonrational.bsky.social
Ecstatic and deeply honored by this award. I've had great fun thinking about algorithms as strategies for repeated games over the past few years and hope that this highlight will push more researchers to come up with exciting directions in this field! Come to our talk on Monday to learn more!
epsilonrational.bsky.social
We prove that minimizing profile swap-regret is both necessary and sufficient for non-manipulability, and that it also yields no-regret (NR) + Pareto-optimality (PO). Bonus: if all agents minimize it, the dynamics can reach profiles that cannot be realized as Correlated Equilibria by traditional mediators, unlike in normal-form games!
epsilonrational.bsky.social
One approach treats polytope games (e.g., Bayesian games, extensive-form games) as high-dimensional normal-form games → an exponential blowup, resulting in a tradeoff between per-round efficiency and convergence rate (O(T/log T) convergence for efficient algorithms)
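To see the blowup concretely, here is a tiny sketch (mine, not from the thread): if a player's action set is the d-dimensional hypercube, an illustrative polytope, the induced normal-form game has one pure action per vertex, i.e. 2^d of them.

```python
# Toy illustration (not from the paper): lifting a polytope action set to
# normal form means one pure action per vertex. For the hypercube [0, 1]^d
# that is 2^d pure actions, so vertex-enumerating algorithms pay
# per-round time exponential in the dimension d.

def normal_form_actions_of_hypercube(d: int) -> int:
    """Number of vertices of the d-dimensional hypercube."""
    return 2 ** d

for d in (5, 10, 20, 40):
    print(f"d = {d:>2}: {normal_form_actions_of_hypercube(d):,} pure actions")
```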
epsilonrational.bsky.social
In normal-form games, No-Swap-Regret (NSR) algorithms ensure no-regret, non-manipulability, & Pareto-optimality. But extending these guarantees to polytope action sets (instead of simplices) is tricky
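For readers new to the term, here is a minimal sketch of ordinary swap regret for a sequence of pure plays in a normal-form game. This is the standard notion, not the profile swap-regret from the paper, and the function names and pure-action assumption are mine.

```python
def swap_regret(utility, my_plays, opp_plays):
    """Swap regret of a sequence of pure plays in a normal-form game.

    utility[a][b] -- learner's payoff for playing a when the opponent plays b
    my_plays[t]   -- learner's action at round t
    opp_plays[t]  -- opponent's action at round t

    The best swap function phi: A -> A decomposes per action: for each
    action a we independently pick the best replacement phi(a) for all
    rounds in which a was played.
    """
    n_actions = len(utility)
    realized = sum(utility[a][b] for a, b in zip(my_plays, opp_plays))
    swapped = 0.0
    for a in range(n_actions):
        opp_when_a = [b for x, b in zip(my_plays, opp_plays) if x == a]
        if opp_when_a:
            swapped += max(sum(utility[a2][b] for b in opp_when_a)
                           for a2 in range(n_actions))
    return swapped - realized

# Matching pennies from the row player's view: always playing action 0
# against an opponent who always plays 1 loses 2 per round relative to
# the swap that replaces action 0 with action 1.
mp = [[1, -1], [-1, 1]]
print(swap_regret(mp, my_plays=[0, 0, 0], opp_plays=[1, 1, 1]))  # 6.0
```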
epsilonrational.bsky.social
Punchline: We design an efficient no-regret algorithm for games with arbitrary polytopal action sets that is simultaneously non-manipulable and Pareto-optimal, and converges at rate O(√T·poly(d)), where d is the dimension of the action space
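For contrast, a minimal sketch of the classical building block for no-regret learning over a polytope: projected online gradient ascent gets O(√T) external regret but, on its own, none of the non-manipulability or Pareto-optimality guarantees above. The simplex is used only as a concrete, easy-to-project polytope; all names here are illustrative.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex,
    used here as a concrete, easy-to-project polytope."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_online_gradient_ascent(payoff_vectors, eta=0.1):
    """Classical projected online gradient ascent over the simplex:
    O(sqrt(T)) external regret against linear payoffs x -> <x, payoff_t>."""
    d = len(payoff_vectors[0])
    x = np.ones(d) / d                      # start at the simplex centroid
    plays = []
    for payoff_t in payoff_vectors:         # payoff_t revealed after we commit to x
        plays.append(x.copy())
        x = project_to_simplex(x + eta * np.asarray(payoff_t))
    return plays
```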
epsilonrational.bsky.social
Check out our new paper on optimal algorithmic commitments against a distribution of opponents!
ncollina.bsky.social
New paper with @epsilonrational.bsky.social and Jon Schneider! Say you’re playing a repeated game against an opponent who will best-respond to your algorithm, but you only have a prior over their utility. What algorithm should you deploy to maximize your expected utility? arxiv.org/abs/2412.182...
Learning to Play Against Unknown Opponents
We consider the problem of a learning agent who has to repeatedly play a general sum game against a strategic opponent who acts to maximize their own payoff by optimally responding against the learner...
arxiv.org
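A toy, one-shot sketch of the setup described in the quoted post, purely illustrative: commit to a mixed strategy, let each opponent type best-respond, and average your payoff under the prior. The paper itself concerns commitments to algorithms in repeated games, which this does not capture; the function and variable names are mine.

```python
import numpy as np

def expected_utility_of_commitment(x, my_utility, opponent_types, prior):
    """One-shot toy: commit to mixed strategy x, each opponent type
    best-responds to x, and we average our payoff under the prior.

    my_utility[a, b]        -- our payoff when we play a and they play b
    opponent_types[k][a, b] -- type k's payoff in the same situation
    prior[k]                -- probability of type k
    """
    total = 0.0
    for type_utility, p in zip(opponent_types, prior):
        # type k's best response to the committed mixed strategy x
        # (ties are broken arbitrarily here)
        b = int(np.argmax(x @ type_utility))
        total += p * float(x @ my_utility[:, b])
    return total
```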