Pedro Santos
@pedrosantospps.bsky.social
PhD student at @istecnico.bsky.social working on sequential decision-making and reinforcement learning.

https://ppsantos.github.io/
The paper can be found here: arxiv.org/pdf/2409.15128
May 3, 2025 at 8:46 AM
We provide lower and upper bounds on the mismatch between the finite- and infinite-trials formulations of GUMDPs, together with empirical results that support our claims and highlight how the number of trajectories and the structure of the underlying GUMDP influence policy evaluation.
May 3, 2025 at 8:34 AM
We show that the number of trials plays a key role in infinite-horizon GUMDPs and that the expected performance of a given policy depends, in general, on the number of trials.
May 3, 2025 at 8:34 AM
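To give intuition for why the number of trials matters, here is an illustrative Monte Carlo sketch (a toy example with made-up dynamics, transition kernel P, policy pi, and entropy utility f, not the paper's experiments): with K sampled trajectories the empirical occupancy measure is random, so for a nonlinear utility f the expected finite-trials value E[f(d_hat_K)] generally differs across K and from the infinite-trials value f(d_pi).

```python
# Illustrative sketch of the finite-trials vs. infinite-trials gap.
# All quantities below (P, mu, pi, f) are assumptions for this toy example.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, horizon = 2, 2, 0.9, 100

# Hypothetical transition kernel P[s, a, s'], initial distribution mu, uniform policy.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])
mu = np.array([1.0, 0.0])
pi = np.full((n_states, n_actions), 0.5)

def f(d):
    """Hypothetical nonlinear utility: entropy of the occupancy measure."""
    return -np.sum(d * np.log(d + 1e-12))

def empirical_occupancy(K):
    """Average discounted state-action occupancy over K sampled (truncated) trajectories."""
    d = np.zeros((n_states, n_actions))
    for _ in range(K):
        s = rng.choice(n_states, p=mu)
        for t in range(horizon):
            a = rng.choice(n_actions, p=pi[s])
            d[s, a] += (1 - gamma) * gamma**t
            s = rng.choice(n_states, p=P[s, a])
    return d / K

# The expected utility of the empirical occupancy changes with the number of trials K.
for K in (1, 10, 50):
    values = [f(empirical_occupancy(K)) for _ in range(100)]
    print(f"K={K:>2}: E[f(d_hat_K)] ≈ {np.mean(values):.4f}")
```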
We contribute the first analysis on the impact of the number of trials, i.e., the number of randomly sampled trajectories, in infinite-horizon GUMDPs (considering both discounted and average formulations).
May 3, 2025 at 8:34 AM
The general-utility Markov decision processes (GUMDPs) framework generalizes standard MDPs by allowing objective functions that depend on the state-action visitation frequencies induced by a given policy.
May 3, 2025 at 8:34 AM
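To make the objective concrete, here is a minimal sketch (my own toy example, not code from the paper): a general-utility objective evaluates a function f of the policy's discounted state-action occupancy measure, while the standard MDP objective is the special case where f is linear in a fixed reward vector. The transition kernel P, policy pi, and entropy utility below are assumptions chosen for illustration.

```python
# Minimal sketch of a general-utility objective on a toy 2-state, 2-action MDP.
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9

# Hypothetical transition kernel P[s, a, s'] and initial state distribution mu.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])
mu = np.array([1.0, 0.0])

def occupancy(pi):
    """Normalized discounted state-action occupancy measure of policy pi[s, a]."""
    P_pi = np.einsum("sap,sa->sp", P, pi)  # state transition matrix under pi
    d_s = (1 - gamma) * np.linalg.solve(np.eye(n_states) - gamma * P_pi.T, mu)
    return d_s[:, None] * pi               # d(s, a) = d(s) * pi(a|s)

pi = np.full((n_states, n_actions), 0.5)   # uniform policy
d = occupancy(pi)

# Standard MDP objective: linear in the occupancy, f(d) = <r, d>.
r = np.array([[1.0, 0.0], [0.0, 1.0]])
print("linear objective   <r, d>      =", np.sum(r * d))

# General utility: e.g. entropy of the occupancy (a common exploration-style objective).
print("general utility   -<d, log d>  =", -np.sum(d * np.log(d + 1e-12)))
```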