PhD student in the Human Reinforcement Learning team
Laboratoire de Neurosciences Cognitives Computationnelles
Ecole Normale Supérieure (ENS), Paris, France
We aimed to cover all the foundations of the topic in a way that is accessible to a broad audience.
It could serve as the basis for a bachelor-level curriculum on the topic.
Pre-orders are crucial to a book's fate: shorturl.at/Dxbif
I wrote a recap of what we learned each day, for anyone curious about the content or experience ✨
center-decision-sciences.com/feedback/
📍 Aug 12, 1:30–4:30pm
We show that experiential value neglect (Garcia et al., 2023) is robust to changes in how options and outcomes are represented. We also find that people show reduced sensitivity to losses, especially in comparative decision-making.
If you wish to object, you can! Here is the simple step-by-step procedure to follow before this deadline ⤵️
The performance of standard reinforcement learning (RL) algorithms depends on the scale of the rewards they aim to maximize.
Inspired by human cognitive processes, we leverage a cognitive bias, reward range normalization, to develop scale-invariant RL algorithms.
Curious? Have a read!👇
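To give a feel for the idea, here is a minimal, hypothetical sketch (not the paper's actual algorithm): an epsilon-greedy bandit learner that normalizes each reward by the range of rewards observed so far before updating its values. The names `range_normalize` and `run_bandit` are illustrative assumptions; the point is that multiplying all rewards by a constant leaves the learned values unchanged.

```python
import random

def range_normalize(r, r_min, r_max):
    # Map a reward into [0, 1] given the observed range;
    # fall back to 0.5 while the range is degenerate.
    if r_max == r_min:
        return 0.5
    return (r - r_min) / (r_max - r_min)

def run_bandit(true_means, scale=1.0, n_steps=5000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy bandit updating on range-normalized rewards (illustrative sketch)."""
    rng = random.Random(seed)
    q = [0.0] * len(true_means)
    r_min, r_max = float("inf"), float("-inf")
    for _ in range(n_steps):
        if rng.random() < eps:
            a = rng.randrange(len(q))           # explore
        else:
            a = max(range(len(q)), key=q.__getitem__)  # exploit
        # Noisy reward; `scale` multiplies signal and noise alike.
        r = scale * (true_means[a] + rng.gauss(0, 1))
        r_min, r_max = min(r_min, r), max(r_max, r)
        # Update toward the normalized, not the raw, reward.
        q[a] += alpha * (range_normalize(r, r_min, r_max) - q[a])
    return q

# Rescaling all rewards by 1000 leaves the learned values (essentially) unchanged,
# because the normalization divides out the scale.
q_small = run_bandit([0.0, 1.0], scale=1.0)
q_large = run_bandit([0.0, 1.0], scale=1000.0)
```

A standard delta-rule learner on raw rewards would need its learning rate retuned for each reward scale; normalizing by the observed range sidesteps that, which is the intuition the post is pointing at.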