yilundu.bsky.social
Excited to share our work at NeurIPS on how to effectively learn new tasks from very few demonstrations!

We invert demonstrations into the latent space of a compositional set of generative models, allowing us to quickly learn new tasks substantially different from the training tasks.
Learning new tasks with imitation learning often requires hundreds of demos. Check out our #NeurIPS paper in which we learn new tasks from few demos by inverting them into the latent space of a generative model pre-trained on a set of base tasks.
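The core idea — recovering a task representation by inverting a demonstration through a frozen pretrained generative model — can be sketched as follows. This is a minimal illustrative toy, not the paper's actual method or code: the generator here is a stand-in linear map, and all names (`generator`, `z_dim`, the learning rate) are assumptions for illustration.

```python
import numpy as np

# Toy sketch: a frozen generator G maps a latent task vector z to a
# demonstration trajectory; given a new-task demo, we recover z by
# gradient descent on the reconstruction error, leaving G untouched.

rng = np.random.default_rng(0)
z_dim, traj_dim = 4, 16

# Frozen "pretrained" generator: a fixed linear map as a stand-in
# for the compositional generative models trained on base tasks.
W = rng.normal(size=(traj_dim, z_dim))

def generator(z):
    return W @ z

# A demonstration of a new task, produced by some unknown latent.
z_true = rng.normal(size=z_dim)
demo = generator(z_true)

# Invert the demo: minimize ||G(z) - demo||^2 over z only.
z = np.zeros(z_dim)
lr = 0.01
for _ in range(2000):
    residual = generator(z) - demo
    grad = 2 * W.T @ residual  # analytic gradient of the squared error
    z -= lr * grad

recon_error = float(np.linalg.norm(generator(z) - demo))
print(recon_error)  # small after inversion; the recovered z encodes the task
```

With a nonlinear generator the same loop applies, with the gradient obtained by backpropagation through the frozen network.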

avivne.github.io/ftl-igm/
December 6, 2024 at 4:21 PM