cedric
@ccolas.bsky.social
Building autotelic agents from socio-cultural interactions https://ccolas.github.io/
ccolas.bsky.social
induction vs transduction point holds

with induction you can search because you have a metric to optimise (% train examples correct)

with transduction there is no clear metric to guide search / brute force, so the model needs to get it right, or come up with a way to guide its own search
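the induction-side metric mentioned above (% of train examples correct) can be sketched like this — a minimal illustration, assuming candidate programs are plain callables mapping grid to grid; all names here are hypothetical, not from any actual ARC solver:

```python
def train_accuracy(program, train_pairs):
    # fraction of training examples the candidate program gets exactly right;
    # this is the score an induction approach can search / hill climb on
    correct = sum(1 for x, y in train_pairs if program(x) == y)
    return correct / len(train_pairs)

# toy example: the identity transform as a trivial candidate
identity = lambda g: g
pairs = [([[1]], [[1]]), ([[2]], [[3]])]
# identity solves the first pair but not the second -> 0.5
```

a transduction model has no such per-candidate score: it emits an output grid directly, and there is nothing to measure it against until the hidden test answer is revealed.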
ccolas.bsky.social
just checked: on the semi-private set, ryan got 43 (not that far, i admit)
ccolas.bsky.social
ok he did use for loops, so he didn't hill climb, but you can 'filter good candidates' by keeping the solutions that solve 100% of training examples, and submit only those as solutions

with transduction you can't filter

the challenge rules say you can submit only 2 (3?) solutions per problem
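the filter-then-submit idea above can be sketched as follows — a toy illustration under the same assumption (programs as grid -> grid callables), with the attempt cap from the rules as a parameter; function and variable names are made up for the example:

```python
def select_submissions(candidates, train_pairs, test_input, max_attempts=2):
    # keep only programs that solve 100% of the training examples
    perfect = [p for p in candidates
               if all(p(x) == y for x, y in train_pairs)]
    # collect distinct predictions for the test input,
    # respecting the per-problem submission cap
    submissions = []
    for p in perfect:
        pred = p(test_input)
        if pred not in submissions:  # dedupe identical predictions
            submissions.append(pred)
        if len(submissions) == max_attempts:
            break
    return submissions
```

with transduction there is no `perfect` filter to apply: the model's guesses for the test grid can't be checked against anything before submission.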
ccolas.bsky.social
2) he used program synthesis which allows hill climbing on the % of training examples correct

if the o3 prompt that circulates is correct, the o3 score uses transduction (predicting output grid directly), and you can't hill climb there

you can ensemble, but that doesn't help much for hard problems
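the kind of ensembling available to a transduction model is something like majority voting over sampled output grids — a minimal sketch (not o3's actual procedure, which isn't public):

```python
from collections import Counter

def ensemble_vote(predicted_grids):
    # majority vote over sampled output grids;
    # grids are converted to tuples so they can be counted
    keys = [tuple(map(tuple, g)) for g in predicted_grids]
    winner, _ = Counter(keys).most_common(1)[0]
    return [list(row) for row in winner]

# three samples; two agree, so the majority grid wins
preds = [[[1, 1]], [[1, 1]], [[2, 2]]]
# -> [[1, 1]]
```

this helps when the model is mostly right and occasionally slips, but on hard problems where most samples are wrong, voting just amplifies the wrong answer.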
ccolas.bsky.social
this is a misleading comparison for two reasons

1) that guy got 50% on the public test set, which is easier than the private test set where o3 reached 85% (87?)
ccolas.bsky.social
the official testing procedure is 2 or 3 solutions per problem iirc; don't think chollet would have let them brute force it

it seems they don't use program induction, so they can't hill climb on training examples either
ccolas.bsky.social
hmm are there no bookmarks over here? or did i miss them?
ccolas.bsky.social
hope to see you all at the IMOL workshop on sunday!
ccolas.bsky.social
in vancouver for @neuripsconf.bsky.social

looking forward to catching up with friends and meeting new ones!

reach out to chat about:
> open-ended learning
> intrinsic motivations
> exploration and diversity search
> social and cultural learning
> llm agents
> other?
ccolas.bsky.social
hi Melanie,
we have a cool workshop on intrinsically motivated open-ended learning with a blend of cogsci and ai on dec 15

@IMOLNeurIPS2024 on X

see program here: imol-workshop.github.io/pages/program/
ccolas.bsky.social
oh cool, what's the paper? i've been thinking it could be the case and was wondering who wrote about it
ccolas.bsky.social
balancing exploration and exploitation with autotelic rl

autotelic rl is usually concerned with open-ended exploration in the absence of external reward

how should we conduct an open-ended exploration *at the service* of an external task?

deep rl skills required
ccolas.bsky.social
llm-mediated cultural evolution

we wanna study how llm-based agents can be used to facilitate collective intelligence in controlled human experiments where groups of participants collectively find solutions to problems

this requires some background in cogsci + llms
ccolas.bsky.social
we are recruiting interns for a few projects with @pyoudeyer
in bordeaux
> studying llm-mediated cultural evolution with @nisioti_eleni
@Jeremy__Perez

> balancing exploration and exploitation with autotelic rl with @ClementRomac

details and links in 🧵
please share!