Cristian
@cristian-s.bsky.social
Gave up 👍
Reposted by Cristian
Only in America could a ketamine-infused South African oligarch call a Navy captain, astronaut, and sitting US senator "a traitor" for supporting our allies and standing up to Russia...
March 10, 2025 at 4:04 PM
Reposted by Cristian
Few understand this

jvns.ca/blog/2014/06...
January 17, 2025 at 7:10 PM
Reposted by Cristian
"I think we have this idea that we're gonna make friends with bad guys and bend them to our will, and really we're just...friends with bad guys."
--The Diplomat, S2 E4

(I've been binge-watching this, and it is excellent.)
January 18, 2025 at 2:44 AM
Reposted by Cristian
But I heard someone say LLMs are world models?
We live in the ✨ world of tomorrow ✨
January 17, 2025 at 12:55 PM
Reposted by Cristian
Tree at night by firelight. Backyard in Natick, MA. 01-02-2025.

#DansDayOutdoors
January 2, 2025 at 11:47 PM
Reposted by Cristian
Brandon is a wonderful research colleague, and I could not endorse working with him highly enough
📢 My team at Meta (including Yaron Lipman and Ricky Chen) is hiring a postdoctoral researcher to help us build the next generation of flow, transport, and diffusion models! Please apply here and message me:

www.metacareers.com/jobs/1459691...
Postdoctoral Researcher, Fundamental AI Research (PhD)
Meta's mission is to build the future of human connection and the technology that makes it possible.
www.metacareers.com
January 7, 2025 at 4:16 AM
Reposted by Cristian
I have a draft of my introduction to cooperative multi-agent reinforcement learning on arXiv. Check it out and let me know if you have any feedback. The plan is to polish and extend the material into a more comprehensive text with Frans Oliehoek.

arxiv.org/abs/2405.06161
A First Introduction to Cooperative Multi-Agent Reinforcement Learning
Multi-agent reinforcement learning (MARL) has exploded in popularity in recent years. While numerous approaches have been developed, they can be broadly categorized into three main types: centralized ...
arxiv.org
January 7, 2025 at 4:25 PM
Reposted by Cristian
As the year draws to an end, instead of listing my publications I want to shine a spotlight on the commonplace assumption that productivity must always increase. Good research is disruptive, and thinking time is central to high-quality scholarship and necessary for disruptive research.
December 20, 2024 at 11:18 AM
Reposted by Cristian
A few others that resonated:

"We are trying to solve a problem that's too big and refuse to concretize it to make it actually tractable."

"I think RL's too elegant. … It draws us all in with its elegance, and then we get hit with all the other issues that you probably heard from everyone else."
December 25, 2024 at 11:21 PM
Reposted by Cristian
It’s scathing critique season, this time for reinforcement learning. We need this; the science cannot get better without it.

Usual suspects: training brittleness (over-reliance on hyperparameter tuning), bad & slow sims, overemphasis on generality, LLMs dominating the discourse, and tabula rasa RL being hard
E61: NeurIPS 2024 RL meetup Hot takes: "What sucks about RL?"
What do RL researchers complain about after hours at the bar? In this "Hot takes" episode, we find out!
Recorded at The Pearl in downtown Vancouver, during the RL meetup after a day of NeurIPS 2024.
December 25, 2024 at 11:21 PM
Reposted by Cristian
Chomsky, Varoufakis & Greenwald issued a joint declaration calling for Panama, Greenland & Canada to negotiate, trade land for peace, and not provoke a nuclear-armed madman...

Lol
December 24, 2024 at 9:36 AM
Reposted by Cristian
the feds: stop trying to turn this guy into some cool antihero with a badass public image

also the feds: *treat him like they’ve captured the Joker*
December 19, 2024 at 7:20 PM
Reposted by Cristian
Catch my poster tomorrow at the NeurIPS MLSB Workshop! We present a simple (yet effective 😁) multimodal Transformer for molecules, supporting multiple 3D conformations & showing promise for transfer learning.

Interested in molecular representation learning? Let’s chat 👋!
December 15, 2024 at 12:32 AM
Reposted by Cristian
Fresh off the presses:
In "Learning on compressed molecular representations" Jan Weinreich and I looked into whether GZIP performed better than Neural Networks in chemical machine learning tasks. Yes, you've read that right.

TL;DR: Yes, GZIP can perform better than baseline GNNs and MLPs. It can ..
Learning on compressed molecular representations
Last year, a preprint gained notoriety, proposing that a k-nearest neighbour classifier is able to outperform large-language models using compressed text as input and normalised compression distance (...
pubs.rsc.org
November 21, 2024 at 12:58 PM
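For readers wondering what it means for GZIP to compete with neural networks here: the linked paper builds on the compression-based classifier from the preprint it mentions, i.e. k-nearest neighbours over the normalized compression distance (NCD). Below is a minimal sketch of that idea, not the authors' code; the SMILES strings and labels are illustrative placeholders.

```python
import gzip

def clen(s: str) -> int:
    """Length of the gzip-compressed UTF-8 encoding of s."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance: how much better x and y
    compress together than apart (~0 = near-identical, ~1 = unrelated)."""
    cx, cy = clen(x), clen(y)
    return (clen(x + y) - min(cx, cy)) / max(cx, cy)

def knn_predict(query: str, train: list[tuple[str, int]], k: int = 3) -> int:
    """Majority vote over the k training molecules nearest in NCD."""
    neighbours = sorted(train, key=lambda item: ncd(query, item[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Toy usage: classify a molecule by its SMILES string (placeholder data).
train = [("CCO", 0), ("CCN", 0), ("c1ccccc1", 1), ("c1ccccc1O", 1)]
print(knn_predict("c1ccccc1N", train))
```

The appeal of the approach is that there is nothing to train: the compressor supplies the similarity measure, so kNN over NCD makes a strong, nearly hyperparameter-free baseline.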
Reposted by Cristian
Ilya Sutskever calls data the "fossil fuel of AI" – the finite power source that kickstarted the rapid initial rise, but that is now running out and needs to be replaced with more sophisticated and sustainable methods.
x.com/_jasonwei/st...
December 13, 2024 at 10:25 PM
Reposted by Cristian
Most in the industry have been talking about this for upwards of a year. The *current form* of pretraining is dead. We're at a token wall. But scale is here to stay. I'm excited by:
- pretraining with human (?) preferences, merging pre- and post-training
- online exploration for (synthetic) data collection
December 14, 2024 at 3:06 AM
Reposted by Cristian
Is this really the correct citation for the Gaussian distribution? Why is the citation count so low?
December 13, 2024 at 7:37 PM
Reposted by Cristian
We will run out of data for pretraining and see diminishing returns. In many application domains, such as the sciences, we also have to be very careful about what data we pretrain on to be effective. It is important to adaptively generate new data from physical simulators. Excited about the work below
Neural surrogates can accelerate PDE solving but need expensive ground-truth training data. Can we reduce the training data size with active learning (AL)? In our NeurIPS D3S3 poster, we introduce AL4PDE, an extensible AL benchmark for autoregressive neural PDE solvers. 🧵
December 14, 2024 at 1:02 PM
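To make the active-learning idea concrete, here is a minimal sketch of such a loop, assuming an ensemble-disagreement acquisition function; the simulator and linear surrogate are illustrative stand-ins, not the AL4PDE benchmark's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(params: np.ndarray) -> np.ndarray:
    """Stand-in for an expensive ground-truth PDE solve."""
    return np.sin(params.sum(axis=-1, keepdims=True))

def fit_ensemble(X: np.ndarray, y: np.ndarray, n_models: int = 5) -> list:
    """Stand-in surrogate: an ensemble of perturbed least-squares fits."""
    return [np.linalg.lstsq(X + 0.01 * rng.standard_normal(X.shape),
                            y, rcond=None)[0] for _ in range(n_models)]

def acquire(ensemble: list, pool: np.ndarray, batch: int = 4) -> np.ndarray:
    """Select the pool points where the ensemble disagrees most."""
    preds = np.stack([pool @ w for w in ensemble])    # (models, N, 1)
    return np.argsort(preds.var(axis=0).squeeze(-1))[-batch:]

# Active-learning loop: spend the expensive solver budget only on the
# most informative PDE parameter settings.
X = rng.uniform(-1.0, 1.0, size=(8, 3))   # initial parameter samples
y = simulate(X)
for _ in range(3):
    ensemble = fit_ensemble(X, y)
    pool = rng.uniform(-1.0, 1.0, size=(256, 3))
    picked = acquire(ensemble, pool)
    X = np.vstack([X, pool[picked]])
    y = np.vstack([y, simulate(pool[picked])])  # few, targeted solves
```

The point of the loop is the query rule: rather than sampling simulator runs uniformly, each round labels only the parameters the current surrogate is least sure about.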
Reposted by Cristian
I think math books should use more descriptive titles:

- Precalculus: a modern approach with a lot of calculus

- An introduction to probability with only a gentle amount of gaslighting

- Category Theory
December 14, 2024 at 10:04 AM
Reposted by Cristian
A struggle session is when a tankie tries to get up off his futon
December 11, 2024 at 10:43 PM
Reposted by Cristian
I guess we have Boltz-1 to thank for pushing all these other AF3 clones to adopt fully open-source licenses and models.
December 13, 2024 at 4:55 PM