Jonathan Balloch
@balloch.bsky.social
2.4K followers 300 following 120 posts
Robotics PhD Candidate @GeorgiaTech studying #RL and #AI. I mostly tweet about #AI, #robots, #science, and #3dprinting. My thoughts and opinions are my own. jballoch.com
Pinned
balloch.bsky.social
I am a few days late on this, but I'm proud to say: I have passed my dissertation defense for my PhD! 🎉

I'm so thankful for my advisor @markriedl.bsky.social , my committee, my family, my lab, and all those who have supported me through this. Excited for this next chapter!
balloch.bsky.social
Ooo, Peak Design is legit
balloch.bsky.social
For the record, this is why LLMs have been more widely successful and applicable than, say, vision-language-action models, and why VLAs are catching up: this is a recipe that can be applied very broadly, but it only works at a production level if the data domain is VERY thoroughly covered
balloch.bsky.social
The more data you have, the better an embedding space you have, and the more likely your interpolation is to be correct. So you are right that something like the answer is probably in the training data, but you are wrong that the exact answer is in the training data or is searched for.
balloch.bsky.social
Like many social media discussions, what is missing here is nuance. LLMs, like all generative no-prior ML models, are, effectively, interpolating. But in the case of LLMs, they are interpolating in the space of "next token embedding."
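A minimal sketch of what "interpolating in next-token-embedding space" means, assuming a toy decoder with tied input/output embeddings (all sizes and names here are illustrative, not any particular model):

import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 1000, 64
E = rng.normal(size=(vocab, dim))  # token embedding matrix (toy)

def next_token_distribution(h):
    # h: hidden state the model produced for the context, shape (dim,).
    # "Prediction" is a softmax over similarities to every token
    # embedding: soft interpolation over directions the training data
    # carved into the space, not a lookup of a stored exact answer.
    logits = E @ h
    logits = logits - logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# A context vector sitting between two token embeddings spreads its
# probability mass over both: interpolation, not retrieval.
h = 0.5 * E[3] + 0.5 * E[7]
p = next_token_distribution(h)
print(p[3], p[7], p.argmax())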
balloch.bsky.social
Fundamentally you can *have* both, but functionally, when you optimize for multiple objectives, usually only one ends up as the primary. Guzdial's article suggests that the prior push being so attached to undergrad outcomes is a bad primary objective for K-12 students, which is reasonable...
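To make the "only one ends up as the primary" point concrete, here is a toy gradient-descent sketch (loss shapes and numbers are invented for illustration): with an equally weighted sum of two objectives, the steeper one effectively decides where you end up.

def grad_L1(x):  # steep objective, minimized at x = 0
    return 10.0 * x

def grad_L2(x):  # shallow objective, minimized at x = 5
    return 0.1 * (x - 5.0)

x, lr = 2.0, 0.05
for _ in range(200):
    x -= lr * (grad_L1(x) + grad_L2(x))  # equal weights on both terms

print(x)  # ~0.05: essentially L1's optimum, not a compromise between 0 and 5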
balloch.bsky.social
Le Chat underrated
remicadene.bsky.social
I absolutely love @MistralAI. Even the free version is super fast and reports sources 🤩 The pro version is even better.
Reposted by Jonathan Balloch
ml4x.bsky.social
I think a deeper difficulty in ML is the economy of attention. The hundreds of papers released on arXiv each day in ML mean that a reader needs to resort to heuristics to keep up: stuff like trusting a recommender system, only reading famous authors, or scanning for buzzwords.
balloch.bsky.social
Sarah Paine is incredible
balloch.bsky.social
Given what's going on in the world, I think it's time to reread Brave New World
Reposted by Jonathan Balloch
chriswolfvision.bsky.social
Example: pre-train (reward-free) to map temporal distances into distances in latent space, and then fine-tune: map these through a dot product with a latent task description to get a reward function (toy sketch after the refs).

A couple of refs:

openreview.net/forum?id=YGh...
arxiv.org/abs/2110.02719
arxiv.org/abs/2110.15191
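A toy sketch of that recipe (names like phi, W_phi, and z_task are hypothetical stand-ins, not from the cited papers): a frozen, reward-free pre-trained state encoder, plus a reward defined as the dot product of its latent features with a latent task description.

import numpy as np

rng = np.random.default_rng(0)
obs_dim, dim = 8, 32

# Hypothetical stand-ins for the two learned pieces of the recipe:
W_phi = rng.normal(size=(obs_dim, dim)) / np.sqrt(obs_dim)
z_task = rng.normal(size=dim)  # latent task description (would come from a task encoder)

def phi(state):
    # Pre-trained (reward-free) encoder: distances in this latent space
    # are trained to reflect temporal distances between states.
    return np.tanh(state @ W_phi)

def reward(state):
    # Fine-tuning step of the recipe: reward is just the dot product of
    # the state's latent features with the latent task description.
    return float(phi(state) @ z_task)

s = rng.normal(size=obs_dim)  # a raw state observation
print(reward(s))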
balloch.bsky.social
I know exactly what you mean. Especially for us academic-related folks, our recommendation bubble gets ultra tight. My recommendation is to look at some of the "highly followed" topics, which will give a more norm-y feed. But truly, Bluesky needs "Trending"
balloch.bsky.social
Depending on precision, that is a crazy price for 2 high-quality 6-DOF robot arms, to say nothing of them attached as one torso. If the price stays when people start building it, you can be sure I'll be one of them. The Rethink Baxter is a lesson: cumulative error from backlash will be the important thing
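A back-of-the-envelope sketch of the backlash worry (all numbers invented for illustration): angular play at each joint gets multiplied by that joint's distance to the end effector, and the worst-case contributions add up along the chain.

import numpy as np

backlash_deg = np.array([0.1] * 6)  # hypothetical play per joint, degrees
joint_to_tip_m = np.array([0.9, 0.7, 0.5, 0.3, 0.15, 0.05])  # distance from each joint to the tip, meters

worst_case_m = np.sum(np.deg2rad(backlash_deg) * joint_to_tip_m)
print(f"worst-case tip error: {worst_case_m * 1000:.1f} mm")
# ~4.5 mm of slop from just 0.1 degrees of play per joint: on a long
# serial chain, backlash, not motor precision, dominates repeatability.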
balloch.bsky.social
very exciting!
cpaxton.bsky.social
$14k open source humanoid robot upper torso. Writing with a pen on a notebook that you're holding is an impressively challenging task! Also comes with an open, modular, python software stack for robot control and planning.

openpyro-a1.github.io
Reposted by Jonathan Balloch
eugenevinitsky.bsky.social
Hiring researchers and engineers for a stealth, applied research company with a focus on RL x foundation models. Folks already on the team are leading RL / learning researchers. If you think you'd be good at the research needed to get things working in practice, email me
balloch.bsky.social
Raises the question: at what point is multi-task training implicit meta-learning? @chelseafinn.bsky.social
Reposted by Jonathan Balloch
eugenevinitsky.bsky.social
One reason to be intolerant of misleading hype in tech and science is that tolerating the small lies and deception is how you get tolerance of big lies
balloch.bsky.social
super excited to try this out
Reposted by Jonathan Balloch
natolambert.bsky.social
Trying to tell the story behind this explosion of research we are in. An unexpected RL Renaissance.
New talk! Forecasting the Alpaca moment for reasoning models and why the new style of RL training is a far bigger deal than the emergence of RLHF.
YouTube: https://buff.ly/41bVRPp
Reposted by Jonathan Balloch
eugenevinitsky.bsky.social
Easier installation, faster PPO script, new tutorials. The team has put in so much work and I'm excited for y'all to try it.
github.com/Emerge-Lab/g...