Matt Beane
@mattbeane.bsky.social
490 followers · 130 following · 68 posts
Studying work involving intelligent machines, especially robots. @MITSloan PhD, @Ucsb Asst Prof, @Stanford and @MIT Digital Fellow, @Tedtalks @Thinkers50
mattbeane.bsky.social
Computer-mediated carcinisation
Reposted by Matt Beane
emollick.bsky.social
This includes many of my papers, too. The point I am making is that the findings in careful academic research likely represent a lower bound of AI capabilities at this point.
Reposted by Matt Beane
mattbeane.bsky.social
I bet if someone *has* succeeded, it's via spinning up an elicitation-GPT that just drilled you for critical intel, wouldn't let you weasel out via under/overspecified output, then dumped it all back to you in standardized format so you could think faster - basically exporting your extraction algo.
mattbeane.bsky.social
Exactly. If we overheard Dario, Sam, and Demis chatting about certain well-known AI critics, I'd be willing to bet they'd be expressing gratitude. Proving a grouch wrong is a real motivator.
Reposted by Matt Beane
danielrock.bsky.social
Hi Everyone!

We're hosting our Wharton AI and the Future of Work Conference on 5/21-22. Last year was a great event with some of the top papers on AI and work.

Paper submission deadline is 3/3. Come join us! Submit papers here: forms.gle/ozJ5xEaktXDE...
mattbeane.bsky.social
Exciting new hobby project in the offing related to AI and skill. Involves a childhood passion, a wild leap into the unknown, made real via an order from Amazon just now. Will be 100% cool, I will be documenting things, sharing eventually. Feels like April 2023 again!
mattbeane.bsky.social
The Silo is so good. Just superb. This generation's answer to the BSG remake.
Reposted by Matt Beane
rodneyabrooks.bsky.social
My hobby horse. You can simulate a rocket all you want, and use more energy on computation than the actual rocket would, but you won't get to orbit until you ignite rocket fuel. What if all the energy we are spending on simulating learning is not the juice we really need to make intelligence?
Reposted by Matt Beane
simonwillison.net
Here's my end-of-year review of things we learned about LLMs in 2024 - we learned a LOT of things simonwillison.net/2024/Dec/31/...

Table of contents:

    The GPT-4 barrier was comprehensively broken
    Some of those GPT-4 models run on my laptop
    LLM prices crashed, thanks to competition and increased efficiency
    Multimodal vision is common, audio and video are starting to emerge
    Voice and live camera mode are science fiction come to life
    Prompt driven app generation is a commodity already
    Universal access to the best models lasted for just a few short months
    “Agents” still haven’t really happened yet
    Evals really matter
    Apple Intelligence is bad, Apple’s MLX library is excellent
    The rise of inference-scaling “reasoning” models
    Was the best currently available LLM trained in China for less than $6m?
    The environmental impact got better
    The environmental impact got much, much worse
    The year of slop
    Synthetic training data works great
    LLMs somehow got even harder to use
    Knowledge is incredibly unevenly distributed
    LLMs need better criticism
    Everything tagged “llms” on my blog in 2024
Reposted by Matt Beane
teevan.bsky.social
In 2024 we learned a lot about how AI is impacting work. People report that they're saving 30 minutes a day using AI (aka.ms/nfw2024), and randomized controlled trials reveal they’re creating 10% more documents, reading 11% fewer e-mails, and spending 4% less time on e-mail (aka.ms/productivity...).
Reposted by Matt Beane
emollick.bsky.social
Independent evaluations of OpenAI’s o3 suggest that it passed math & reasoning benchmarks that were previously considered far out of reach for AI, including achieving a score on ARC-AGI that was associated with actually achieving AGI (though the creators of the benchmark don’t think o3 is AGI)
mattbeane.bsky.social
Just *one* of the reasons that Blindsight was ahead of its time. Way ahead.
mattbeane.bsky.social
Massive congrats!! So excited to check it out.
Reposted by Matt Beane
rgmcgrath.bsky.social
Join me by the fireside this Friday with Matt Beane as we dive into one of today’s biggest workforce challenges: upskilling at scale. 📈

Link below to hear the full discussion on Friday, December 13 at 11 am EST!

linktr.ee/RitaMcGrath

@mattbeane.bsky.social
mattbeane.bsky.social
I propose a workshop.

Most engineers/CS working on AI presume away well-established, profound brakes on AI diffusion.

Most social scientists presume away how AI use could reshape those brakes.

Let's gather these groups, examine these brakes 1-by-1, make grounded predictions.
Reposted by Matt Beane
emollick.bsky.social
Models like o1 suggest that people won’t generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed

Most folks don’t regularly have a lot of tasks that bump up against the limits of human intelligence, so won’t see it
mattbeane.bsky.social
Grateful for the opportunity to visit and learn from the professionals at the L&DI conference. And very glad to hear you found my talk so valuable, Garth! Means a lot.
garthgilmour.bsky.social
In Dublin for the National Learning & Development Conference.

Some insightful opening remarks, followed by an absolutely stonking keynote by @mattbeane.bsky.social. Crystallised a lot of my worries around preserving expertise in software engineering during the age of GenAI. I have reading to do.
Photos: a speaker at the National Learning & Development Conference.
Reposted by Matt Beane
tomwilliams.phd
I made an HRI Starter Pack!

If you are a Human-Robot Interaction or Social Robotics researcher and I missed you while scrolling through bsky's suggestions, just ping me and I'll add ya.

go.bsky.app/CsnNn3s
mattbeane.bsky.social
Wrote a little something on this in 2012, though I didn't anticipate the main reason for hiring such workers - training data.

www.technologyreview.com/2012/07/18/1...
The Avatar Economy
Are remote workers the brains inside tomorrow’s robots?
mattbeane.bsky.social
David Meyer (v.) /ˈdeɪvɪd ˈmaɪ.ər/

To attribute complex, intentional design or deeper meaning to simple emergent behaviors of large language models, especially when such behaviors are more likely explained by straightforward technical constraints or training artifacts.
mattbeane.bsky.social
They did NOT. Wow. Sign of the times.

And I can verify on your rule! I was so flabbergasted and honored. Your feedback was rich and so helpful. Remain grateful.
mattbeane.bsky.social
I remember *treasuring* the previews. I'd fight to get there on time. Was part of the thrill.

But ads? F*ck that noise. Seriously, straight up evil.