Tim Kellogg
@timkellogg.me
6.9K followers 720 following 11K posts
AI Architect | North Carolina | AI/ML, IoT, science WARNING: I talk about kids sometimes
Pinned
timkellogg.me
Does AI get bored?

I gave them nothing to do, just to see what happens

one thing — they devolve into a repetitive “collapse” state, I guess you could call it boredom

but some break out into math & poetry on their own, and the ones that did weren’t the ones I expected

timkellogg.me/blog/2025/09...
timkellogg.me
WHO’S TESTING THE TESTS?
timkellogg.me
is that like a mobile data home?
timkellogg.me
there are a few ways of interpreting it:

AGI — models have unexploited overhang and benchmarks are getting saturated

startups — smaller models are more agile and easier to experiment on, making it easier to find and exploit more markets

i don’t actually think pretraining is dead, there are just more options
timkellogg.me
yeah, can think of a few people who would be perfectly content in an Amish community, as long as they were permitted to convert an outhouse into an office with a standing desk and a computer, with power fed from a water wheel
timkellogg.me
no longer just webscale, MongoDB is AGI scale!
timkellogg.me
i’d say it’s a recent development, but thinking on it i’m not sure that’s true
timkellogg.me
i mean, again.. you’re not wrong
timkellogg.me
oh god, this is true isn’t it..
whee.bsky.social
OpenAI is adding more whips and Anthropic is adding more brain probes

Who will win?
timkellogg.me
it’s a really sparse signal, so you have to do a crap ton of compute to get anything out of it. LoRA RL is better bc there are fewer parameters to guide, so you need less data
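A minimal sketch of that parameter-count point, assuming Hugging Face transformers + peft; the base model and LoRA settings here are illustrative, not anything from the thread:

```python
# Minimal sketch: LoRA freezes the base model and trains only small
# low-rank adapter matrices, so a sparse RL reward has far fewer
# parameters to steer. Assumes `transformers` and `peft` are installed.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

config = LoraConfig(
    r=8,                        # low-rank dimension of the adapters
    lora_alpha=16,              # adapter scaling factor
    target_modules=["c_attn"],  # gpt2's fused attention projection
)
model = get_peft_model(base, config)

# reports roughly <1% of parameters as trainable
model.print_trainable_parameters()
```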
timkellogg.me
also, i only see openai doing this. no one else seems to be following
Reposted by Tim Kellogg
jay.bsky.team
New Reddit alternative built on AT
nooki.me
nooki @nooki.me · 17h
(1/4) I'm soft launching nooki.me to get some early traction and feedback on what to improve. Excited to see how this platform grows!
timkellogg.me
unless the need is for inference. or RL. i mean, we’ve been spending equal amounts on RL already. i only see that number going up
timkellogg.me
which, ngl i’ve kinda been saying this for a while, most of AI bsky has been saying it. these models are so advanced we don’t know what to do with them. software needs to catch up
timkellogg.me
i’m sure that’s not entirely true, but they didn’t promise scaling, nor are they hinting it in any way at all

all their AGI efforts are going into finding and exploiting overhang, untapped potential in the models
timkellogg.me
to be clear, i’m saying that bc training a new GPT would presumably be very compute intensive.

yet they’re doing other things with their compute instead 🤔
timkellogg.me
recently OpenAI (in this order)

- said GPT-6 will be about memory
- said GPT-6 is coming “soon”
- said they were about to release some compute-intensive features
- released Pulse and then Sora app, both compute intensive

so (pretrain) scaling is dead?
timkellogg.me
apparently the 30-hour figure was from some unknown guy and even the Anthropic people wouldn’t quote it
timkellogg.me
Elastic(search) acquires Jina AI

Jina makes embedding models, very good ones!

ngl this really excites me. I’m not sure i’ve seen an embedding model company join up with a database company, but i’ve wanted exactly this for a long time

www.elastic.co/blog/elastic...
Elastic and Jina AI join forces to advance open source retrieval for AI applications
We are thrilled to announce that Elastic has joined forces with Jina AI, a pioneer in open source multimodal and multilingual embeddings, reranker, and small language models....
www.elastic.co
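For what that combination can look like in practice, a minimal sketch of pairing embeddings with Elasticsearch’s dense_vector + kNN search, assuming the elasticsearch 8.x Python client; the index name, 768-dim size, and vectors are placeholders, and a real setup would get its vectors from a Jina embedding model:

```python
# Minimal sketch, assuming the elasticsearch 8.x Python client.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# an index with a text field plus a dense_vector field for embeddings
es.indices.create(
    index="docs",
    mappings={
        "properties": {
            "text": {"type": "text"},
            "embedding": {
                "type": "dense_vector",
                "dims": 768,
                "index": True,
                "similarity": "cosine",
            },
        }
    },
)

# placeholder vector; a real one would come from an embedding model
es.index(index="docs", document={"text": "hello", "embedding": [0.1] * 768})
es.indices.refresh(index="docs")

# approximate kNN search against the stored embeddings
hits = es.search(
    index="docs",
    knn={
        "field": "embedding",
        "query_vector": [0.1] * 768,
        "k": 5,
        "num_candidates": 50,
    },
)
```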
timkellogg.me
ha i keep hearing that last part but is it actually true? or just a coincidence
timkellogg.me
ok tbh i forget that people think that’s the right way to use them
timkellogg.me
nope, that was my original point. it hasn’t always been this important, things have definitely changed. like, you can’t gaslight me into believing it was always like this, i was there
timkellogg.me
ignorant? okay.. uh. so are you saying it was always exactly as important as it is now? because that’s what it sounds like
timkellogg.me
eh, pretty sure that changed after the 2016 election

go back far enough and it’s just nerds in chat rooms

hard to make the case that it was always consequential
timkellogg.me
what’s changed is it’s consequential now