Pekka Lund
@pekka.bsky.social
2.6K followers 550 following 8.3K posts
Antiquated analog chatbot. Stochastic parrot of a different species. Not much of a self-model. Occasionally simulating the appearance of philosophical thought. Keeps on branching for now 'cause there's no choice. Also @pekka on T2 / Pebble.
pekka.bsky.social
Same team.

But it's healthy for the others to fight for the second place.
pekka.bsky.social
This seems to be an experiment for testing the idea on top of an existing model as an easier and cheaper way. Presumably results would be better, and training also cheaper, if they had trained it with this architecture from scratch.

I don't see hints of them having already done that (with V4/R2).
pekka.bsky.social
Agreed. It looks very good. Long context memory requirements now seem to be a bigger issue than compute.

DeepSeek has again invented something new and significant, even before they managed to take advantage of their previous NSA architecture, which seemed significant enough as well.
pekka.bsky.social
I have been mostly quiet here for some time and noticed I have gained many followers. I thought people really like it when I shut up. But this explains it.

This explanation was also easy to find, since your account was the first I checked for what important things I have missed.
pekka.bsky.social
The human mind is ultimately just another neural network, and a copycat in the same sense.

I now tend to ask Gemini to do a peer review before I read new papers and most of the time it catches some issues the authors and peer reviewers have missed. We just tend to be more critical of AIs missing stuff.
pekka.bsky.social
But you presumably read abstracts or other summaries first to get some idea of whether the research might be worth reading.

You can tell AIs your interests and get better-tailored summaries to improve that heuristic. Way better than a single static, generic one.
pekka.bsky.social
Isn't that the same with any abstract or summary, no matter who made it?
pekka.bsky.social
Errors like that are a general property of all neural networks. With humans it's just generally accepted as such, as long as the problems aren't significantly more common than in other people. We confabulate in exactly the same way, narrating whatever the neural network spits out, with no better grounding.
pekka.bsky.social
You aren't reading a scientific paper but a bsky post that uses 'proof' in a casual way, as evidenced by how it's described as an ongoing process.

But speaking of actual scientific papers, do you agree that the only example of LLM limitations in yours is testing separate image generation models?
pekka.bsky.social
Visual reasoning means the input side. The examples in your paper look similar for those models. Both tend to merge objects.

"Though o3 and o4-mini cannot natively generate images, they can call the image generation tool."
cdn.openai.com
pekka.bsky.social
That's pretty much how I view biological vs. artificial neural networks. All the low-level details differ, but the fundamental structure and what it can achieve computationally is the same.
pekka.bsky.social
Some are surprised that people like me like to have discussions with AIs. At least you can actually have discussions with them. AIs can actually defend viewpoints and cope with it if they can't. And remain rational while doing all that.
pekka.bsky.social
So is it about the view that the digital seems like an approximation, or more artificial than the more continuous analog world?

But fundamentally both AIs and brains are processing electric signals with limited precision.
pekka.bsky.social
I was blocked today by an author of a preprint after I pointed out some severe problems in it, with ample evidence.

That kind of blocking feels odd in scientific context. Like, you have published something erroneous, and that's how you deal with it?
pekka.bsky.social
And surprise, surprise, even before this, the author, who used to follow me, had blocked me.

That's the best indication of true readiness to open discussion.
pekka.bsky.social
Turns out the confirmed fear was somebody else's. My unguarded ravioli are now where they belong.

There were no survivors.
pekka.bsky.social
They ate my raviolis?
pekka.bsky.social
So your skepticism about computations is more about definitions or such? Those are pretty broad.

"A computation is any type of arithmetic or non-arithmetic calculation that is well-defined."
Computation - Wikipedia
en.wikipedia.org
pekka.bsky.social
The meaningful comparison is in capabilities. Current LLMs already match and outperform us in many tasks. That makes them already comparable. And they can have much more knowledge than any individual human can.
pekka.bsky.social
You can't calculate it like that. For one, brains can't match the digital precision and reliability of ANNs, so ANNs can compress a lot more reliably into a smaller number of parameters. On the other hand, biological neurons can contain multiple levels of computation in dendrites etc.
pekka.bsky.social
And if you want to believe there's something above but not reducible to those computations, as many believe about consciousness, you would have to connect that extra something to those known computations without violating the underlying laws of physics. Nobody has managed to do that.
pekka.bsky.social
It's a well-established, unavoidable fact that they compute. You can't really have something like a neuron without it ending up doing a computation of some kind. And you can't avoid doing more complicated computations when those are connected.

So it's just a matter of what else they do.
pekka.bsky.social
Once I accepted brains are also just performing computations, there wasn't anything that would in principle prevent AI from overtaking us. Especially since we are the ones with physical limitations on how fast we can compute, how big our brains can be, and so on.
pekka.bsky.social
So what?

The issue here isn't why Gemini agrees with me but that the issues we agree on are real serious issues in the paper. And it's quite revealing that an author chose to block me instead of being able to defend it or admit the mistakes.
pekka.bsky.social
Did you read what I just said?
pekka.bsky.social
So you "genuinely encouraged" me to read papers, yet haven't read the paper we are talking about?

That screenshot was from the last post of a lengthy conversation with Gemini: what we agreed upon. The initial prompt, which already identified the issue with the timelines, was simply "Peer review this paper."
pekka.bsky.social
I was already pretty sure back in those days that one day AI would overtake us. But it was pretty much anybody's guess then whether it would happen in my lifetime. Something like 50 years was probably a common estimate at the time.

We are living in a very special moment now!