Socrathustra
@socrathustra.bsky.social
Software engineer with a philosophy degree
Metal drummer/singer
Enjoyer of good things
No thanks
April 29, 2024 at 11:59 PM
There are a lot of strong statements about what LLMs do and don't know, when in fact it's not certain that our own knowledge uses a different mechanism. If the statistical models encapsulate enough of the context of when A follows B, that's almost a "why."
March 30, 2024 at 2:57 PM
I don't think those two things are anywhere near as different as you think they are.
March 30, 2024 at 1:18 AM
There are enough real problems with AI that need solving without inventing more. Inventing problems makes raising any actual issues much harder, because that critique gets lost in a pile of baseless claims.
March 30, 2024 at 1:17 AM
While I agree people are overeager, what would you have done differently that would have produced the high-quality, indispensable product we have today in Google Maps and the like, while avoiding the period when it suggested driving into lakes?
March 29, 2024 at 10:38 PM
We're not so different. Kids raised in a cult will confidently answer basic questions about the world incorrectly. I'm not saying LLMs are fundamentally the same as us, only that they're likely engaging in similar activities within a narrowly defined area.
March 29, 2024 at 10:35 PM
To be clear, I think there may be some fundamental behaviors missing from LLMs that could be added in future iterations on the concept. Even so, I think the success of LLMs and generative AI stems in part from the fact that they are doing many of the same things we do, in their own ways.
March 29, 2024 at 10:33 PM
They certainly don't have subjective experience and don't "think," but understanding (or an analogous process) may be extricable from thought. A lot of scenarios are reducible to "spicy autocomplete," and my suspicion is that more complex scenarios are just the ghost pepper of autocomplete.
March 29, 2024 at 10:31 PM
Frankly I think most anti-AI reactionaries do not understand the term "understanding" and need to study epistemology and the philosophy of language. I would recommend Kuhn and Wittgenstein to see how knowledge is fuzzy and socially constructed.
March 29, 2024 at 9:50 PM
Arguing efficiency in this manner isn't a good angle. You'd have to ask how many resources are expended for an equal quantity of art (if art could be quantified). I suspect this calculation would work in the data center's favor, and it will only get more favorable over time.
March 29, 2024 at 9:45 PM
I'm familiar. My take is that what we call understanding is far more tenuous and probabilistic than many believe, and LLMs are at the very least doing something similar.
March 29, 2024 at 9:31 PM
The alarmists around LLMs are grossly ignorant of epistemology and far too proud of their own ability to know things. It is far more likely that some analogue of understanding occurs than that these models produce correct-ish answers by coincidence. There's still plenty of room to improve, of course.
March 29, 2024 at 9:26 PM
What do you believe excludes LLMs from knowing* things? Is it different from knowing Newtonian physics in the age before relativity? I feel the AI experts may be ignorant of epistemology.

*given the lack of the subjective experience of knowing things, any "knowing" is an analogous process.
March 29, 2024 at 9:17 PM
There's still a strong case that these systems do in fact know things in some analogous sense, but they don't have, say, good ways of handling lower confidence yet. I think it's a lot more likely that they know* things than that they produce completely fake information.
March 29, 2024 at 9:12 PM