simsa0
@simsa0.bsky.social
A dishwasher. The other times I sit by the fire and offer the passing ghosts some coffee.

Blog : https://simsa01.wordpress.com/
I subscribe to the suggestion that Westerners need to keep their souls exclusive (and exquisite) due to their religion. As for me, I have lived with AI since the late 1980s (game engines). I don't see reasons to "fear" it & therefore see no need to prescribe a sanitized nomenclature for talking about it.
14-14
January 10, 2026 at 8:29 PM
Joi Ito, "Why Westerners Fear Robots and the Japanese Do Not" (2018) www.wired.com/story/ideas-...
13-14
Why Westerners Fear Robots and the Japanese Do Not
The hierarchies of Judeo-Christian religions mean that those cultures tend to fear their overlords. Beliefs like Shinto and Buddhism are more conducive to faith in peaceful coexistence.
www.wired.com
January 10, 2026 at 8:29 PM
of our common humanity, rather suggests that the authors are quite unsure of what it may (and can) mean to be a human being in a technologically complex world made by ourselves.

• I leave you with an outlook on the "soul of robots" that I find far more adequate and far more to my liking:
12-14
January 10, 2026 at 8:29 PM
• The authors not only fail to recognize the underlying problems in the AI-human interaction; they fail to understand what anthropomorphizing is about and what is involved with it. Merely dropping the term and suggesting that this, in itself, were somehow awful, and in particular: a betrayal >
11-14
January 10, 2026 at 8:29 PM
human being but a machine, then how can we be certain that we ourselves are "human" at all and not some machine displaying a complex behaviour towards ourselves as observers (a variant of the brain-in-a-vat problem)?
10-14
January 10, 2026 at 8:29 PM
ascribe intelligence, intentionality (etc.) to that entity. But then we cannot be certain that what we claim to be a human person in front of us is not in reality a sophisticated machine (the Other Minds Problem in epistemology). And when we cannot be certain that what is in front of us is not a >
9-14
January 10, 2026 at 8:29 PM
the output of the machine in comparison to that of a human being. But the point is not that the machine is supposed to be intelligent (etc.) when it can "trick" the observer, that is banal. The interesting point is that given a certain complexity of behaviour in front of us we cannot but >
8-14
January 10, 2026 at 8:29 PM
• With the above in mind, what people usually have in mind (and fear) with regard to the Turing Test turns out rather awry. The Turing Test is usually seen as a criterion for when to ascribe "intelligent behaviour" to a machine, viz., when a human observer cannot detect a difference between >
7-14
January 10, 2026 at 8:29 PM
turn our time, affection, care, and attention to: motorbikes, machines, art, craft, etc. In all these cases it would be awful to suggest that we cannot interact with these objects that way because we would thereby "anthropomorphize" them.
6-14
January 10, 2026 at 8:29 PM
This is not something defective on the part of human beings, but something very common and natural to us. Then there is a mechanism that Sherry Turkle once summarized neatly as follows: "We nurture what we love, but we [also] love what we nurture." That is, we tend to bond with whatever we >
5-14
January 10, 2026 at 8:29 PM
interact with their environment. Such ascriptions occur towards pets, nature, deities, etc. One reason for this may be that human beings cannot but ascribe consciousness, intentionality, soul, intelligence, etc., towards something that displays a sufficiently complex behaviour towards them.
4-14
January 10, 2026 at 8:29 PM
no means belittling human beings. Humans are not degraded when spoken of (and about) in functional and statistical terms.

• Describing a machine's reaction to human input in terms of "Anthropomorphisms" is one instance or application of a rather typical way human beings in general >
3-14
January 10, 2026 at 8:29 PM
of the AI-human interaction. Allow me to offer a few points here.

• Every description of understanding and communication, suitably generalized, describes the acts in functional terms in which the "humanness" of the human agent is lost. That is on purpose, a feature of using models, and by >
2-14
January 10, 2026 at 8:29 PM
Sorry, but I find this jeremiad rather typical of a certain fashion of reasoning in the humanities. It is not just that the authors cannot come up with more plausible ways of talking about AI; their allegation of anthropomorphism is not even a useful argument in assessing the state >
1-14
January 10, 2026 at 8:29 PM
Reposted by simsa0
January 10, 2026 at 12:23 AM
(Already in March 2025 I wrote about how the U.S. might create an Art. 5 incident: simsa01.wordpress.com/2025/03/25/a...
and about the folly of some military experts that Russia might test NATO by pinching Estonia: simsa01.wordpress.com/2025/03/25/r... )
January 6, 2026 at 1:34 AM
Thank you.
January 5, 2026 at 3:50 PM
Reposted by simsa0
P.S. The tweet from Ukraine.
January 4, 2026 at 11:38 PM