James Fairbairn
@james.exitmusic.world
Fledgling corporate shaman. Pay attention, help others, let go. Product ∪ strategy; seeking ways to heal people, society and our biosphere. 🇭🇰🇬🇧 in […]

🌉 bridged from https://mastodon.exitmusic.world/@james on the fediverse by https://fed.brid.gy/
As the political class is fond of reminding us, we’re all in this together. In a few short years they’re going to discover what that really means
December 27, 2025 at 4:28 AM
I’m kidding, obviously. Not having another Wrap to look at is one of the lovely things about this platform :)
December 19, 2025 at 1:52 AM
@phae congrats - must be nice! :)
December 11, 2025 at 1:36 AM
@dlakelan hehe, me too… what do you want to bet the world will mostly be running on ThinkCentre Tiny M710qs in 2045? :)
December 8, 2025 at 5:44 AM
@dlakelan I honestly think, with part of my being at least, that the homelabbers will inherit the earth. That there is a whole family of future professions there when *this* all implodes.

Also, surely there’s someone doing open source smart thermostat/sensor/switch projects? I refuse to believe […]
Original post on mastodon.exitmusic.world
December 8, 2025 at 5:32 AM
Reposted by James Fairbairn
@james On beauty being relational, here’s some relevant, interesting Bateson from a talk in 1979 titled “What is Epistemology?” His contention is that the more relational, interdependent, and integrated things are, the more beautiful we perceive them to be, and […]

[Original post on social.coop]
December 7, 2025 at 2:23 PM
And that people would rather we improved *this real reality* than replaced it with a supposedly better non-real reality, controlled by a company extracting profit from people's experience of it
December 6, 2025 at 6:05 AM
I stuck a one-page version of this on the web over here, in case anyone wants it https://exitmusic.world/ai-researchers-wrong-theory-of-cognition-is-making-us-worry-about-the-wrong
AI researchers' wrong theory of cognition is making us worry about the wrong kind of AI apocalypse
_I originally wrote this (in 18 minutes!) as a stream-of-consciousness Mastodon thread. Thought it might be worth putting it all together here though._

“Cognitive task” is an ontological sleight-of-hand used to obscure the distinction between the way a human would perform the task and the nature of the task itself. This mask is then used to conflate human cognition with what neural networks do, when in fact neural networks only work similarly to a small subset of animal cognition.

For example, doing arithmetic is a “cognitive task” for humans, but nobody (or very few) would argue that a calculator doing the same arithmetic is using cognition to do so.

The thing is, animal cognition is inextricably an embodied process. Affect is not a side-effect of cognition but its root. The fact that we have computerised the production of outputs plausibly similar to those of animal cognition only means that we anthropomorphise the process that produces those plausible outputs. We wrongly assign intention and goals to AI models like LLMs because we incorrectly assume the nature of their insides based on their outsides.

It is meaningless to talk of AI goals or intent, or at least meaningless to think of them as in any way isomorphic to animal goals or intent, as the mechanism for the production of goals and intent fundamentally does not exist in AI models.

This false theory of cognition is extremely dangerous, because it leads us to waste time on fallacies like AGI/superintelligence wiping out humanity through some misplaced intent + agency. In reality the risk is both more proximate and more mundane than that, and is the same risk that has been playing out for at least hundreds of years: we have repeatedly demonstrated our willingness to deploy technologies whose socioeconomic impact we do not understand and cannot forecast, in order to obtain a profit.

The AI apocalypse looks much more like an accelerated runaway-IT problem: replacing components of complex socioeconomic infrastructure (that might previously have been driven by people or technology) with AI will cause massive damage. This damage will come from the unpredictable failure modes of systems that depend on certain kinds of AI, failures which, in a context of complexity, will cause harmful ripple effects. The damage will be exacerbated by (1) the continued substitution of software for people in decision-making where there is an incentive to delegate accountability to a system that can't be questioned, and (2) the proliferation of software problems that are impossible to diagnose and impossible to fix.

The good news about this understanding of the AI apocalypse is that we are not fighting an emergent, superior machine intelligence. We are only fighting the dumbest, greediest instincts our human society produces. And that is something we know how to do.

Happy weekend!
exitmusic.world
December 5, 2025 at 7:50 PM
OK goddammit I wrote it here with a slightly different slant so I could avoid the LW ragebait https://mastodon.exitmusic.world/@james/115668538242850545
James Fairbairn (@[email protected])
"Cognitive task" is an ontological sleight-of-hand used to obscure the distinction between the way a human would perform the task, and the nature of the task itself.
mastodon.exitmusic.world
December 5, 2025 at 7:31 PM