André Brito
andre-brito.bsky.social
I teach in UX and storytelling, do the odd video documentary, and am a centrist with an eye to the left and no political allegiance.
They need to convince investors that AI is worth it before people realise that the bubble could actually burst (or at least deflate). Those data centers aren't going to pay themselves.
February 6, 2026 at 6:20 PM
Framing human intelligence (our reference) as compute is part of the problem, though. The physical body needs stuff, and consciousness translates that into what needs to be done (and how). Compute is only part of it.
January 26, 2026 at 12:16 AM
A big takeaway was the disproportionate amount of energy the models used as the complexity of the problems increased (not just the problem itself, but the absence of an existing answer in their DB), and then how they counterintuitively "gave up" resources.
January 26, 2026 at 12:07 AM
I agree "never" [will LLMs build up to full-fledged AGI] is too strong a word, and AGI could eventually be a scientific black swan. But if that "eventually" is in 50 years, I would wager AGI will stem from different tech than LLMs, which is why I can't dismiss it. But hey, you might be right. Good chat!
January 25, 2026 at 11:35 PM
Based on the tensions this is causing, the fact that money invested in AI isn't an endless stream, and the public sentiment (jobs lost, feeling like a reverse-centaur or machine servant...), my guess is that we'll see more of Sam Altman insisting that the "G" in AGI is not so important.
January 25, 2026 at 11:29 PM
There's no definitive definition of "intelligence", the Turing test has been passed by LLMs, and yet we know they're not yet "intelligent". And to get here, data centres are already siphoning enough electricity to power small towns (not to mention water). So, at this rate, something will have to give.
January 25, 2026 at 11:21 PM
The paper is evidence of a particular plateau at a point in time. And GPT-3.5's retirement - when its launch was hyped as if we were on the cusp of AGI - showed how much of LLMs' capacity was more aspirational than real. But there's more.
January 25, 2026 at 11:11 PM
Of course. But, as things stand, evidence still backs that assertion more than LLMs building up to AGI. This paper is good at showing how very far from it we are (and again, we aren't even debating the "consciousness" bit): www.google.com/url?sa=t&sou...
January 25, 2026 at 7:31 PM
Until AGI is achieved, it's just a claim. Nobody has to prove it doesn't exist until it does. That's how the burden of proof works. Otherwise, we go in rhetorical circles. In the meantime, those (so far) false claims have cost actual jobs and harmed the environment. Questioning their validity doesn't.
January 25, 2026 at 7:09 PM
But hypothetical claims cannot be equated with evidence, right? The whole onus of proof lies with the claimant. And right now, "AGI" is not living up to the hype, and that hype has already cost a lot of jobs and livelihoods with little to show for it, as far as "intelligence" goes.
January 25, 2026 at 3:51 PM
It's also the fact that biological intelligence and consciousness are very different, elusive concepts (no definitive definition for either), with millions of years of evolutionary iterations behind them. That alone should warrant caution over claims that silicon-based, resource-intensive compute is anywhere close.
January 25, 2026 at 10:58 AM
Thanks for the reply and follow (just followed back). And excellent article, by the way. I'm sharing it with my students 🙂
January 23, 2026 at 5:49 PM
I was thinking, specifically, about the clients the article mentions as "already here", i.e. actual users. The reason they're not returning is likely a pain point (or more) in the current journey. Marketing can ask customers what's wrong, but so can UX designers - with the added benefit that they can also fix it.
January 23, 2026 at 3:18 PM
May I respectfully suggest that, before working on marketing, you should try to improve your User Experience (UX) design, as it tackles exactly the issues you mention, but from users' perspective?
January 23, 2026 at 2:39 PM
The more reverse centaurs they make out of people, the less time people waste on unhealthy behaviours like, you know, thinking, or writing about AI bubble bursts/deflation!
January 20, 2026 at 1:33 PM