Greg Tuck
gregwa.bsky.social
Escaping from the other place. Recovering academic (film theory/philosophy) but returning to my first love (biological sciences) and wondering what it's all about. None the wiser, but better informed.
I'm still finding it hard to process that he actually wrote that post. It is either a symptom of serious cognitive decline or shows a level of moral debasement so extreme it can't just be written off as Trump being Trump.
December 15, 2025 at 6:55 PM
I didn't say that, and why you would think I did is a bit bizarre. I have no problem believing she has vile politics, but the point is she is using the claim for self-promotion.
December 15, 2025 at 6:35 PM
The real joke here is that the porn star is using Reform to create publicity for her, not them, and they haven't twigged. She'd support Your Party if she thought it would gain her more coverage with her target market, but Reform are clearly more on brand.
December 15, 2025 at 12:20 PM
Maybe we have misinterpreted Voltaire's call that 'one must cultivate one's own garden' as simply a rejection of idealising or totalising political philosophies, when it is equally a reminder of the active need to cultivate/maintain any political/social formation.
December 15, 2025 at 10:57 AM
Apologies if you have been sent this many times already, but here is an analysis that claims inference is highly profitable. It is well beyond my pay grade to judge whether its assumptions are reasonable or this is just more AI boosterism. Anyone got a response? martinalderson.com/posts/are-op...
Are OpenAI and Anthropic Really Losing Money on Inference?
Deconstructing the real costs of running AI inference at scale. My napkin math suggests the economics might be far more profitable than commonly claimed.
martinalderson.com
December 15, 2025 at 10:43 AM
Interesting. Well, there is certainly a massive difference between the demands and costs of training compared to inference, but I'm not sure how divorced the economics of one is from the other in the real world. I think I'll have to read this again (and very slowly!) before passing further comment.
December 15, 2025 at 10:39 AM
No. Just no. Final warning.
December 15, 2025 at 9:31 AM
Latest reports on 'semantic leakage' and work by cryptographers showing the basic impossibility of making this stuff safe from corruption and malicious attack should be the final nail in the coffin, but it won't be. Too much money has been spent, and too much greed, FOMO and zero-sum thinking dominate.
December 15, 2025 at 8:25 AM
So on top of the errors and hallucinations, LLMs now seem easy to maliciously corrupt. Add to this the recent work from cryptography demonstrating that filtering malicious prompts is doomed to failure, and it's pretty damning, isn't it. It's a tech that can't live up to the promise, hiding behind sunk cost.
December 14, 2025 at 11:11 PM
It's the Tony Hancock fantasy from the Blood Donor episode where he claims to be "100% Anglo Saxon with a dash of Viking"
December 14, 2025 at 10:59 PM
While massively subsidised, maybe, but I wonder how much general use LLMs will have once there is any attempt to make them profitable? Also, even if they remain giveaways, that usefulness might have a shelf life, as they seem fairly easy to sabotage and hard to patch.
December 14, 2025 at 10:47 PM
Evidence that massive wealth magnifies rather than ameliorates anxieties over aging and death, and does nothing to shield you from fashionable idiocy.
December 14, 2025 at 10:35 PM
Gary Marcus in his latest substack provides pretty damning (and referenced) evidence that LLMs are eminently corruptible. Even if they work now in ways one thinks are reliable enough, one can't assume that is going to continue. This isn't just about hallucination or errors but manipulation.
December 14, 2025 at 10:22 PM
Do you follow Gary Marcus? He has a substack just out on the growing evidence of corruptibility in LLMs that is pretty damning. They seem to be capable of subliminal learning, such that weird correlations from one machine can be fed into another, giving control over the results. They are not safe.
December 14, 2025 at 10:12 PM
And no, in that the investment in fusion has been a few tens of billions, whereas AI has got to be knocking on a trillion.
December 14, 2025 at 8:25 PM
I have a feeling GenAI is going to be like nuclear fusion, a technology with amazing world changing promise, if only we can work out the minor details. Half a century later we are closer to fusion energy, but we still need to just work out the minor details.
December 14, 2025 at 7:26 PM
The site is shit but I like the restricted choice as it stops you desperately scrolling. Also I've seen two fantastic series (Slow Horses, Severance), four enjoyable ones (Bad Sisters, Silo, Ted Lasso, Murder Bot) and a few decent movies which compared to Netflix or Prime isn't a bad hit rate.
December 14, 2025 at 4:37 PM
I remember it well from the 70s. From a time when heavyweight journalism was still a mainstay of popular television.
December 14, 2025 at 11:38 AM
And all the gold points to his pathological narcissism being at the extreme end of the 'grandiose' variety.
December 14, 2025 at 11:26 AM
It's the sneaky way they made them look exactly like the sort of glasses you would associate with perverts, just to put you off the scent, that is the dastardly clever part.
December 13, 2025 at 12:07 AM
When the AI bubble bursts there is a fair chance it will take crypto with it, as massive debts get called in that will demand proper cash rather than digital IOUs. I fear the madness will get a lot worse before that, and the mess afterwards will be grim, but there will be a better political space.
December 12, 2025 at 2:22 PM
They have as much chance of their glorified auto-complete getting to AGI as racehorse breeders getting so good one of their mares gives birth to a steam engine. It's just snake oil, to keep up share value and maintain the fantasy that CEOs can do without the real people who actually create value.
December 11, 2025 at 10:58 PM
And they appear to be suspended in thin air without any material means of support, rather like their stock valuations.
December 11, 2025 at 6:17 PM
Of course it is inevitable but I think part of the issue is a growing sense that nowadays only nepo babies stand a chance of getting a break, especially in the arts. The days of working class kids having the time and money (usually by being on the dole) to develop their creativity seem long gone.
December 11, 2025 at 12:19 PM