Stuart Gray
@sgray.bsky.social
He/Him. AI Wrangler. Web Geek. F1 Fan. All views my own.

🤖 AI, LLMs, GenAI, NLP
🐍 Python Dev
🚀 Indie Hacker
🎮 Game Dev, ProcGen, Unity, C#
🏎️ F1 Fan
🇬🇧 UK Based

🦣 mastodonapp.uk/@StuartGray
✖️ x.com/StuartGray (inactive)
No, not what I meant.

Cameras lack the response time & sensitivity of human eyes (and they’re really only a baseline, we should be aiming for better).

The best example is sudden changes in brightness & contrast, e.g. bright lights, dark tunnels, or bright sun going in and out of clouds.
December 2, 2025 at 8:40 PM
It’s not one or the other, and never has been.

Current tech needs a blend of sensors working together, each covering the others’ weaknesses.

Cameras aren’t remotely close to human eye capability, and show no sign of reaching that level any time soon.
December 2, 2025 at 6:03 PM
As a non-journalist, doesn’t this have parallels with other domains with deep specialisation?

It kind of feels like there should be some existing rules or guidelines for being general or specific?

E.g. Medics > Doctors > Surgeons > Cosmetic Surgeons
December 2, 2025 at 3:14 PM
Reposted by Stuart Gray
This chart is helpful ... I guess? One thing this thread has made clear is that people's extremely intense opinions about AI are not the outgrowth of a clear understanding.
I’m definitely oversimplifying the science, but it’s a set/subset relationship: an LLM is a set of machine learning decision-making algorithms.

This is a chart from @colin-fraser.net
December 1, 2025 at 10:14 PM
Reposted by Stuart Gray
If you mention the water cost of books, and the fact that there's a huge industry in shipping and warehousing unused textbooks and scam "bestsellers", someone will inevitably come out of the woodwork to say children can't learn from screens and justify the waste 🫠
December 1, 2025 at 10:08 AM
Hmm, yeah, great line and he’s not wrong about the situation he’s describing, but it’s important to distinguish between ProcGen used well vs. poorly.

Everything he says he likes can be found in, say, Dwarf Fortress, *the* iconic ProcGen game, which is done well because the devs care about it.
November 30, 2025 at 11:46 PM
I think the main issue with any automation is that the better it gets, the fewer people you need to do the pointing.

If AI keeps improving short of AGI, ultimately we’re heading for an explosion of solo-run or people-light orgs, without necessarily enough customers to support them.
November 30, 2025 at 9:00 AM
No, not in the slightest, and I strongly suspect the majority of the most vocal anti-AI proponents are indirectly using & benefiting from some form of AI on a daily basis without even knowing about it.

In real life they’re a tiny minority at present.
November 29, 2025 at 10:08 PM
That’s true, although my understanding is it was mostly template-based sentences & paragraphs that were written by humans, and then had stats/numbers/qualifiers inserted for things like finance & sport, which were then automatically combined to make an article.
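For a sense of what that looks like, here’s a minimal Python sketch of the idea (the templates, team names, and qualifier phrases are all invented for illustration, not taken from any real newsroom system):

```python
# Toy example of template-based article generation: human-written templates,
# with stats and canned qualifier phrases slotted in automatically.
# Everything here (templates, names, phrasing rules) is made up for illustration.

TEMPLATES = [
    "{home} {result} {away} {home_score}-{away_score}",
    "{home} {result} {away} {home_score}-{away_score} at {venue} on {day}, {qualifier}.",
]

def pick_qualifier(margin: int) -> str:
    # Human-authored phrases, selected based on the numbers.
    if margin >= 3:
        return "cruising to a comfortable win"
    if margin == 0:
        return "in a tight contest that finished level"
    return "edging a close-fought match"

def build_report(data: dict) -> str:
    margin = data["home_score"] - data["away_score"]
    filled = dict(
        data,
        result="beat" if margin > 0 else ("drew with" if margin == 0 else "lost to"),
        qualifier=pick_qualifier(abs(margin)),
    )
    return "\n".join(t.format(**filled) for t in TEMPLATES)

print(build_report({
    "home": "Rovers", "away": "United",
    "home_score": 2, "away_score": 0,
    "venue": "City Park", "day": "Saturday",
}))
```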
November 28, 2025 at 1:01 PM
The pre-ChatGPT numbers on the graph also look odd. It’s slightly clearer in the original report, but either the scaling is off or the totals add up to more than 100%.

And that’s before the fact that it’s claiming 5-10% of content was AI-written *before* ChatGPT even launched 🤔
November 28, 2025 at 9:49 AM
Here’s a copy of the original report. They acknowledge the controversy around AI detection reliability, but use it anyway.

An article is classed as AI if more than 50% of it was classed as AI-written.
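As a rough sketch of how that thresholding works (the per-paragraph detector is the contested part, so it’s just a placeholder here):

```python
# Sketch of the labelling rule as I read the report: an article counts as "AI"
# if more than 50% of its text is flagged as AI-written by a detector.
# The detector below is a dummy; reliable detection is the disputed bit.

def label_article(paragraphs, detector, threshold=0.5):
    total = sum(len(p) for p in paragraphs)
    flagged = sum(len(p) for p in paragraphs if detector(p))
    return "AI" if total and flagged / total > threshold else "Human"

# Dummy detector that flags nothing, so the example article is labelled "Human".
print(label_article(["First paragraph.", "Second paragraph."], detector=lambda p: False))
```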

graphite.io/five-percent...
More Articles Are Now Created by AI Than Humans
AI-generated content is as good or better than content written by humans. It is often hard to distinguish whether content is created by AI vs. a human. We seek to evaluate the prevalence of article co...
graphite.io
November 28, 2025 at 9:49 AM
Because there’s no reliable way to determine AI vs. human-sourced text, and the graph is made up.

I don’t have access to the article, but I’d be curious to know how they claim to be able to detect AI text reliably at scale when no one else can.
November 28, 2025 at 9:29 AM