drokeby.bsky.social
@drokeby.bsky.social
Reposted
We are re-deriving Ibn Khaldun from first principles. That's the stage we're at and it's not a great indication of where we're headed...
In a loose folksy way I’ve started to believe that shit falls apart approximately every 80 years because the generation that knew True Horror dies off and the young folks left don’t comprehend the stakes.
I blame the passing of the WW2 generation. Some people didn't have a grandparent who would kick their ass for talking up Nazis and it shows.
December 20, 2025 at 5:22 AM
Reposted
… and he SERVED ✨
December 6, 2025 at 1:56 AM
Reposted
the wild part — this probably doesn’t have anything to do with AI or LLM architecture

e.g. the gospels of Matthew, Mark, Luke & John are all rephrasings of the same stories

Rabbinic teaching involved repetitive rephrasing

oral traditions in general are less about precise recitation
Physics of Language Models: Part 3.1

If you show a fact to an LLM in pre-training once, it’ll memorize the form but not the fact itself.

but if you (synthetically) rephrase the text several times, it’ll memorize the fact

arxiv.org/abs/2309.14316
Physics of Language Models: Part 3.1, Knowledge Storage and Extraction
Large language models (LLMs) can store a vast amount of world knowledge, often extractable via question-answering (e.g., "What is Abraham Lincoln's birthday?"). However, do they answer such questions ...
arxiv.org
November 16, 2025 at 4:59 PM
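A minimal sketch of the data-augmentation idea summarized in the post above: one underlying fact expanded into several synthetic rephrasings before it goes into a pre-training corpus. The fact, the templates, and the rephrase helper here are illustrative assumptions, not the paper's actual pipeline.

```python
# Turn one fact into several synthetic rephrasings for pre-training data.
# The fact, templates, and field names are invented for illustration only.

fact = {"person": "Ada Lovelace", "birth_year": "1815", "birth_city": "London"}

templates = [
    "{person} was born in {birth_city} in {birth_year}.",
    "In {birth_year}, {person} was born in {birth_city}.",
    "{person}'s birthplace is {birth_city}; the year of birth was {birth_year}.",
    "Born in {birth_city}, {person} came into the world in {birth_year}.",
]

def rephrase(fact: dict, templates: list[str]) -> list[str]:
    """Return one training sentence per template for the same underlying fact."""
    return [t.format(**fact) for t in templates]

if __name__ == "__main__":
    for sentence in rephrase(fact, templates):
        print(sentence)
```

The point is the diversity of surface forms: per the post, a single exposure tends to teach the wording, while varied repeated exposures teach the extractable fact.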
Reposted
Musk is a parasite on the future; he preys upon the imagination of others, taking any vision of a better world and depreciating it by hawking options on a transparently fake facsimile.

He is capitalism's full-throated assault on the utopian imaginary.
November 11, 2025 at 8:50 PM
Reposted
What people seem to feel and not express:

1—AI continues to accelerate at doing well on the metrics.
2—The metrics are incredibly lossy with respect to many important parts of human creative endeavors.
3—We have no collective plan for how the replacement of human work will retain what the metrics lose.
November 8, 2025 at 10:54 PM
Reposted
To me, the field of A.I. is a branch of Philosophy, not Science. I would even call it “Applied Philosophy”.
November 8, 2025 at 9:56 AM
Reposted
Intelligent, sure. Sentient/sapient, no, there's lots of those.

I had the thought a while back that part of what's so weird about LLMs is that they have some of the qualities that make humans different from other animals, but none of the ones we have in common with them
November 2, 2025 at 6:27 AM
Reposted
New incredible detail here: ICE says a match in its facial recognition app Mobile Fortify is a "definitive" determination of a person's status, and that this overrides birth certificates. This is an app ICE is using in the field to scan people

www.404media.co/ice-and-cbp-...
October 29, 2025 at 3:03 PM
Reposted
As I turn half a century old, I'm v happy to share a new TouchDesigner tutorial recreating one of my old works, Webcam Piano: motion detection with frame differencing, MIDI, POPs, Python, GLSL and more.
Also shout out to David Rokeby!👋
www.youtube.com/watch?v=ZVHf...
August 28, 2025 at 7:09 PM
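For readers without TouchDesigner, a rough standalone sketch of the frame-differencing idea the tutorial builds on, written with OpenCV rather than the tutorial's node network; the webcam index, blur kernel, and threshold value are arbitrary choices of mine, not values from the video.

```python
# Frame-differencing motion detection: subtract consecutive frames and
# threshold the per-pixel difference to get a motion mask.

import cv2

cap = cv2.VideoCapture(0)          # default webcam
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)        # suppress sensor noise
    if prev_gray is not None:
        diff = cv2.absdiff(prev_gray, gray)            # change between frames
        _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        cv2.imshow("motion", motion)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The tutorial does this inside TouchDesigner's node graph; the sketch above is only meant to make the underlying operation concrete.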
Reposted
Bergson: "We instinctively tend to solidify our impressions in order to express them in language. ... we are now standing before our own shadow: we believe that we have analyzed our feeling, while we have really replaced it by a juxtaposition of lifeless states which can be translated into words."
September 4, 2025 at 2:40 AM
In fact, over the past many decades we have slid most jobs towards bullshit jobs as we chase efficiency, corporate discipline, workforce interchangeability…
some jobs really are bullshit and you can automate them
Have the consulting firms finally figured out that a LLM can in fact replace their core product offering: 22-year-olds with Ivy League degrees who don’t know anything other than how to make PowerPoints?
August 3, 2025 at 6:27 PM
Reposted
To express what cannot be statistically extrapolated from previous expressions, to express what is *improbable.*

This calls for Improbablism in art.
July 25, 2025 at 5:09 PM
Reposted
Continuing from the idea that AI will impact art as much as photography did, challenging the artist not only to express, but to express what has never been expressed before.

To express that which is not already in the datasets used in machine generation.
July 25, 2025 at 5:09 PM
Reposted
they're extremely weird and interesting and the entire industry is working as hard as it can to obscure that fact
yup. we found a way to package the entire written output of humanity into a mystically mindless babbler operating in a trillion dimensions and we're like "oh, rad, can it write a cover letter?"
July 17, 2025 at 7:56 PM
Reposted
There's a perspectival thing going on where we imagine that any catastrophe that produced *us* can't, after all, have been a real catastrophe. "K-T event? Felix culpa."

The actual lesson to draw from all this is more like "we suck at predicting the consequences of communications technologies."
July 6, 2025 at 10:18 PM
Reposted
Thinking about this critical moment when there will be shifts in productivity due to AI, and how we need to foreground that this extra productivity should enhance our collective lives, not the wallets of the already rich.
June 11, 2025 at 6:05 PM
The most significant, life-changing events in my life were very low probability events. Being able to distinguish these from impossibilities is beyond crucial.
Research shows language models struggle to distinguish impossible from improbable events, often assigning higher probabilities to nonsensical sentences. This highlights AI limitations in context comprehension, raising concerns over reliability in key applications. https://arxiv.org/abs/2506.06808
Not quite Sherlock Holmes: Language model predictions do not reliably differentiate impossible from improbable events
ArXiv link for Not quite Sherlock Holmes: Language model predictions do not reliably differentiate impossible from improbable events
arxiv.org
June 10, 2025 at 4:58 PM
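A minimal sketch of the kind of probe the paper above describes: compare a model's log-probability for an improbable-but-possible sentence against an impossible one. Assumptions: GPT-2 via Hugging Face transformers (any causal LM would do), and the example sentences are invented here, not taken from the paper.

```python
# Score whole sentences by summed token log-probability under a causal LM.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_log_prob(text: str) -> float:
    """Sum of token log-probabilities under the model (higher = more probable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=ids makes the model return mean cross-entropy over predicted tokens
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

improbable = "The lawyer ate a cactus for breakfast."      # odd, but possible
impossible = "The cactus ate a lawyer for jurisprudence."  # nonsensical

print(f"improbable: {sentence_log_prob(improbable):.1f}")
print(f"impossible: {sentence_log_prob(impossible):.1f}")
```

If the model tracked plausibility reliably, the improbable sentence should score consistently higher than the impossible one; per the summary above, that ordering often fails to hold.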
Reposted
Research shows language models struggle to distinguish impossible from improbable events, often assigning higher probabilities to nonsensical sentences. This highlights AI limitations in context comprehension, raising concerns over reliability in key applications. https://arxiv.org/abs/2506.06808
Not quite Sherlock Holmes: Language model predictions do not reliably differentiate impossible from improbable events
ArXiv link for Not quite Sherlock Holmes: Language model predictions do not reliably differentiate impossible from improbable events
arxiv.org
June 10, 2025 at 4:50 PM
Reposted
people who want to curve the arc of the future should probably consider actually understanding the things being made and the people who are doing it
May 31, 2025 at 12:15 AM
Reposted
not to be histrionic but I kind of feel like the confluence of Trumpism, algorithmic social media, and LLMs has placed language and the fundamental proposition that words have meaning under unprecedented strain and it’s freaking me out more than a bit
May 30, 2025 at 1:26 PM
I strongly agree. We are desperately in need of a strong middle path that is capable of holding cogent critique of AI, and curiosity and surprise at its capabilities, in mind at the same time.
To elaborate a bit: the quality of AI discourse both on X and here is bad because of broad enforcement of ideological purity. On X it’s “scale is all you need, H100s go brrrr,” on Bluesky it’s “technology bad, engineers should stay in their lane.”
April 27, 2025 at 7:10 PM
Reposted
IMO in every field of the humanities since about 1990 doing primarily 'deflationary' or 'demystifying' work about the discipline or its object rather than spending your life doing something else has been extremely cowardly and moderately evil
April 12, 2025 at 12:55 AM