Phillie Phonetic
@tokenize.bsky.social
Occasional non-idiot
But it does predate the wide use of generative image models (at least how we have them now), and if it’s a physical letter, who would have submitted it? If it’s a fake, there had better be a paper trail for how it got there. I’d hope they don’t just toss any old thing into the evidence pile.
December 23, 2025 at 8:10 PM
How was this collection assembled? Regardless of whether it ends up being a real letter from Epstein, if the DoJ has had it since 2019, it doesn’t bode well for the thoroughness of any initial investigation, as you say.
December 23, 2025 at 8:08 PM
Transcription models, like Whisper (which I suspect Otter is using), are great. And I have to give credit where it’s due: Whisper is remarkably good at picking up jargon, unique words, and spoken acronyms.
December 21, 2025 at 10:55 PM
The issue I highlighted could have career-ending consequences. It’s not a bug. It happened with the latest models. This is not a “works on my machine” situation. My concern is actually how many people don’t reject the poor results. That has been the point.
December 21, 2025 at 12:25 AM
Organizations are. My last company absolutely held “you must use AI” company meetings. If you’re not aware this is happening, ask around.

You have seemingly not read closely. I don’t reject LLMs, though I may yet. And their issues are valid, if not the vitriol. But you chose to respond and to evangelize.
December 21, 2025 at 12:21 AM
It’s a real problem that happened this week. I have seen the tools around the models improve over time. And they are still not ready. This is why people want to reject them. The promise doesn’t outweigh the issues for many of us at this time. Especially not when it’s being mandated
December 20, 2025 at 11:58 PM
I agree that this is an area of real opportunity. But I’m also a fan of making accessibility features clearly distinct, because their users might be willing to put up with less polish: the feature solves a real problem, the benefit outweighs the jank, and the tradeoffs are clearer.
December 20, 2025 at 11:44 PM
And you know why? Because if the tool changes my work, I bear the responsibility and the risk. Does Word do additional work beyond piping out to the foundational model and putting up a “sometimes mistakes are made” banner?
December 20, 2025 at 11:41 PM
In research, yes, that was the argument. And your reply was, “why not educate others about them?” Which is really missing the point of the problem. I think LLMs have real promise, but when they change my citations when transferring them into a bib file, I am not like, “oops, you scoundrel.” I’m livid.
December 20, 2025 at 11:37 PM
You did get quoted and have clearly taken umbrage at the idea of carrying water for tech bros. The history here is clear.

Anyone in research can see LLMs creating an existential threat to the reputational system they rely on, & you’re feigning confusion as to why they want to discourage their use.
December 20, 2025 at 11:31 PM
Couching that in accessibility distracts from the tradeoffs
December 20, 2025 at 10:12 PM
And finally, I do think NLP as an interface is really promising, especially for accessibility. But you aren’t rolling this out as an accessibility-focused feature, right? This is for every user. It’s not pitched as one. It’s introduced as being able to do things it can’t do reliably.
December 20, 2025 at 10:12 PM
If you’re responsible for the integration of LLMs into Word, you’re the one responsible for educating users, not for externalizing that burden. The problems are real, as you say, but it’s not our job to address the harms. The citation slop will be an issue if only for how much is being generated now.
December 20, 2025 at 10:06 PM
And to be clear, snark aside, the person to whom you were replying did open up a path to talking through the upsides. And my point was that your response to that was more productive and it would have been better to start there instead of asking why the poster wasn’t interested in doing the education
December 20, 2025 at 9:56 PM
You then got a flippant response and called it “bias.” None of that exchange addressed the problem, but I do expect more care and consideration from someone building it into products. Why would we want to inherit the problem of teaching others about the tools & not make that a problem of the tool maker?
December 20, 2025 at 9:40 PM
I would argue that “bias” is a conversational dead end. As are the hateful comments you’ve gotten. But your initial reply was about the article and missed the mark to my mind. The article was about LLMs creating problems in how we build reputation & credibility in research. Your reply sidesteps that.
December 20, 2025 at 9:39 PM
What kind of analysis?
December 20, 2025 at 8:55 PM
I think if you had begun this interaction with less of a broadside about bias — a claim that didn’t really address anything having to do with the real issues from the Rolling Stone piece — and had instead started from this more nuanced place, you would see more (but not 100%) positive responses.
December 20, 2025 at 8:45 PM
It’s not a conspiracy. They just needed to use up all the printer toner. Case closed.
December 19, 2025 at 10:42 PM
lol they all look like Remmick from Sinners
December 19, 2025 at 10:37 PM
I miss it being made. It was fantastic
December 19, 2025 at 7:47 PM
Especially when the architects of the data center construction push are talking about Dyson spheres. Scaling battery production is easier than nuclear and science fiction by a wide margin
December 19, 2025 at 6:38 PM
Yea. And they look you in the eye and dare the other person to deny it
December 19, 2025 at 2:27 AM