Jeffrey F Steiner
jfs.bsky.social
Tech guy with a JD and a GTAW welder trying to read all the scifi-fantasy in existence. All of it. Used to do molecular biology. Living in Minnesota by way of Boston, Moscow, London and Munich
There are plenty of businesses that have been doing ML for ages, and it’s evolved into LLMs now. They’re boring, though. They aren’t claiming to be able to “solve physics” or whatever. They just make new and better products and services. They don’t get the press.
December 8, 2025 at 5:06 AM
Andreessen is about selling investment. As long as people invest in AI, it’s immaterial whether any of it works or is profitable.
December 8, 2025 at 5:04 AM
I meant what I wrote about Andreessen. Some folks sell the hype, not the product. At present, most people talking about AI are selling the surrounding hype. There are also companies who sell actual LLM/ML products with real value.
December 8, 2025 at 5:04 AM
I honestly assumed this was an AI bot all along. Then I checked Twitter, and nope. He authentically sounds like this. Just promoting his business and, I suppose, learning.
December 8, 2025 at 5:01 AM
All the way through? Only once. I’ve caught fragments now and then on cable.
December 8, 2025 at 4:59 AM
Perhaps I misunderstood. I took your response as indicating you’ve written LLM code.
December 8, 2025 at 4:53 AM
You may have been using some of the tools, but I’m not talking about using TensorFlow or whatever. I mean writing the TensorFlow code itself. You have contributions to one of these packages?
December 8, 2025 at 4:47 AM
Shipping code isn’t the same as writing it, and I absolutely acknowledge using an LLM is a genuine art, but it’s not the same as creating the LLM-generating code itself.
December 8, 2025 at 4:43 AM
There’s a lot of highly modular code that either works or it doesn’t. It’s amenable to LLM assistance.
December 8, 2025 at 2:57 AM
I suspect LLMs have made some of my coworkers morph from coders who write our software into coders who write automated QA testing. I don’t think they care either way.
December 8, 2025 at 2:57 AM
Weird that fluoroquinolones aren’t on the list.
December 8, 2025 at 2:50 AM
I was kayaking right outside whatever you call the outflow of the Ballard Locks, and it was just nonstop surfacing sea lions shaking their heads with a MUNCH and letting the other 98% of a fish float away. Not to tempt fate, but are there any seal-derived zoonotic diseases?
December 8, 2025 at 2:32 AM
I blame whoever first started calling LLM an AI. There’s got to be hundreds of great LLM ideas going unfunded because the $$$ selling LLM product isn’t nearly as good as the $$$ selling AI hype. As evidenced by junior Andreessen.
December 8, 2025 at 2:27 AM
They don’t even finish the salmon. They aim for the liver in a bite and then let it go. Sea lions are environmentally irresponsible.
December 8, 2025 at 2:12 AM
As someone who’s personally seen the problem - kayak torpedo bays.
December 8, 2025 at 2:10 AM
Ooo, I like that point. If an LLM could reason, it would have no need to rely on external code to provide accurate answers.
December 8, 2025 at 2:04 AM
Adding numbers of arbitrary length seems pretty elementary to me. Why can’t an LLM use such a basic procedure?
December 8, 2025 at 1:41 AM
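(The “basic procedure” in question is just grade-school column addition with carries. A minimal sketch in Python, working digit by digit on decimal strings so the inputs can be arbitrarily long; the function name is mine, not from the thread:)

```python
def add_decimal_strings(a: str, b: str) -> str:
    """Grade-school addition on decimal digit strings of any length.

    Walks both strings from the rightmost digit, summing pairs of
    digits plus the carry, exactly like column addition on paper.
    """
    result = []
    carry = 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        da = int(a[i]) if i >= 0 else 0  # 0 once a string runs out
        db = int(b[j]) if j >= 0 else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(str(digit))
        i -= 1
        j -= 1
    return "".join(reversed(result))

# Works no matter how long the operands get:
print(add_decimal_strings("987654321987654321", "123456789123456789"))
# → 1111111111111111110
```

The point of the post is that this loop is the same fixed procedure regardless of input length, which is exactly the property LLMs are observed to lack on long operands.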
If it COULD reason, you could define racism and tell it to knock it off.
December 8, 2025 at 1:40 AM
The AI will stop generating racist output as soon as there is no racist input, implicit or explicit.
December 8, 2025 at 1:39 AM
As an example, the AI is not racist. The AI has no concept of race, nor does it form racist intent. It merely generates “racist” output because of the patterns in the training data.
December 8, 2025 at 1:39 AM
Yes, and this is driven by associating the patterns of words making up a query with a training set similarly expanded into word associations. That’s why it sounds so human - it’s based on human content.
December 8, 2025 at 1:39 AM
An LLM cannot do that without understanding the meaning of words. An LLM has no understanding of a word, only the patterns in which words are used relative to other words.
December 8, 2025 at 1:34 AM
I’d certainly incorporate this:

bsky.app/profile/wqsa...
(I’d say that the ability to learn an algorithm then apply it to arbitrary arguments is a prerequisite for reasoning)
December 8, 2025 at 1:34 AM
Well then, I guess an LLM can reason, along with most software in existence.
December 8, 2025 at 1:30 AM
That’s literally half the software I work with on a day to day basis. Complicated inputs are processed based on a set of logical steps to (hopefully) yield a valid result. Hell, Microsoft Word’s grammar check meets this definition.
December 8, 2025 at 1:29 AM