Justin Goodwin
@jgbos.bsky.social
I mean. It was 10 years ago. Things were different 😇
January 14, 2026 at 3:18 AM
The other possible X-Factor is that Drake showed he can learn from his mistakes this year. It’s possible the farther he goes, the better he’ll get at playoff football
January 12, 2026 at 4:33 AM
Awesome. Done. My 13yo daughter loves foul language and will be excited to read this.
January 8, 2026 at 2:02 AM
Holy Hand Grenade?
January 1, 2026 at 6:04 AM
This was my dad’s favorite cake (I hated it as a kid). I don’t think I’ve ever heard anyone talk about it outside of his bdays. Now I want to go find some. Thanks for triggering some memories of him.
December 25, 2025 at 6:16 AM
If I say “Hi, how are ____” you are confident of the next word. But if I say “This is ___” there are a lot of possibilities. LLMs have to choose the next word, then the next. Context helps sharpen the distribution of next words.
December 23, 2025 at 2:05 PM
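A toy sketch of that point, with made-up probabilities (not from any real model): a constraining prefix concentrates the distribution on one word, while a vague prefix spreads it across many candidates.

```python
# Hand-written toy next-word distributions (made-up numbers for illustration).
next_word_probs = {
    "Hi, how are": {"you": 0.95, "things": 0.04, "we": 0.01},
    "This is": {"a": 0.18, "the": 0.15, "my": 0.12, "not": 0.10,
                "great": 0.09, "so": 0.09, "what": 0.09, "just": 0.09,
                "going": 0.09},
}

for prefix, dist in next_word_probs.items():
    top_word, top_p = max(dist.items(), key=lambda kv: kv[1])
    print(f"{prefix!r}: top guess {top_word!r} at p={top_p:.2f} "
          f"({len(dist)} plausible candidates)")
```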
Researchers dive into transformers or training details, but that is the model and optimization, not the mechanism. LLMs are just simple next-word predictors. The harder part is that they actually predict a distribution of possible next words, and then have to choose one.
December 23, 2025 at 2:01 PM
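Roughly what “predict a distribution, then choose” looks like in code. A minimal sketch with made-up scores over a three-word vocabulary; a real model does the same thing over tens of thousands of tokens.

```python
import math
import random

# Made-up raw scores ("logits") for the next word -- placeholder numbers.
logits = {"you": 4.0, "things": 1.5, "we": 0.5}

# Softmax turns the scores into a probability distribution...
total = sum(math.exp(v) for v in logits.values())
probs = {w: math.exp(v) / total for w, v in logits.items()}

# ...but the model still has to commit to ONE word, so it samples.
choice = random.choices(list(probs), weights=probs.values())[0]
print(probs, "->", choice)
```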
They are admittedly a lot more complex than that. They are a fixed parametric math model with some randomness. It would be more like giving a coin sorter a hundred different coins and telling it to predict the next coin. Then giving it a Canadian quarter and forcing it to still predict.
December 23, 2025 at 1:33 PM
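The coin-sorter analogy in code: a classifier with a fixed set of outputs has no “I don’t know” option, so even for an input it never trained on it must spread probability over the classes it knows. The scores below are hypothetical.

```python
import math

US_COINS = ["penny", "nickel", "dime", "quarter"]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for a Canadian quarter the sorter never trained on.
# The fixed model has no "none of the above" output, so it must answer.
scores = [0.1, 0.2, 0.3, 2.0]
for coin, p in zip(US_COINS, softmax(scores)):
    print(f"{coin}: {p:.2f}")  # "quarter" wins -- confidently wrong
```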
If you care about the model and optimization details, this is a barebones model: github.com/karpathy/nanoGPT
GitHub - karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs.
December 23, 2025 at 1:09 PM
The simplest explanation is that they’re a nonlinear autoregressive model: en.wikipedia.org/wiki/Autoregressive_model
December 23, 2025 at 1:02 PM
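“Autoregressive” just means each output is fed back in as part of the input for the next prediction. A bare-bones sketch where `predict_next_word` is a placeholder standing in for the entire neural network:

```python
import random

def predict_next_word(words):
    """Placeholder for the real model, which maps the word list to a
    distribution over next words and draws one from it."""
    return random.choice(["you", "doing", "today?"])

def generate(prompt_words, n_words):
    words = list(prompt_words)
    for _ in range(n_words):
        # Each chosen word is appended and becomes input for the next step:
        # that feedback loop is what "autoregressive" means.
        words.append(predict_next_word(words))
    return " ".join(words)

print(generate(["Hi,", "how", "are"], 3))
```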
Hard to find a good article that doesn’t get bogged down in technical details. LLMs are just a mathematical model that predicts the next word based on a list of input words. Errors come from random guessing when they are uncertain. The other details are just about the model and optimization. Not intelligent.
December 23, 2025 at 1:00 PM
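The “random guessing when uncertain” claim, numerically: sampling from a peaked distribution is almost always right, while sampling from a flat one is wrong much of the time. The distributions and the “correct” answer are made up for the demo.

```python
import random

def sample(dist):
    return random.choices(list(dist), weights=dist.values())[0]

confident = {"Paris": 0.95, "Lyon": 0.03, "Nice": 0.02}                 # peaked
uncertain = {"Paris": 0.30, "Lyon": 0.25, "Nice": 0.25, "Lille": 0.20}  # flat

for name, dist in [("confident", confident), ("uncertain", uncertain)]:
    wrong = sum(sample(dist) != "Paris" for _ in range(10_000))
    print(f"{name}: wrong {wrong / 100:.1f}% of the time")
```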
Wolfie!! Both my kids have that wolf. One of their favs.
December 20, 2025 at 8:20 PM
I miss corn Taylor Swift :)
November 23, 2025 at 1:28 AM
It’s great for coaching 4th grade boys’ soccer. “6-7 yards apart please!”
November 4, 2025 at 11:01 PM
Not to mention they all forgot how to drive!
October 30, 2025 at 3:26 AM
Wow. 2001 was the last time they were in the ALCS? I remember loving that team with Ichiro and Edgar Martinez.
October 11, 2025 at 5:13 AM
Just finished a whole package this evening.
October 1, 2025 at 4:21 AM
Thanks, Katie. Now Betelgeuse is back in my mind. Endlessly looking at brightness charts 😂😂
September 2, 2025 at 3:22 AM
I mean… you aren’t far from the truth. Its basic model just predicts the next word in a sentence (not smart), and the question/answer datasets are mostly curated by AI bros.
August 9, 2025 at 1:29 PM