Michael Warkentin
@michaelwarkentin.com
michaelwarkentin.com
Haha yeah we were looking for family holiday movies, I noped out of that list pretty quick.
December 23, 2025 at 9:53 PM
But temperature is a thing for a reason.. some level of non-determinism makes them MORE useful in a lot of cases. So perhaps we can trade some of that usefulness for determinism.. but is that still providing the business value?

The answer as always is probably.. it depends. 😶‍🌫️
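
A minimal sketch of that trade-off, assuming an OpenAI-style chat completions API (the model name and prompts are placeholders, not anything from this thread):

```python
# Sketch only: assumes the openai Python SDK and an OpenAI-style
# chat completions endpoint; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

# temperature=0: near-deterministic, good for extraction/classification.
# Higher temperature: more varied output, often better for brainstorming.
print(ask("Name a family holiday movie.", temperature=0.0))
print(ask("Name a family holiday movie.", temperature=1.0))
```

Even at temperature 0 the output is only near-deterministic in practice, so pinning temperature alone doesn't fully buy back reproducibility.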
December 5, 2025 at 1:05 AM
Aren’t there also cases like temperature support being removed from at least some GPT-5 models?
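
If so, callers can't assume the parameter is accepted everywhere. A hedged sketch of defensive handling, assuming an OpenAI-style SDK that rejects unsupported sampling parameters with a 400-style error (the retry behavior here is an assumption, not a documented API contract):

```python
# Sketch only: assumes an OpenAI-style SDK where some reasoning models
# reject sampling parameters. Error behavior is an assumption.
from openai import OpenAI, BadRequestError

client = OpenAI()

def ask(model: str, prompt: str, temperature: float | None = None) -> str:
    kwargs = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if temperature is not None:
        kwargs["temperature"] = temperature
    try:
        response = client.chat.completions.create(**kwargs)
    except BadRequestError:
        # Assumed behavior: the API returns a 400 when the model doesn't
        # support temperature; retry with the model's default sampling.
        kwargs.pop("temperature", None)
        response = client.chat.completions.create(**kwargs)
    return response.choices[0].message.content
```

The design choice is just to degrade gracefully: prefer the requested temperature, fall back to the model's default when it's refused.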
December 4, 2025 at 2:19 PM
To be fair Reimer was the only reason there was even a game 7 in that series. Probably should’ve been over in 5.
September 27, 2025 at 9:07 PM
Cool, wasn’t sure if you might be trying bazzite.gg - lotta people seem to like it for gaming
Bazzite - The next generation of Linux gaming
Bazzite makes gaming and everyday use smoother and simpler across desktop PCs, handhelds, tablets, and home theater PCs.
bazzite.gg
August 22, 2025 at 10:48 PM
What distro are you running?
August 22, 2025 at 10:21 PM
Wtf only 6 hockey teams? 😆
August 8, 2025 at 1:30 AM
remember when AI couldn’t count the number of Rs in “strawberry”? that was eight months ago

Wow.. bsky.app/profile/kjhe...
August 8, 2025 at 1:22 AM
Do you have the source on this?
March 3, 2025 at 9:46 PM
Join the open source pledge: opensourcepledge.com
Open Source Pledge
Our companies feast year after year at the Open Source table. It's time to settle up.
opensourcepledge.com
February 24, 2025 at 8:33 PM
It also follows immediately after the last big hype wave (crypto / web3), and it seems like a bunch of hucksters just jumped from one to the other.
January 31, 2025 at 4:19 AM
I’ve seen Matthews score goals in almost every way I thought possible. Pretty sure this is the first diving poke-check goal!
January 5, 2025 at 2:48 AM
If I’m remembering right, you and John had some pretty hard stances on preventing LLMs from training on your content, even if that cat seems to be out of the bag already. The code those LLMs were trained on was also mostly not published for that use (SO answers, open source repos on GitHub, etc.)
January 3, 2025 at 1:14 AM
Honest question: is there any cognitive dissonance between your view of LLMs used for writing and your use of LLMs for generating code?

You even mention that when copy/pasting code you would credit the person on Stack Overflow (or whatever), but the LLMs are basically trained on the same data…
January 3, 2025 at 1:11 AM