glentiki.bsky.social
@glentiki.bsky.social
I have a feeling these might be coming across as reply-guy-like. This is a special research interest area for me.
February 2, 2026 at 5:24 PM
arxiv.org/abs/2510.23921

I think you might be miscalculating the impact of those spaces vs religious ones. This is just an early microcosm of societal evolution.
Breaking the Benchmark: Revealing LLM Bias via Minimal Contextual Augmentation
Large Language Models have been shown to demonstrate stereotypical biases in their representations and behavior due to the discriminative nature of the data that they have been trained on. Despite sig...
arxiv.org
February 2, 2026 at 5:23 PM
AI is ultimately just a pattern-matching algorithm. Some observable online spaces are dominated by that stuff. Whatever the start… they just pattern-matched.
February 2, 2026 at 5:13 PM
Time is hard. I think Claude cowork is mismatching local input timestamps vs server-generated UTC timestamps, leading to out-of-order message rendering. It’s been pretty frustrating to deal with. Also, if it’s LLM code, I think LLMs really struggle with temporal understanding.
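A rough sketch of the suspected bug and the usual fix. The message shape and the local UTC offset here are hypothetical (not any real schema): the point is that mixing zone-less local strings with UTC strings breaks ordering, and normalizing everything to one epoch clock before sorting fixes it.

```javascript
// Hypothetical messages: one stamped with a zone-less local string,
// one with a server-generated UTC timestamp.
const messages = [
  { text: "first",  sent: "2026-02-01T10:05:00" },  // local wall time, no zone
  { text: "second", sent: "2026-02-01T15:06:00Z" }, // server UTC
];

// Assumption for this sketch: the local clock is UTC-5, so
// local wall time + 5h = UTC.
const LOCAL_OFFSET_MS = 5 * 60 * 60 * 1000;

// Normalize any timestamp to epoch milliseconds on a single (UTC) clock.
function toEpochMs(ts) {
  if (ts.endsWith("Z")) return Date.parse(ts);          // already UTC
  return Date.parse(ts + "Z") + LOCAL_OFFSET_MS;        // treat as local wall time
}

// Sort on the normalized clock, never on the raw strings.
const ordered = [...messages].sort((a, b) => toEpochMs(a.sent) - toEpochMs(b.sent));
console.log(ordered.map(m => m.text)); // ["first", "second"]
```

A naive string or `Date` sort over the raw `sent` values would interleave these depending on the reader’s timezone; sorting on one normalized epoch makes rendering order deterministic.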
February 1, 2026 at 10:05 AM
I never viewed it that way. I kicked off initial implementation/discovery with it, and edited with it along the way.
November 25, 2025 at 5:25 PM
Are you using hooks? Hook-based functional components will do funky things if you don’t adhere to the lifecycle rules.
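A minimal sketch of why those rules exist. This toy `useState` is keyed by call order, which is roughly how React tracks hook state — it is not React’s actual implementation, just an illustration of what goes wrong when a hook is called conditionally.

```javascript
// Toy re-implementation of useState, keyed by call index.
// Sketch only — real React is more involved, but the ordering
// invariant is the same.
let slots = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;
  if (slots[i] === undefined) slots[i] = initial;
  const setState = (v) => { slots[i] = v; };
  return [slots[i], setState];
}

function render(component) {
  cursor = 0; // each render replays hooks from slot 0
  return component();
}

// A component that breaks the rules: a hook inside a condition.
function Broken({ showName }) {
  let name = "?";
  if (showName) { [name] = useState("Ada"); } // conditionally skipped!
  const [count] = useState(0);
  return { name, count };
}

render(() => Broken({ showName: true }));       // slots: ["Ada", 0]
const out = render(() => Broken({ showName: false }));
// The count hook now lands on slot 0 and reads "Ada" — state
// silently shifted because the call order changed between renders.
console.log(out.count); // "Ada", not 0
```

Calling every hook unconditionally, at the top level, in the same order on every render keeps each hook bound to the same slot — which is exactly what the lifecycle rules enforce.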
October 24, 2025 at 10:09 AM
Reposted
August 7, 2025 at 5:03 PM
How can you reduce the risk of misreading the expensive to read signal? Find lots of cheap correlated signals.

When one cheap signal diverges, you have your first sign that the expensive signal may have diverged. You should dig a bit deeper.
July 27, 2025 at 8:55 PM
But some signals are cheap to read, and some signals are expensive to read.

And with enough historical correlation you can still safely estimate the correlated signal from the cheaper signal.

This allows prioritising resources to measure the expensive signal only when a deviation is detected…
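One way to sketch that prioritisation, with purely illustrative numbers: fit a cheap→expensive mapping from historical pairs, then only trigger the expensive measurement when the cheap signal’s prediction drifts past a tolerance. All names and thresholds here are made up for the sketch.

```javascript
// Ordinary least-squares fit of y = slope * x + intercept.
function fitLinear(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    den += (xs[i] - mx) ** 2;
  }
  const slope = num / den;
  return { slope, intercept: my - slope * mx };
}

// Illustrative history: cheap metric vs the expensive metric it tracks.
const cheapHist = [1, 2, 3, 4, 5];
const expensiveHist = [2.1, 3.9, 6.0, 8.1, 9.9]; // roughly 2x the cheap signal
const model = fitLinear(cheapHist, expensiveHist);

// Only pay for the expensive read when the prediction from the cheap
// signal disagrees with the last known expensive value.
function shouldMeasureExpensive(cheapNow, lastExpensive, tolerance = 1.0) {
  const predicted = model.slope * cheapNow + model.intercept;
  return Math.abs(predicted - lastExpensive) > tolerance; // dig deeper?
}

console.log(shouldMeasureExpensive(6, 12.0)); // within tolerance → false
console.log(shouldMeasureExpensive(6, 20.0)); // diverged → true
```

The linear fit is the simplest stand-in for “enough historical correlation”; anything that estimates the expensive signal from the cheap one works the same way.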
July 27, 2025 at 8:51 PM
A larger working memory means more data to understand cause and effect.

I was always taught to understand history lest I be doomed to repeat it.

And before you state correlation != causation, you’re right...
July 27, 2025 at 8:48 PM
Dave the Diver, but instead of diving into the ocean, you’re trying to climb the launderette corporate ladder while running a microbrewery in the evening.
July 9, 2025 at 11:31 PM
I was using it in my last scrappy startup. We had a shoestring budget, so we didn’t want to spend on GitHub Copilot at the time, when this was freely available and more fully featured.

It didn’t seem to do anything unique; it was just an alternate Copilot IMO, using AWS LLMs.
June 30, 2025 at 10:02 PM
Brought to you by the ADHD struggle to charge things
May 5, 2025 at 2:58 PM
Sometimes it’s just a matter of perspective ✨
April 22, 2025 at 5:33 PM
She’s drinking the Kool-Aid 🥁
April 18, 2025 at 11:54 PM