Tim Gadanidis
@tim.gadanidis.ca
tim.gadanidis.ca
linguist working on language and work in the restaurant service context. luddite. bunny dad (兔子爸爸). he/him/il/lui
Seems like most of their other lyrics don't touch explicitly on political themes at all though
January 8, 2026 at 5:02 PM
I was typing this up before you deleted, but in case you're still concerned: based on the first track of their 2023 release "second souffle", "l'union fait la force", their politics don't seem too bad (e.g. "unity makes strength / hate in our society / violence toward communities / racism is not tolerated")
January 8, 2026 at 5:02 PM
Sure, but particular ways of using any linguistic form, new or old, index particular social meanings and I don't see what's out of bounds about negotiating/contesting those.
September 25, 2025 at 2:44 PM
I wouldn't say the success of the transformer architecture in general makes for good evidence about the utility of LLMs specifically, but I see where you're coming from now.
August 9, 2025 at 4:22 AM
Oh ok! Yeah, that's a transformer model (en.wikipedia.org/wiki/Transfo...). I would define an LLM as an application of the transformer architecture to text specifically. Your coworker was maybe trying to simplify or draw an analogy or something.
Transformer (deep learning architecture) - Wikipedia
en.wikipedia.org
August 9, 2025 at 4:21 AM
Just to be clear, I'm not trying to debate them, just curious about what might make an LLM useful for this task. It could be a multimodal model or something. It sounds like we are all on the same page regarding the ethics of large-scale LLM deployment, so I don't think we need to attack them over it.
August 8, 2025 at 11:13 PM
Large language models are trained on text. They are called "large" because they are trained on vast amounts of it, which in practice usually means internet data. In any case, it seems a stretch to call a model trained only on images a "language" model. Do you just mean it's a transformer model?
August 8, 2025 at 10:40 PM
Can you say more about what the LLM is doing here? I'm not understanding the added benefit of a model trained on massive amounts of text from the Internet for tumour diagnosis (as opposed to a model trained just on tumour data), but I may be misunderstanding what you mean.
August 8, 2025 at 8:28 PM
I've been mentally referring to this as the "servile little creep" voice (after bsky.app/profile/laur...). Troubling to imagine the inner life of someone who responds well to it
it's not like in the top 50 concerning things about AI but why do they all default to talking like a servile little creep
August 8, 2025 at 1:28 AM
Reposted by Tim Gadanidis
We don't know what a real AI company looks like yet. We haven't seen one yet. We won't until the day one flips on the ads, and then they all will. Then we'll know what we're in for.
July 31, 2025 at 4:54 AM