Kirill Lutcenko
@lutkir.bsky.social
AI Engineer & Back-End Technical Lead. MSc AI.

10y building software, 2y in AI and tech lead roles. Engineering-first views on what works in AI (and what doesn’t).
Same :(

Just created this account a few days ago, subscribed to hundreds of AI and tech leaders, and suddenly fell into the rabbit hole of all the horrors American society is going through right now.

Sending you all my hugs and thoughts from the other side of the ocean 🫂
I should start watching fewer of these ICE videos on Bluesky. I'm not American, I can't do anything about it, and it's starting to affect my mental health. So much unnecessary suffering.
January 12, 2026 at 5:32 PM
Just created the most expensive arithmetic calculator ever lol. A team of four agents with two different LLMs under the hood 🤣
January 12, 2026 at 2:23 PM
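The "four agents, two LLMs" calculator above could be sketched like this. The agent roles, names, and routing here are my invention (the original setup wasn't shared), and the two LLMs are replaced with deterministic stubs so the example runs offline:

```python
# Toy sketch of a four-agent arithmetic calculator. Roles and model
# assignments are hypothetical; real LLM calls are stubbed out with
# plain Python functions.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    model: str  # which (hypothetical) LLM backs this agent

def parse(expr: str) -> tuple[float, str, float]:
    """'Parser' agent: split '3 + 4' into operands and an operator."""
    a, op, b = expr.split()
    return float(a), op, float(b)

def compute(a: float, op: str, b: float) -> float:
    """'Solver' agent: do the arithmetic."""
    return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]

def verify(a: float, op: str, b: float, result: float) -> bool:
    """'Checker' agent: re-derive the answer independently."""
    return compute(a, op, b) == result

def respond(result: float) -> str:
    """'Responder' agent: format the final answer."""
    return f"The answer is {result:g}"

TEAM = [Agent("parser", "model-a"), Agent("solver", "model-b"),
        Agent("checker", "model-a"), Agent("responder", "model-b")]

def calculate(expr: str) -> str:
    a, op, b = parse(expr)
    result = compute(a, op, b)
    assert verify(a, op, b, result)
    return respond(result)

print(calculate("3 + 4"))  # The answer is 7
```

Four function calls (and, in the real thing, four LLM round-trips) to add two numbers: hence "most expensive calculator ever".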
True. Ten years in the industry, and I still have no idea how (or if) it’s possible to measure engineering productivity solely using hard numeric metrics, without human manager feedback.
People forget that AI doesn't change some basics:

1. Evaluate on a single metric and engineers will game it (we are smart enough)

2. Code is increasingly produced by AI

3. The single best eng contribution can be… not shipping code!

4. A 1-character change can have massive impact

Etc etc
January 12, 2026 at 2:05 PM
Playing with Google ADK this week. An amazing agentic AI framework, the best we’ve tried so far in terms of developer experience.
January 12, 2026 at 10:03 AM
"DL seems full of things that aren't grounded in theory, but are tricks/hacks just to get the damn thing to train".

So true!
Huh, that's cool. DL seems full of things that aren't grounded in theory, but are tricks/hacks just to get the damn thing to train. Is gradient clipping the same? It's been a while since I messed around with the internals of DL models.
January 12, 2026 at 8:43 AM
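Gradient clipping is indeed one of those tricks. The common global-norm variant: if the overall gradient norm exceeds a threshold, rescale every gradient so the norm equals the threshold. A minimal plain-Python sketch (frameworks ship this built in, e.g. `torch.nn.utils.clip_grad_norm_`):

```python
# Global-norm gradient clipping: rescale the gradient vector when its
# L2 norm exceeds max_norm, leave it untouched otherwise.
import math

def clip_by_global_norm(grads: list[float], max_norm: float) -> list[float]:
    total = math.sqrt(sum(g * g for g in grads))  # L2 norm of all gradients
    if total <= max_norm:
        return list(grads)  # within budget: no change
    scale = max_norm / total
    return [g * scale for g in grads]  # direction preserved, norm = max_norm

print(clip_by_global_norm([3.0, 4.0], 1.0))  # rescaled so the norm is 1.0
```

The direction of the update is preserved; only its magnitude is capped, which is what keeps exploding gradients from blowing up training.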
wow!
AI has gotten really good at theorem proving: axiommath.ai/territory/fr...

Axiom’s prover supposedly solved all 12 of 2025’s Putnam problems correctly. Source code: github.com/AxiomMath/Pu...
January 12, 2026 at 8:29 AM
Reposted by Kirill Lutcenko
New blog post: Don't fall into the anti-AI hype.

antirez.com/news/158
January 11, 2026 at 10:19 AM
Always nice to get this kind of feedback from the dev team :)
January 11, 2026 at 3:11 PM
Yes, I've been hearing this a lot over the last month or so: corporations mostly use AI as an excuse for "tough decisions" rather than to actually automate labor.
January 11, 2026 at 3:03 PM
It will be interesting to see how such copyright violations are handled legally when the content was retrieved from a jailbroken LLM through an agent built by someone unaffiliated with the LLM vendor. Should we start worrying about this when developing agents for B2C products?
"In some cases, jailbroken Claude 3.7 Sonnet outputs entire books near-verbatim ... Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs."

arxiv.org/abs/2601.02671
Extracting books from production language models
January 10, 2026 at 7:14 PM