- Exploring software craftsmanship with AI @ https://pairprog.io
- Writing about it @ https://blog.pairprog.io
It's the same with fast & flashy 3D cartoons vs old school 2D animation
Old school is way easier on the kid's mind and gives them time to adapt without being overloaded
- Tablets are a no-go
- Switch games make him too twitchy
- Things are going much smoother with my old GBA, so I'm now investing in retro gaming
American or not
They would have asked the LLM the right questions & steered the RAG project to surface the relevant information for decision making
This is the usual paradox of trying to replace instead of enhance
2. Faster animation but pause between drops
3. When landing, put them in a kneeling position and add some impact on the landing
4. When all 4 have landed, strike a pose for a second before starting play
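The suggested intro sequence could be sketched as a simple phased timeline (a hypothetical sketch in TypeScript; all names and timings are made up, not from the actual game):

```typescript
// Phases for each character during the intro: fast drop, landing with
// kneel + impact, group pose once all four are down, then gameplay.
type Phase = "dropping" | "landing" | "posing" | "playing";

interface Character {
  id: number;
  phase: Phase;
}

// Drop each character quickly but pause between drops; when all have
// landed, hold a pose for a beat before handing over control.
async function introSequence(
  characters: Character[],
  dropMs: number,
  pauseMs: number,
  poseMs: number,
  wait: (ms: number) => Promise<void>,
): Promise<void> {
  for (const c of characters) {
    c.phase = "dropping";
    await wait(dropMs); // fast drop animation
    c.phase = "landing"; // kneeling position + impact effect on touchdown
    await wait(pauseMs); // pause before the next drop
  }
  for (const c of characters) c.phase = "posing"; // group pose
  await wait(poseMs); // strike the pose for about a second
  for (const c of characters) c.phase = "playing"; // start play
}
```

The `wait` callback is injected so the same sequence can be driven by a real frame clock or run instantly in tests.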
I'd also note that it's hard having a rational and nuanced discussion about this with most people
What strikes me the most is the amount of work done to reduce TS inference costs and prevent infinite type instantiation errors
Would love a write-up focused on all the tips and tricks to do so on such a complex library to type
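For context, one widely used trick in this space (a minimal sketch of a general TypeScript technique, not taken from the library in question): type aliases are expanded eagerly, so deep recursion through them can hit "Type instantiation is excessively deep and possibly infinite", while interface members are resolved lazily, which defers the recursion.

```typescript
type JsonPrimitive = string | number | boolean | null;

// Routing the recursion through interfaces keeps the compiler from
// eagerly instantiating the type to unbounded depth: interface members
// are only resolved when actually accessed.
interface JsonArray extends Array<Json> {}
interface JsonObject {
  [key: string]: Json;
}

type Json = JsonPrimitive | JsonArray | JsonObject;

// Arbitrarily nested values type-check without blowing the instantiation depth.
const ok: Json = { a: [1, "two", { b: null }] };
```

The same idea generalizes: hiding expensive generic recursion behind an interface (or splitting it into named intermediate types) is one of the standard ways to keep inference costs down.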
But you'd have to add your own notification pattern on top, making it incompatible with a base A2A client/UI
Which kind of breaks the advantages of using an industry standard
I totally get that asynchronous task handling is the expected behavior
But I don't get why we can't have the best of both worlds
If I built a good A2A endpoint, I'd love to be able to plug a frontend with streaming to it
If you want to talk with an agent, you can't stream the response without some serious workarounds
Would have loved some Assistant API endpoints compatibility
I've built AI based tools dedicated to understanding such codebases and extracting business rules
I would never have dared to take on those projects without LLMs
I let them use AI however they want but send them straight to a failing test without context
The catch being you can't solve the issue without going through the project's context, and LLMs love going straight to code
I actually value ML news being more downstream focused here. As I work on AI engineering and product, I still get the big news without the constant FOMO I experience on X
As a recruiter I don't need button pushers, I need engineers
LLMs now do a lot of the executing, and these companies don't know how to train engineers, which means they see interns as useless