Chris
@christopherlopes.bsky.social
21 followers 38 following 160 posts
Doggy 🐶 Hiking 🥾 Health 🥕👨💻🏋️ Life Learner 📝 Podcasts 🎙️ Software Developer 💼 Spirituality ⛪ Survivor 🏝️ Reading 📚: christopherlopes.com
Gall's Law: "A complex system that works is invariably found to have evolved from a simple system that worked"

principles-wiki.net/principles:g...
“You have 18 months” The real deadline isn’t when AI outsmarts us — it’s when we stop using our own minds.

www.theargumentmag.com/p/you-have-1...

My concern is that this will not be equally true for everyone.

short-thoughts.christopherlopes.com/blog/2025/re...
For many years I've desired to improve my vocabulary. I've implemented various techniques and routines. Alas, none has taken hold. My latest attempt has been to inform AI of my objective and to take this into consideration when generating responses. So far I'm encouraged by the resulting behavior.
Martin Fowler shares this thought from Rebecca Parsons:

"...hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful."

martinfowler.com/articles/202...
Why is it that we craft our work so AI can understand it, but we don't do the same for people?

It's as if we expect people to put forth the effort to understand, while AI has hard limitations, so we take extra steps for AI but allow ourselves to be lazy with each other.

www.heavybit.com/library/podc...
O11ycast | Ep. #85, AI/LLM in Software Teams: What’s Working and What’s Next with Dr. Cat Hicks | Heavybit
In episode 85 of o11ycast, Dr. Cat Hicks unpacks AI’s impact on software teams from a psychological and social-science perspective.
www.heavybit.com
"But what they cannot do is maintain clear mental models. LLMs get endlessly confused: they assume the code they wrote actually works... This is exactly the opposite of what I am looking for."

Insight into the differences in experience being reported

zed.dev/blog/why-llm...
Why LLMs Can't Really Build Software - Zed Blog
From the Zed Blog: Writing code is only one part of effective software engineering.
zed.dev
Growing up, it was inculcated in me that education is about not knowing or understanding something. But what I try to remember is that education is about learning.

One mindset produces fear and intimidation; the other, exploration and insight.
I feel it's a shame Python won the day, like JS. And then there are languages like Elixir, which are a joy (from my little experience) and don't receive the attention they should. Perhaps it will matter less in the future, with AI writing much of the code. But it still feels like a loss to me.
When I used the new study mode in ChatGPT and asked about typing gotchas in Python, its reply included:

"But for people used to strong, safe systems (like you), it can feel like trying to write a novel with crayons."

That's one for the quote wall!
Trying out Google's Opal opal.withgoogle.com/landing/

Even with AI, it's still missing the mark on low-code experiences. It seems inevitable that low code will be figured out, but that day is not today.
Welcome
opal.withgoogle.com
It took 45 minutes, but I was eventually able to reason the AI into concluding it had been using a faulty line of reasoning.

On one hand it was a waste of my time; on the other, it gave me an understanding of how it was reasoning, along with insights into how to prompt AIs.
Sometimes I feel like this when trying to correct an AI response. I just can't stop until I convince the AI it is mistaken.

xkcd.com/386/
Duty Calls
xkcd.com
Regarding AI agents: "They are basically a new programming control structure which can take English and point you to what to run next" Seth Juarez
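Juarez's "control structure" framing can be sketched in a few lines: a loop where the model's English output decides which function runs next. This is an illustrative assumption, not a real library; ask_model here is a hypothetical stand-in for an actual LLM call, and the tool names are made up.

```python
# Sketch of an agent as a control structure: the model's text output
# selects the next function to run, until it says it is done.

def list_files(args):
    # Hypothetical tool: pretend to list a repo's files.
    return "main.py, utils.py"

def read_file(args):
    # Hypothetical tool: pretend to read a file.
    return f"contents of {args}"

TOOLS = {"list_files": list_files, "read_file": read_file}

def ask_model(task, history):
    # Placeholder for a real LLM call. This canned policy only
    # demonstrates the shape of the loop: respond with "tool: args"
    # or "done: result".
    if not history:
        return "list_files:"
    if len(history) == 1:
        return "read_file: main.py"
    return "done: task complete"

def agent(task):
    history = []
    while True:
        action = ask_model(task, history)
        name, _, args = action.partition(":")
        if name == "done":
            return args.strip()
        result = TOOLS[name](args.strip())
        history.append((action, result))

print(agent("summarize this repo"))  # the loop runs two tools, then stops
```

The point of the framing: the branching logic ("what to run next") lives in the model's English output rather than in hand-written if/else code.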
Maybe their generous free tier is too much. Which makes me wonder why. Are people just trying it out? Have software tools been free for so long that we expect the same? Is it a challenge to get the org you work at to pay for it?
I'm not having much success with the Gemini CLI. Using an API key, I initially received "too many requests" errors. Now I'm receiving "The model is overloaded" errors. And when I use Gemini within the IDE, it takes a long time to reply.
I now question if I need to bookmark webpages any longer. I spent a few minutes trying to find an article I recently read. No luck. I vaguely described it to o3. It thought for 1:34 and replied with the precise article.
Knowing how to break down tasks and define them for the agent seems critical. This is the skill I've been learning.
Using coding AI agents is a skill. Agents in a loop can solve many problems. Sometimes I find myself working with the AI as part of the loop, when an agent could have completed the task on its own. Other times the agent needs human correction.
AI increases the set of people able to create software tools, but I fear most will not be maintained. Previously, putting something out there required a higher level of commitment, which weeded out those not committed to maintaining the tool. Now you can create without much commitment.
All of these instances of lawyers including AI-generated mistakes make me wonder about the thoroughness of their other work, in a field I thought was all about details, at least judging by the legal documents that have come my way.
Reports of AI usage like this remind me of my experience learning math in school. I had the appearance of excelling, but I didn't understand the concepts; I could only apply procedures in context. Once I stepped outside that artificial world, I was lost.

albertofortin.com/writing/codi...
After months of coding with LLMs, I'm going back to using my brain • albertofortin.com
I've been building MVPs and SaaS products for 15 years. Let's work together on your next project.
albertofortin.com
This change seemed subtle and simple, yet it had far-reaching implications, which makes it a good example of a type of AI issue worth worrying about. Issues like this don't seem to receive much attention, and that lack of attention is exactly what raises their severity.
"But there was also a lesson about the raw power of personality. Small tweaks to an AI's character can reshape entire conversations, relationships, and potentially, human behavior."

www.oneusefulthing.org/p/personalit...
Personality and Persuasion
Learning from Sycophants
www.oneusefulthing.org