Focus: AI/ML, React, Node.js, SaaS Architecture
🔗 linkedin.com/in/brianshann
🔗 github.com/codemeasandwich
AI deception isn't a bug to patch. It's emergent behavior we need to understand.
The question isn't "can AI deceive?" It's "when and why?"
Meetings about meetings.
The fastest way to 10x your team's output?
Give engineers 4-hour blocks of uninterrupted focus time.
Protect your team's deep work like it's production infrastructure. Because it is.
90% of "agents" are just fancy prompt chains.
Real agentic systems need:
• Tool use with error handling
• Memory across sessions
• Graceful degradation
The hard part isn't the AI. It's the engineering around it.
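Roughly what that skeleton looks like, as a minimal TypeScript sketch. Every name here (`llmDecide`, the `tools` map, `memoryStore`) is a hypothetical stand-in, not any specific framework's API:

```typescript
type Decision =
  | { type: "answer"; text: string }
  | { type: "tool"; tool: string; input: string };

type ToolResult = { ok: true; output: string } | { ok: false; error: string };

interface Tool {
  run(input: string): Promise<string>;
}

const tools: Record<string, Tool> = {
  search: { run: async (q) => `results for: ${q}` },
};

// Memory across sessions: swap the Map for Redis/Postgres in practice.
const memoryStore = new Map<string, string[]>();

// Stand-in for the real model call (OpenAI, Anthropic, etc.).
async function llmDecide(history: string[]): Promise<Decision> {
  return { type: "answer", text: `(${history.length} turns in context)` };
}

// Tool use with error handling: failures become observations the model
// can react to, instead of crashing the loop.
async function callTool(name: string, input: string): Promise<ToolResult> {
  const tool = tools[name];
  if (!tool) return { ok: false, error: `unknown tool: ${name}` };
  try {
    return { ok: true, output: await tool.run(input) };
  } catch (err) {
    return { ok: false, error: err instanceof Error ? err.message : String(err) };
  }
}

export async function runAgent(sessionId: string, userMessage: string): Promise<string> {
  const history = memoryStore.get(sessionId) ?? [];
  history.push(`user: ${userMessage}`);

  // Hard step cap: part of graceful degradation, never loop forever.
  for (let step = 0; step < 5; step++) {
    const decision = await llmDecide(history);
    if (decision.type === "answer") {
      history.push(`assistant: ${decision.text}`);
      memoryStore.set(sessionId, history);
      return decision.text;
    }
    const result = await callTool(decision.tool, decision.input);
    history.push(result.ok ? `tool: ${result.output}` : `tool error: ${result.error}`);
  }

  memoryStore.set(sessionId, history);
  return "I couldn't finish that reliably. Here's what I have so far.";
}
```

Notice how little of this is "AI": it's retry limits, error contracts, and storage decisions.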
Stop using `any` as an escape hatch. Use `unknown` instead.
`unknown` forces you to narrow the type before using it. `any` just turns off the compiler.
One catches bugs at compile time. One catches them in production. 🎯
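A minimal illustration of the difference (the `User` shape is just an example):

```typescript
type User = { name: string };

// A type guard: the only way to get from `unknown` to `User`
// is to actually check the shape.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as { name?: unknown }).name === "string"
  );
}

function greet(json: string): string {
  const data: unknown = JSON.parse(json); // unknown, not any

  // Accessing data.name here would be a compile-time error:
  // "'data' is of type 'unknown'".
  if (!isUser(data)) throw new Error("unexpected payload shape");

  return `Hello, ${data.name.toUpperCase()}`; // safe: narrowed to User
}
```

Had `data` been `any`, a payload like `{"user": null}` would compile fine and explode at runtime.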
No more parsing unpredictable string responses. Just clean, typed data. 🎯
👇 Creating and receiving Structured Output from AI responses for a chatbot with OpenAI + LangChain in a React application.
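A minimal sketch using LangChain's `withStructuredOutput` with a Zod schema. The model name and schema fields are illustrative; run this server-side (e.g. in an API route your React app calls) so the API key never reaches the browser:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// Illustrative schema for a support chatbot reply.
const ReplySchema = z.object({
  answer: z.string().describe("The reply to show the user"),
  sentiment: z.enum(["positive", "neutral", "negative"]),
  followUps: z.array(z.string()).describe("Suggested follow-up questions"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

// Bind the schema to the model: responses come back parsed and typed,
// not as raw strings to regex apart.
const structuredModel = model.withStructuredOutput(ReplySchema);

export async function askBot(question: string) {
  const reply = await structuredModel.invoke(question);
  // reply: { answer: string; sentiment: "positive" | "neutral" | "negative"; followUps: string[] }
  return reply;
}
```

The React side just calls the route and renders `reply.answer` and `reply.followUps`; no string parsing anywhere.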
The best technical decisions aren't made in architecture meetings.
They're made when you give your team psychological safety to say "I don't understand this" or "this feels wrong."
Technical excellence flows from trust.
I've seen codebases with 15 layers of context providers for what could be 3 useState calls.
The best code isn't clever. It's boring and obvious.
Building tools that eliminate boilerplate ≠ adding more abstractions.
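For contrast, a sketch of the boring version (hypothetical component, but the point stands): state lives where it's used, readable top to bottom.

```tsx
import { useState } from "react";

// No ThemeProvider > UserProvider > FilterProvider tower to dig through.
// Three useState calls, visible at the point of use.
export function SearchPage() {
  const [query, setQuery] = useState("");
  const [sort, setSort] = useState<"asc" | "desc">("asc");
  const [page, setPage] = useState(1);

  return (
    <div>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <button onClick={() => setSort(sort === "asc" ? "desc" : "asc")}>
        Sort: {sort}
      </button>
      <button onClick={() => setPage(page + 1)}>Next page ({page})</button>
    </div>
  );
}
```

Reach for context when state genuinely spans distant branches of the tree, not by default.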
The best engineers are the ones who know when NOT to use AI.
The real skill? Recognizing when a simple rule beats a 10B parameter model.
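A hypothetical example of what that looks like in practice, routing refund tickets with a regex instead of a model call:

```typescript
// Hypothetical support-ticket router: a keyword rule, not a 10B parameter model.
// Instant, offline, free, and with zero failure modes to monitor.
const REFUND_RULE = /\b(refund|money back|charge ?back)\b/i;

function needsRefundQueue(ticket: string): boolean {
  return REFUND_RULE.test(ticket);
}

needsRefundQueue("I want my money back for last month"); // true, no LLM call
needsRefundQueue("How do I change my avatar?"); // false
```

Reserve the model for the ambiguous remainder, if you need it at all.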
What's your "I over-engineered this with AI" story?