Arthur
@arthurbmello.bsky.social
Data scientist / AI Engineer interested in causality.
BJJ black belt.
Views are not my own.
https://arthurmello.ai/
Funny thing is, a lot of GenAI agents are now automating decisions that were based on correlation. No causal logic.

Feels a bit like we're putting on makeup before taking a shower.
February 14, 2026 at 9:33 AM
Still, it’s a bit of a shame. Causality was finally picking up momentum, but it feels like it’s getting drowned out before it really starts.

And it’s not like one has to replace the other. We could easily have both, if the hype doesn’t suffocate the slower, less shiny stuff first.
February 14, 2026 at 9:33 AM
Once we did, radio’s effect dropped to normal.

Two takeaways:

- Hidden confounders will mess up your model.
- Business context isn’t in your dataset. You have to ask for it.

MMM still feels like 30% modeling, 70% detective work.
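
A minimal sketch of the omitted-variable problem this thread describes, in Python with statsmodels. The hypothetical "promo" flag stands in for the untracked commercial pushes; every name and number here is invented for illustration and is not the author's actual model.

```python
# Toy illustration of a hidden confounder in an MMM-style regression:
# radio spend rises around (untracked) promo events, so a radio-only model
# absorbs the promo effect into the radio coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # weeks of synthetic data

promo = rng.binomial(1, 0.3, n)                    # untracked commercial pushes
radio = 1.0 + 2.0 * promo + rng.normal(0, 0.5, n)  # radio spend clusters around promos
sales = 10 + 0.5 * radio + 4.0 * promo + rng.normal(0, 1.0, n)  # true radio effect = 0.5

# Model 1: radio only -- the promo effect leaks into the radio coefficient.
m1 = sm.OLS(sales, sm.add_constant(radio)).fit()

# Model 2: include the confounder -- radio's coefficient falls back toward 0.5.
X2 = sm.add_constant(np.column_stack([radio, promo]))
m2 = sm.OLS(sales, X2).fit()

print("radio effect, promo omitted :", round(m1.params[1], 2))  # biased upward
print("radio effect, promo included:", round(m2.params[1], 2))  # ~0.5
```

In this toy setup, the radio-only coefficient lands well above the true 0.5 and pulls back toward it once the promo flag is added, mirroring the "dropped to normal" effect described above.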
January 31, 2026 at 9:34 AM
After digging in with the marketing team, we found out radio ads were mostly used right before major commercial pushes.

Events that also included discounts and influencer campaigns, none of which were tracked alongside the usual media investments.

We hadn’t included those in the model.
January 31, 2026 at 9:34 AM
We trust humans even though they’re wrong all the time.

Yet when LLMs show similar behavior, we suddenly become philosophers.

Maybe the real question isn’t whether LLMs “think.”

It’s why we’re so invested in insisting they don’t.

Full post:

arthurmello.ai/blog/llms-do...
Arthur Mello | AI Agents & Marketing Measurement for Startups
I build AI agents, marketing mix models (MMM), and causal attribution systems for startups. Helping founders and growth teams make better marketing decisions with AI.
arthurmello.ai
January 21, 2026 at 3:14 PM
Because "inference" is exactly what these systems do, just not the way we do it.

Humans don’t think in neat logical steps either.

We pattern-match, guess, compete between options, self-correct, and occasionally hallucinate with confidence.

Different hardware, same mess.
January 21, 2026 at 3:14 PM
Someone in between, who can prioritize use cases and connect the dots to turn scattered initiatives into coherent transformation projects.

That’s who needs to redesign work this time.

P.S.: the Google thing is still cool, though. Check it out: cloud.google.com/transform/ho...
An effective AI strategy: How to build one | Google Cloud Blog
Leading organizations are establishing a clear vision, strategically prioritizing use cases, and rigorously measuring results to maximize AI's impact.
cloud.google.com
January 15, 2026 at 8:56 AM
Sounds great on paper, but hard in practice.

The real lever? A middle layer that understands both the pain on the ground and the limits of AI.

Not the CEO. Not the junior team.
January 15, 2026 at 8:56 AM
Most teams implement small tools (a chatbot here, an automation there), while leadership dreams of big transformation.

But the two don’t connect.

Google released an AI strategy framework that suggests a fix: combine top-down strategy with bottom-up ideas.
January 15, 2026 at 8:56 AM
Factories just swapped motors but kept layouts and processes the same.

Real productivity came only after everything was redesigned around the new tech.

We’re repeating the same approach with AI:
January 15, 2026 at 8:56 AM
We’re so caught up in protecting the "sanctity" of human thought that we’re missing the miracle in front of us.

Pragmatically? If it behaves like a system that thinks, it’s thinking. The rest is just semantics.

Read the full breakdown here: arthurmello.ai/blog/llms-do...
Arthur Mello | AI Agents & Marketing Measurement for Startups
I build AI agents, marketing mix models (MMM), and causal attribution systems for startups. Helping founders and growth teams make better marketing decisions with AI.
arthurmello.ai
January 10, 2026 at 1:25 AM
"You can't trust an LLM."

You shouldn't trust your coworker blindly, either.

Thinking doesn't mean being right; it means processing context to reach a conclusion. Whether it's neurons or silicon, the function is the same.

Planes don't flap their wings, but they still fly.
January 10, 2026 at 1:25 AM
Critics call AI a “Stochastic Parrot.” But humans are probabilistic too.

We forget names. We use "um" and "uh" while our brains calculate the next phrase. We guess the end of sentences.

If AI is a parrot, we are just "biological parrots" trained on 20 years of sensory data.
January 10, 2026 at 1:25 AM
Dictionary check: To think is to exercise the powers of judgment, conception, or inference.

Notice the "or."

LLMs use inference to power reasoning and judgment. By the literal definition of the word, they qualify. We just don't like the "hardware" they run on.
January 10, 2026 at 1:25 AM
The Science of Mom by Alice Callahan

Parenting advice that doesn’t insult your intelligence.
Grounded in actual research, not Instagram. Also a good reminder: most big life decisions deserve evidence, not vibes.

What about you? What stuck with you last year?
January 6, 2026 at 5:08 PM
The Housemaid by Freida McFadden

Fast, dark, addictive. The kind of thriller you read in one or two sittings. Predictable? A little. Satisfying? Completely. It's like good fast food :)
January 6, 2026 at 5:08 PM
Americanon by Jess McHugh

The story of American culture, told by looking at 13 best sellers over time. A look at the myths of the self-made man, the lone farmer, and a culture deeply skeptical of ‘experts.’
January 6, 2026 at 5:08 PM
The Acid Watcher Diet by Jonathan Aviv

If you deal with reflux (14% of people do), read this. It’s not just a diet book. It explains how your body reacts to food, and how to avoid acidity. Unfortunately, you actually need to change your diet (apparently just reading the book is not enough).
January 6, 2026 at 5:08 PM
Steve Jobs by Walter Isaacson

A genius and an asshole. Obsessed with craft, allergic to compromise. What stuck with me wasn’t the success, but the intensity: demanding that even the inside of a computer be beautiful. That kind of integrity is rare. So is that kind of chaos.
January 6, 2026 at 5:08 PM