Sam Barrett, PhD
@ai4geo.bsky.social
GeoAI, Climate, Remote Sensing, Generative AI and more!
Just as there are different ways of collaborating with people, there are different ways of collaborating with AI. And they have different implications for the style of work and the outcome.
Ed @ed3d.net · 9h
the difference between somebody just blasting out an outline with chatgpt and somebody who uses it as a stenographer is huge and obvious.

press record, brain dump, have it ask questions (don't have it answer them), have it organize into notes. it's fantastic.

have it write for you? it, and you, will suck
“When participants used ChatGPT to draft essays, brain scans revealed a 47% drop in neural connectivity across regions associated with memory, language, & critical reasoning.

Their brains worked less, but they felt just as engaged—a kind of metacognitive mirage.”🧪
December 4, 2025 at 3:14 PM
The people who say you *have* to learn a lot of abstract maths before you can start learning, let alone working, with ML/AI should know that this structure excludes a lot of ADHD folks who struggle when things are abstract but thrive when they are applied.
December 1, 2025 at 11:00 AM
Reposted by Sam Barrett, PhD
For me, the promise of language models is that we get to explore new ways of thinking. What if you could fit a million words in short-term memory? What if you could adjust the balance between brainstorming and critique?

Are these better ways of thinking? Idk. But we never know that in advance. +
November 30, 2025 at 3:18 PM
Gemini quote of the day: "You need to let them walk right into the trap of understanding.
Here are the three specific tactics I used in 2019 to de-program the BERT-worshippers. They will work on the Earth Observation folks too."
November 26, 2025 at 12:04 AM
This would be perfect. Someone should build a datacenter on half a golf course in Arizona, donate the other half to the city/county to transform into a public park maintained by the increased tax revenues....
They should build the datacenter, ON the golf course!
November 20, 2025 at 7:28 AM
I've got another different takeaway from this. If you're using a number off by 1000 and nobody notices, you are providing insufficient context about that number. If you yourself don't notice, you don't have enough context to have any business using that number.
I think this incident reflects badly on the broader institutions currently covering AI and the environment. I should absolutely not have been the first person to notice this. From the end of the post:
November 19, 2025 at 4:11 PM
If your model was born to make maps, it will never grow up to make decisions.
November 14, 2025 at 8:08 PM
Simple mechanism -> monstrous complexity via scale.
November 12, 2025 at 10:52 PM
This is "vibe reading".
November 12, 2025 at 8:59 AM
My biggest takeaway from playing Vic3 is that economies are REALLY weird and non-intuitive and full of strange feedbacks and non-linear relationships.
November 11, 2025 at 8:03 AM
Academic NLP folks: if you had to review a paper doing something like sentiment analysis on embeddings, which used random forests as classifiers on the embeddings and feature importance or SHAP to try to interpret which dims are relevant to particular semantics, what would your general reaction be?
November 3, 2025 at 10:06 AM
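For concreteness, a minimal sketch of the pipeline the post above describes: a random forest trained on sentence embeddings for a sentiment label, with SHAP attributions averaged to rank embedding dimensions. The data, shapes, and names are all illustrative, not from any actual paper.

    # Illustrative sketch only: random data stands in for real embeddings/labels.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 384))   # stand-in for 500 sentence embeddings
    y = rng.integers(0, 2, size=500)  # stand-in binary sentiment labels

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Per-dimension SHAP attributions; averaging |values| across examples
    # ranks dimensions by apparent relevance to the label.
    explainer = shap.TreeExplainer(clf)
    shap_values = explainer.shap_values(X)
    # Older shap versions return a per-class list, newer ones a 3D array.
    vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
    importance = np.abs(vals).mean(axis=0)
    top_dims = np.argsort(importance)[::-1][:10]
    print("dims most associated with the label:", top_dims)

The obvious reviewer objection: individual embedding dimensions aren't a privileged basis, so per-dimension attributions may not correspond to any particular semantics at all.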
My handle is ai4geo, but I mostly write about that over at LinkedIn. Here's something I just put out about generative AI in Earth Observation: arxiv.org/abs/2510.21813
SITS-DECO: A Generative Decoder Is All You Need For Multitask Satellite Image Time Series Modelling
Earth Observation (EO) Foundation Modelling (FM) holds great promise for simplifying and improving the use of EO data for diverse real-world tasks. However, most existing models require additional ada...
November 2, 2025 at 7:56 AM
Good distinction on personal interactions. Last night I took an incredibly intellectually productive stroll: I used chatgpt with voice interactions (transcribe/read aloud, not advanced voice mode) in 4 different threads to:
The public web getting choked with low-information LLM-generated blogs filled with the worst of the sycophantic, condescending, 5 paragraph essay style outputs is just a totally different beast than the personal interaction asking an LLM to explain KAM theory with probing and clarifying questions.
November 2, 2025 at 7:52 AM
I just gave GPT-5 Pro a manuscript with around 50 references lazily pasted in as URLs in the text and asked it to generate the .bib file I'll need when I convert to LaTeX. I'll check it in depth. How many mistakes do we expect?
October 13, 2025 at 4:51 PM
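For that in-depth check, a minimal sketch, assuming the output parses as BibTeX at all; bibtexparser's v1 API, the filename, and the requests-based link checking are my assumptions, not anything GPT-5 Pro produces.

    # Hypothetical sanity check for an LLM-generated .bib: confirm each entry
    # has basic fields and that any URL/DOI actually resolves.
    import bibtexparser
    import requests

    with open("references.bib") as f:  # illustrative filename
        db = bibtexparser.load(f)

    for entry in db.entries:
        key = entry.get("ID", "<no key>")
        for field in ("author", "title", "year"):
            if field not in entry:
                print(f"{key}: missing {field}")
        url = entry.get("url") or (
            "https://doi.org/" + entry["doi"] if "doi" in entry else None
        )
        if url:
            try:
                r = requests.head(url, allow_redirects=True, timeout=10)
                if r.status_code >= 400:
                    print(f"{key}: {url} returned {r.status_code}")
            except requests.RequestException as e:
                print(f"{key}: could not reach {url} ({e})")

A resolving URL still won't catch the likelier failure mode, an entry whose metadata quietly doesn't match the paper behind the link, so the field-by-field read is still needed.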
This statement is also pretty relevant in the world of Earth Observation re different sensors and other modalities. And that last sentence very well sums up my own explorations in EO modelling recently, though on a much smaller scale.
More broadly, I think confusion has been created by forming hard distinctions between different modalities, especially between text and sensory data. These distinctions can obscure commonalities. We take the rhetorical stance of erasing the distinctions, and seeing where this leads.

8/9
October 13, 2025 at 1:59 PM
To all the "we know how LLMs work and therefore X" folks - understanding attention and gradient descent doesn't tell you stuff like this. On at least some level we *don't* know how LLMs actually do their thing and are slowly figuring out even just extremely simple things like addition.
We did not tell LLMs, “implement addition using this algorithm.” It learned the algorithm upstream of next-token prediction
October 13, 2025 at 1:48 PM
FFS Pulse! 5 times now!
October 7, 2025 at 9:44 PM
Lovely that ChatGPT Pulse does me a Spanish lesson each day but I already knew "guagua" even before the 1st of the 4 times it's tried to teach me in the last 10 days.
October 6, 2025 at 8:32 AM
If I were a Starfleet recruiter, I'd consider "makes strong claims to knowledge of capabilities and properties of incredibly complex systems based on a mechanistic understanding of the basic components of said systems" a major red flag.
October 5, 2025 at 11:46 PM
Reposted by Sam Barrett, PhD
A common confusion I'm seeing is people mixing levels of analysis wrt neural nets: we understand the implementation level well and the algorithmic level somewhat but not the computational level of "how does it internally compute things."
October 5, 2025 at 3:55 PM
Seems like a good moment to remind people that arguments from analogy are useful for explaining the shape of an argument but not for proving something to be true. Analogies aren't the thing itself, and that is why arguments from analogy are a logical fallacy...
October 5, 2025 at 3:17 PM
ChatGPT Pulse regularly writes me long articles explaining why I should use 'viridis' as a colour map because a couple weeks back I pasted in some code with 'spectral'.
October 3, 2025 at 1:47 PM
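(For reference, the swap Pulse keeps pitching is a one-liner; matplotlib's actual map names are 'Spectral' and 'viridis', and the data below is illustrative.)

    import matplotlib.pyplot as plt
    import numpy as np

    data = np.random.rand(10, 10)  # illustrative data
    # The swap in question: 'Spectral' is a diverging map and not perceptually
    # uniform; 'viridis' is perceptually uniform and greyscale-safe.
    plt.imshow(data, cmap="viridis")  # was cmap="Spectral"
    plt.colorbar()
    plt.show()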
This is a fascinating thread about sycophancy. Though I feel like what's described toward the end isn't really sycophancy. In our concern, maybe we've too quickly lumped all compliments into one category when some of them have a genuine and safe conversational function.
I've been thinking about LLM sycophancy (the tendency of ChatGPT/etc complimenting the user excessively), and wondering if it has more purpose than simple aggrandizement of the user. That is: is it serving other valid conversational purposes? First an example...
September 30, 2025 at 9:17 AM
Reposted by Sam Barrett, PhD
It is really important that environmentalists be numerate.

We are not going to turn off civilization, we are trying to sculpt it into a more benevolent shape, so it is essential that we accurately perceive that shape.
September 28, 2025 at 5:47 PM
I find LLMs (from experience, the GPT series) are great at riffing on and extending from what you're discussing, but there's this conceptual cliff. They rarely spontaneously say "there's this adjacent topic which is relevant, do you want to bring that in?". Great teachers DO do that.
September 28, 2025 at 4:04 PM