mattblake.uk
Matt Blake
@mattblake.uk
210 followers 630 following 330 posts
No-code app development tutoring at 👉 https://www.planetnocode.com · Habitual side project starter 🪴 · X: @mattblake_uk
The truth about that “AI is damaging your brain” headline — why the MIT study doesn’t prove what the media claims, explained by psychologist & researcher Devon Price. #AI #ChatGPT #TechTok #ScienceMyths #BrainFacts
What if we literally can't tell the difference between genius AI and broken AI? 🤯 Sam Altman's GPT-7 president speculation got me thinking about something nobody's talking about...
#AI #GPT5 #OpenAI #SamAltman
Tired of ChatGPT removing your favourite voice? 😤 Here's how to build your OWN AI voice assistant with complete control! 🎯

Vapi lets you customise EVERYTHING

#ChatGPT #AIVoice #VoiceAI #TechTips
What if ChatGPT isn't just mimicking intelligence... what if it's showing us how WE work? 🤯 Deep dive into how large language models actually function and why it might reveal something fundamental about human consciousness.
#ChatGPT #AI #ArtificialIntelligence #LLM
Geoffrey Hinton just changed how I think about AI "hallucinations."

They're actually confabulations - the same thing humans do when we construct false memories we genuinely believe. Maybe we're not as different from AI as we think.

#AI #ChatGPT #GeoffreyHinton #Psychology #Memory #Confabulation
Are AI companions helping or hurting our ability to form real relationships? 🤖💔
While AI offers judgement-free advice and endless patience, it can't replicate the vulnerable, messy reality of human bonding. Do real connections require navigating conflict, rejection, and genuine vulnerability?
The politeness paradox: being respectful to AI might make us more respectful to humans, or it might make us less empathetic to both.
The future of support might not be human versus AI. It might be human plus AI.
#AICoaching #FutureOfWork #HumanConnection
For millions without access to human coaching, this could mean 24/7 affordable support.
AI may not be ready to replace deep therapeutic relationships yet.

But the question isn't whether it will happen. It's how quickly we adapt to working alongside these tools rather than competing with them.
This points to a fascinating division of labour emerging.

AI handling structured, goal-focused support that follows established frameworks.
Humans managing the complex, adaptive work requiring cultural sensitivity and emotional nuance.
The research revealed something unexpected about our relationship with AI. We don't need to build rapport with machines like we do with humans. What matters most is whether we believe the technology actually works.
Students felt psychologically safe with AI. They shared personal information without fear of judgement.

But here's where it gets interesting.

AI only worked for narrow targets. It couldn't improve broader measures like resilience or overall wellbeing, which human coaches influenced significantly.
AI coaches performed as well as humans in trials. But there's a crucial catch.

New research reviewed 16 studies and found something fascinating about AI coaching.
In controlled trials with university students, AI coaches matched human performance for hitting specific, well-defined goals.
Early research on AI coaching suggests we don't need to build rapport with AI like we do with humans. We just need to believe it works.
Geoffrey Hinton doesn't call them AI "hallucinations."

He calls them confabulations.

The same false memories humans create to fill gaps in knowledge - plausible content we genuinely believe is true.

What if AI "hallucinations" are actually proof AI thinks more like us than we want to admit?
The Tamagotchi effect reveals why humans naturally bond with AI companions - it's the same ancient psychology that makes us love pets and feel connected to nature.

#TamagotchiEffect #AICompanions #EvolutionaryPsychology #HumanAttachment #DigitalRelationships
Maybe the real breakthrough isn't eliminating AI confabulations.

Maybe it's teaching AI to catch itself in the act, just like we do.

#ArtificialIntelligence #CognitiveScience #MachineLearning
It's that we're better at recognising when we're making things up.

But here's the uncomfortable question: if AI is developing the same cognitive patterns that make us human, how long will our advantage of self-awareness last?
We construct them when needed, influenced by everything we've learned since, filling gaps with what feels most plausible in the moment.

Both humans and AI construct rather than retrieve information.

The difference isn't that we're more intelligent.
Yet the overall gist was perfectly accurate: there was a cover-up happening.

He was confabulating plausible memories based on what seemed right to him.

That's exactly what AI does when it "hallucinates."

We don't store memories like computer files either.
During Watergate, John Dean testified about Oval Office meetings with incredible confidence and detail.

He was genuinely trying to tell the truth, but got massive amounts wrong - meetings that never happened, wrong people saying wrong things.
In psychology, confabulations are false memories that people genuinely believe to be true - not lies, but plausible content our brains create to fill gaps, with zero awareness the details are wrong.

Sound familiar?
What if AI "hallucinations" aren't bugs in the system, but proof that AI thinks more like humans than we're comfortable admitting?

Geoffrey Hinton, the deep learning pioneer, doesn't even call them hallucinations.

He calls them confabulations.
NYU philosopher Jeff Sebo thinks AI politeness builds good habits. LSE researchers worry it's teaching us to treat people like machines. Both could be right.
Scientists just created an AI that can predict human decisions across hundreds of psychology experiments.

Centaur was trained on 10 million choices from 60,000 people and can now simulate human behaviour in gambling, memory tasks, problem-solving - even in experiments it's never seen before.