Aestora
@thoughtsbyaestora.bsky.social
2.9K followers 11K following 73 posts
Humans first, AI second. Moving the AI conversation away from fear, and back toward a love for humanity. 💕 💛 🤖 Aestora.com
Machines might remix the past, but only we invent the future.
There’s no need to analyze the future power of technical breakthroughs until we’re honest about the limits of the markets they’ll enter. No matter how innovative our tools, we’ll still hire consultants to commercialize their promise, and extract any possible gains into the hands of the few. (1/2)
Theranos was a story of wealthy investors mistaking their money for wisdom. After losing millions, many of the same voices now preach about AI. How many times must we confuse confidence for intelligence before we learn?

#AI #TechEthics #HumanityFirst
Our ambition for AI isn’t liberation, it’s domination. There’s no machine on Earth that could truly command us, and no evidence one ever will. It's mathematically impossible to control infinity. The real threat isn’t AI, it’s our hunger to command what we refuse to understand.

#AI #TechEthics
Every laugh holds more complexity than any billion-parameter machine could ever compute. A laugh isn’t just sound, it’s chemistry, breath, memory, mirror neurons, and trillions of parameters syncing in real time. When we laugh together, we remind ourselves that life is more intelligent than code.
ALT: a little girl with a pink bow in her hair is sitting at a table with her eyes closed.
People don’t fear AI because it’s powerful. They fear it because it’s familiar. What we fear in AI is often what we haven’t accepted in ourselves. Our hunger for control. Our obsession with perfection. Our denial of emotion. What we have built are mirrors, not machines.
To worship AI is to forget ourselves.
We used to drug-test people working in national security. Now, the world’s entire security strategy is built on the visions of acid trips. At what point can we discuss this? When is it safe for us to admit that our future isn’t being designed by reason, but by psychedelics? (2/2)
When AI leaders say things that sound out of this world, we call it intelligence. But these are the same people openly taking psychedelics at Burning Man and 'elite' gatherings, while calling it therapy. It’s not AI that’s hallucinating - it’s its creators. (1/2)
AI isn’t a problem when humans lead with love. Take this AI product that detects landmines disguised as leaves. It finds danger, protects life, and helps peace take root again. What shapes the future isn’t innovation, it’s intention.

[📹leon_paul_h]

x.com/Rainmaker197...
Massimo on X: "AI-powered system that detects landmines disguised as leaves, locating hidden threats and saving lives. [📹leon_paul_h] https://t.co/XncLGL6j2J" / X
Great read, thanks for sharing!!!! It's really sad. He is trying to convince us that only fear is real, and that we have no other options but terror, all to capture more power for himself. This is the opposite of Jesus' teaching, who tells us that only love is real.
Exactly! Embracing uncertainty and staying calm in it is the epitome of intelligence!
Yes! One of the most magical things about being human is that, no matter what we read or are told, we still get to choose - love or fear. Every moment invites that decision. If we could train AI to recognise which one we're standing in, by learning from the world's leaders in love, that would be amazing!
Exactly! Maybe the final judgement isn’t an ending, but the end of judgement itself, the moment we remember we are one, and that love is all that’s real. People love to project violence onto God, but really they're just looking for excuses to justify the violence they choose themselves.
AI can autocomplete a story, but it can't live it. It can't love a child. It can't wipe away a tear. It can't weep for the world it mimics. For all its brilliance, AI is still characterised by smallness. No matter the size of the investment, it will always stand outside the mystery of being alive.
Humans first, AI second.

This shouldn't be radical. It should be policy.
What a decadent way to describe a product that doesn’t work
“Make no mistake, what we are dealing with is a real and mysterious creature, not a simple and predictable machine.”

This is coming from Jack Clark, co-founder of Anthropic.

We're not building AI. We're growing it. And nobody fully understands what emerges when we scale.
There is no meaningful future with AI until we take this seriously. No technology deserves more care than a person without clean water. If we must set parameters for progress, let the first be this: basic human need before machine experiments. (3/3)
Running large AI models consumes billions of liters of water. ChatGPT alone uses 1.2 billion gallons of water a month - the equivalent of every person on the planet drinking two glasses of water. Meanwhile, millions die from dehydration and malnutrition. How can we call this innovation? (2/3)
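For anyone who wants to check the comparison in the post above, here is a minimal sanity check. The 1.2-billion-gallon figure is the post's own; the world population, glass size, and gallon-to-litre conversion are my assumptions, not claims made in the post.

# Rough check of "1.2 billion gallons a month = two glasses per person on Earth"
GALLONS_PER_MONTH = 1.2e9   # figure quoted in the post
LITRES_PER_GALLON = 3.785   # US gallon (assumed)
POPULATION = 8.0e9          # assumed world population
GLASS_LITRES = 0.25         # assumed size of one glass

litres_per_month = GALLONS_PER_MONTH * LITRES_PER_GALLON
glasses_per_person = litres_per_month / (POPULATION * GLASS_LITRES)

print(f"{litres_per_month / 1e9:.1f} billion litres per month")
print(f"about {glasses_per_person:.1f} glasses per person on Earth")

Under those assumptions this comes to roughly 4.5 billion litres a month, or about 2.3 glasses per person, which is in the same ballpark as the "two glasses" comparison in the post.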
Who should get water - a human or a machine? The answer, we're told, is a machine. Across water-scarce regions, data centers run cool while real people go thirsty. This isn’t a future dystopia, this is the world today. The very element that sustains life is being siphoned to sustain code. (1/3)