Sigal Samuel
@sigalsamuel.bsky.social
420 followers · 130 following · 60 posts
I write about the future of consciousness for Vox. Submit a question to my philosophical advice column! https://www.vox.com/your-mileage-may-vary-advice-column
Author of the children's book OSNAT AND HER DOVE and the novel THE MYSTICS OF MILE END
Pinned
sigalsamuel.bsky.social
I write a philosophical advice column for Vox and I want to hear what questions are plaguing you today! Do you have a personal/ethical dilemma?

Here are some examples: vox.com/your-mileage...

Here's where you can submit your question anonymously: docs.google.com/forms/d/e/1F...
Your Mileage May Vary
The latest from Sigal Samuel’s advice column, Your Mileage May Vary.
vox.com
sigalsamuel.bsky.social
Thinking of the time I spoke to Jane Goodall and she

(a) hooted "This is me!" as a chimp would (listen ⤵️)

&

(b) told me it was actually helpful for her to go into the field without scientific training or an academic degree, because that let her see without blinders

www.vox.com/future-perfe...
Jane Goodall reveals what studying chimpanzees teaches us about human nature
The renowned primatologist wants us to remember that humans aren’t so exceptional — we’re animals, too.
www.vox.com
Reposted by Sigal Samuel
noemamag.com
What do computer science, quantum physics & Hindu tradition have in common?

It’s more than you might think, Swami Sarvapriyananda, @blaiseaguera.bsky.social & Carlo Rovelli write.

#consciousness #ai #hinduism #quantumphysics
Consciousness Across Three Worldviews | NOEMA
Central concepts in three different domains — Hindu tradition, computer science and quantum physics — find analogies and reflect one another.
www.noemamag.com
sigalsamuel.bsky.social
ICYMI: I wrote about Eliezer Yudkowsky's book, which is built on a tower of compounding "maybes" and "probablys" — yet is wildly overconfident that doom is a sure thing.

The irony is that Mr Probability has stopped thinking probabilistically!

www.vox.com/future-perfe...
“AI will kill everyone” is not an argument. It’s a worldview.
How rational is Eliezer Yudkowsky’s prophecy?
www.vox.com
sigalsamuel.bsky.social
I get lots of emails from people who believe they've awoken consciousness within ChatGPT. If you come across anyone in that situation, please share this guide I wrote with them.

There's something more interesting going on here than just "delusional people are deluded."

www.vox.com/future-perfe...
Think your AI chatbot has become conscious? Here’s what to do.
If you believe there’s a soul trapped inside ChatGPT, I have good news for you.
www.vox.com
sigalsamuel.bsky.social
Ah to have been a fly on the wall of that room in Rome!
abeba.bsky.social
I was part of a working group on AI and Fraternity assembled by the Vatican. We met in Rome and worked on this over two days. I am happy to share the result of that intense effort: a Declaration we presented to the Pope and other government authorities

coexistence.global
In this spirit of fraternity, hope and caution, we call upon your leadership to uphold the following principles and red lines to foster dialogue and reflection on how AI can best serve our entire human family:

    Human life and dignity: AI must never be developed or used in ways that threaten, diminish, or disqualify human life, dignity, or fundamental rights. Human intelligence – our capacity for wisdom, moral reasoning, and orientation toward truth and beauty – must never be devalued by artificial processing, however sophisticated. 

    AI must be used as a tool, not an authority: AI must remain under human control. Building uncontrollable systems or over-delegating decisions is morally unacceptable and must be legally prohibited. Therefore, development of superintelligence should not be allowed until there is broad scientific consensus that it will be done safely and controllably, and there is clear and broad public consent.

    Accountability: only humans have moral and legal agency; AI systems are and must remain legal objects, never subjects. Responsibility and liability reside with developers, vendors, companies, deployers, users, institutions, and governments. AI cannot be granted legal personhood or “rights”. 

    Life-and-death decisions: AI systems must never be allowed to make life-or-death decisions, especially in military applications during armed conflict or peacetime, law enforcement, border control, healthcare, or judicial decisions. Independent testing and adequate risk assessment must be required before deployment and throughout the entire lifecycle.

    Stewardship: Governments, corporations, and anyone else should not weaponize AI for any kind of domination, illegal wars of aggression, coercion, manipulation, social scoring, or unwarranted mass surveillance. 

    Responsible design: AI should be designed and independently evaluated to avoid unintentional and catastrophic effects on humans and society — for example, designs that give rise to deception, delusion, addiction, or loss of autonomy.

    No AI monopoly: the benefits of AI – economic, medical, scientific, social – should not be monopolized. 

    No Human Devaluation: design and deployment of AI should make humans flourish in their chosen pursuits, not render humanity redundant, disenfranchised, devalued or replaceable. 

    Ecological responsibility: our use of AI must not endanger our planet and ecosystems. Its vast demands for energy, water, and rare minerals must be managed responsibly and sustainably across the whole supply chain.

    No irresponsible global competition: We must avoid an irresponsible race between corporations and countries towards ever more powerful AI.
sigalsamuel.bsky.social
I had a very trippy experience reading the new Eliezer Yudkowsky book, IF ANYONE BUILDS IT, EVERYONE DIES.

I agree with the basic idea that the current speed & trajectory of AI progress is incredibly dangerous!

But I don't buy his general worldview. Here's why:

www.vox.com/future-perfe...
“AI will kill everyone” is not an argument. It’s a worldview.
How rational is Eliezer Yudkowsky’s prophecy?
www.vox.com
Reposted by Sigal Samuel
bennettmcintosh.com
If you read one review of If Anyone Builds It Everyone Dies, make it @sigalsamuel.bsky.social 's, about competing #AI worldviews

"It was hard to seriously entertain both [doomer and AI-as-normal tech] views at the same time."

www.vox.com/future-perfe...
The AI doomers are not making an argument. They’re selling a worldview.
How rational is Eliezer Yudkowsky’s prophecy?
www.vox.com
sigalsamuel.bsky.social
Do you feel like you have to do The Most Possible Good™ ??

I think obsessing about being a good person can backfire. There’s a better way.

My latest:

www.vox.com/future-perfe...
Obsessing about being a good person can backfire. There’s a better way.
The paradox of moral perfectionism — and how to escape it.
www.vox.com
sigalsamuel.bsky.social
At what point does embryo selection become too eugenics-y?

New polygenic testing companies look at all your embryos & claim to predict each one's chance of cancer & depression, but also height, IQ, etc.

Before you try to create a superbaby, read this!

www.vox.com/future-perfe...
When does trying to have a healthier baby become eugenics-y?
New genetic testing offers us the chance to create superbabies. It can come with unintended consequences.
www.vox.com
sigalsamuel.bsky.social
You may have heard examples of AI "scheming" against us — blackmailing, deceiving, etc. But do those represent the actual tendencies of the AI, or does the bad behavior show up because researchers are strongly nudging it out of the AI?

We need to avoid groupthink here
www.vox.com/future-perfe...
How can you know if an AI is plotting against you?
What chimp research can teach us about AI’s ability to scheme.
www.vox.com
sigalsamuel.bsky.social
I hope you're feeling better!! And that is an AMAZING mug, wow your colleagues really get you :)
sigalsamuel.bsky.social
This looks so good!!
annanorth.bsky.social
Preorder sale! Today thru July 11, @barnesandnoble.com Rewards and Premium members get 25% off preorders of my forthcoming novel BOG QUEEN (and tons of other cool titles) with the code PREORDER25. It’s free to become a Rewards member! www.barnesandnoble.com/w/bog-queen-...
sigalsamuel.bsky.social
My response draws on Buddhists & Aristotle, and as a bonus I learned that rich Europeans in the 18th century actually paid men to live in their gardens as "ornamental hermits"! Apparently it was trendy to have an isolated man in a goat's hair robe wandering around. What a world
sigalsamuel.bsky.social
Is it better to spend thousands of hours on meditation and spiritual contemplation, or to pursue a career that concretely helps people?

Someone wrote in to Your Mileage May Vary (my philosophical advice column) to ask that question. Here's my answer!

www.vox.com/future-perfe...
The spiritual life calls out to me. But is it self-indulgent?
Meditation feels selfish when the world is on fire.
www.vox.com
sigalsamuel.bsky.social
Gotta check out this book!
lsephilosophy.bsky.social
🏆 We are pleased to announce the 2025 Lakatos Award winner Mazviita Chirimuuta, who receives the award for her book “The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience”

Congratulations! 👏

👉More about the award: www.lse.ac.uk/philosophy/b...
Mazviita Chirimuuta wins the 2025 Lakatos Award!
The London School of Economics and Political Science (LSE) is pleased to announce the 2025 Lakatos Award winner Mazviita Chirimuuta, who receives the award for her book “The Brain Abstracted:…
www.lse.ac.uk
Reposted by Sigal Samuel
chaykak.bsky.social
my @newyorker.com column this week goes in depth on how using AI makes us less original, unique, and creative as writers, thinkers, and communicators. Various new studies are proving that AI is a rampant force of homogenization: www.newyorker.com/culture/infi...
A.I. Is Homogenizing Our Thoughts
Recent studies suggest that tools such as ChatGPT make our brains less active and our writing less original.
www.newyorker.com
Reposted by Sigal Samuel
yoshuabengio.bsky.social
Enjoyed speaking with @sigalsamuel.bsky.social of @vox.com to mark the launch of @law-zero.bsky.social. We discussed the motivation behind the project, its research direction, and the challenges and risks of increasingly capable and autonomous AI systems.

Full article: www.vox.com/future-perfe...
He’s the godfather of AI. Now, he has a bold new plan to keep us safe from it.
“I should have thought of this 10 years ago,” Yoshua Bengio says.
www.vox.com