Jeff Sebo
@jeffsebo.bsky.social
1.7K followers · 120 following · 350 posts
Associate Professor of Environmental Studies, Director of the Center for Environmental and Animal Protection, Director of the Center for Mind, Ethics, and Policy, and Co-Director of the Wild Animal Welfare Program, New York University. jeffsebo.net
Pinned
jeffsebo.bsky.social
My next book, The Moral Circle, is available for preorder!

This book examines the moral status of insects, AI systems, and many other nonhumans.

The book is out January 28, 2025, in both print and audio. 1/

wwnorton.com/books/978132...
Reposted by Jeff Sebo
mbolotnikova.bsky.social
Goodall was such an icon because she did & said things that were heresy in the scientific community. Her work on animals' capacities represented not just an abstract finding but a practical ethic that led her to advocate for veganism & vocally oppose animal experimentation.
www.vox.com/future-perfe...
Jane Goodall’s most radical message was not about saving the planet
The conservationist used her stature to advocate for one of the most important, yet most unpopular, causes in the world.
Reposted by Jeff Sebo
wwnorton.com
Will AI systems ever become sentient? How should we treat them if we feel uncertain? Check out @jeffsebo.bsky.social's TEDx talk on AI sentience.
youtu.be/yEfvhjujKSY
Are we even prepared for a sentient AI? | Jeff Sebo | TEDxNewEngland
YouTube video by TEDx Talks
jeffsebo.bsky.social
The NYU Center for Mind, Ethics, and Policy recently hosted Cass Sunstein for a public talk on a bill of rights for animals. The recording is now online; feel free to share it with anyone who may be interested!

youtube.com/watch?v=P2Y16xw8sZ4&feature=youtu.be
Cass Sunstein, "A Bill of Rights for Animals"
YouTube video by NYU Center for Mind, Ethics, and Policy
jeffsebo.bsky.social
Ann Linder, Colin Jerolmack, and I published a letter in Science about the importance of addressing industrial animal agriculture's public health impacts when developing a response to bird flu. (You can find it below the target article, as a response.)
The consequences of letting avian influenza run rampant in US poultry
The approach proposed by a high-ranking US government official would be dangerous and unethical
jeffsebo.bsky.social
19/ You can find my talk on AI welfare here:
tedxnewengland.com/speakers/jef...

Hope you enjoy! For more, see:

- The Moral Circle
wwnorton.com/books/978132...

- Moral Consideration for AI Systems by 2030
link.springer.com/article/10.1...

- Taking AI Welfare Seriously
arxiv.org/abs/2411.00986
jeffsebo.bsky.social
18/ If there are risks in both directions, then we should consider them both, not consider one while neglecting the other. And even if the risk of under-attribution is low now, it may increase fast. We can, and should, address current problems while preparing for future ones.
jeffsebo.bsky.social
17/ However, Suleyman also describes our work on moral consideration for near-future AI as “premature, and frankly dangerous,” implying that we should consider and mitigate over-attribution risks but not under-attribution risks at present. Here we disagree.
jeffsebo.bsky.social
16/ FWIW, I agree with Suleyman on many issues, including: (1) Over-attribution risks are more likely at present, (2) We should avoid creating sentient AI unless we can do so responsibly, and (3) We should avoid creating non-sentient AI that seems sentient.
jeffsebo.bsky.social
15/ Public attention has exploded as well. Many now experience chatbots as sentient, and experts are rightly sounding the alarm about over-attribution risks, including Microsoft AI CEO Mustafa Suleyman in his recent essay on “seemingly conscious AI.”

mustafa-suleyman.ai/seemingly-co...
We must build AI for people; not to be a person
jeffsebo.bsky.social
14/ I recorded this talk last year. Since then, we released “Taking AI Welfare Seriously,” and Anthropic hired one of the authors as an AI welfare researcher, launched an AI welfare program, and (with Eleos AI) conducted AI welfare evals. Other actors entered the space too.
jeffsebo.bsky.social
13/ For the rest of us: We can accept that we may be the first generation to co-exist with real sentient AI. Either way, we can expect to keep making mistakes about AI sentience. Preparing now — cultivating calibrated attitudes and reactions — is important for everyone.
jeffsebo.bsky.social
12/ For companies and governments, taking AI welfare seriously means acknowledging that AI welfare is a credible issue, assessing AI systems for welfare-relevant features, and preparing policies for treating AI systems with an appropriate level of moral concern.
jeffsebo.bsky.social
11/ We use these tools in a variety of domains. We use them to address drug side effects, pandemic risks, and climate change risks. Increasingly, we use them to address animal welfare risks and AI safety risks. In the future, we can use them to address AI welfare risks as well.
jeffsebo.bsky.social
10/ Fortunately, we have tools for making high-stakes decisions with uncertain outcomes. When there is a non-negligible chance that an action or policy will cause harm, we can assess the evidence and take reasonable, proportionate steps to mitigate risk.
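To make that concrete, here is a minimal sketch of the simplest such tool, expected-value reasoning. The function and the numbers are hypothetical illustrations, not from the talk or the report:

```python
def expected_harm(probability: float, magnitude: float) -> float:
    """Expected harm of an outcome: its probability times its magnitude."""
    return probability * magnitude

# Hypothetical numbers for illustration only: a 1% chance of a very
# large harm can outweigh a 90% chance of a small one, which is why
# "unlikely" does not mean "safe to ignore".
unlikely_but_severe = expected_harm(0.01, 1000.0)  # 10.0
likely_but_mild = expected_harm(0.90, 5.0)         # 4.5
print(unlikely_but_severe > likely_but_mild)       # True
```

On this picture, proportionate precaution means scaling the response to the expected harm, not to the probability alone.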
jeffsebo.bsky.social
9/ Yet if this analysis is correct, then AI sentience is not an issue only for sci-fi or the distant future. There is at least a non-negligible chance that AI systems with real feelings could emerge in the near future, given current evidence. What do we do with that possibility?
jeffsebo.bsky.social
8/ This situation calls for humility as well. Even if you feel confident that progress in AI will slow from here, you should allow for at least a realistic chance that it will speed up or stay the same, and that AI systems with human-like capabilities will exist by, say, 2035.
jeffsebo.bsky.social
7/ We may not know until it happens. Technology is hard to predict. In 2015, many doubted that AI systems would be able to have conversations, produce essays and music, and pass standardized tests in a range of fields within a decade. Yet here we are.
jeffsebo.bsky.social
6/ Second, we have uncertainty about the future of AI. Companies are spending billions on progress. They aim for intelligence, not sentience, but intelligence and sentience may overlap. Some think AI will slow down, others think it will speed up. Which view is right?
jeffsebo.bsky.social
5/ This situation calls for humility. We may lean one way or the other, but we should keep an open mind. Even if you feel confident that only biological beings can feel, you should allow for at least a realistic chance that sufficiently advanced artificial beings can feel, too.
jeffsebo.bsky.social
4/ We may never know for sure. The only mind that any of us can directly access is our own, and we have a lot of bias and ignorance about other minds, including a tendency to (a) over-attribute sentience to some nonhumans and (b) under-attribute it to others.
jeffsebo.bsky.social
3/ First, we have uncertainty about the nature of sentience. Some experts think that only biological, carbon-based beings can have feelings. Others think that sufficiently advanced artificial, silicon-based beings can have feelings too. Which view is right?
jeffsebo.bsky.social
2/ Based on my 2024 report with Robert Long and others, this talk makes three basic points: (1) we have deep uncertainty about the nature of sentience, (2) we have deep uncertainty about the future of AI, and (3) when in doubt, we should exercise caution.

tedxnewengland.com/speakers/jef...
jeffsebo.bsky.social
1/ My TEDx talk “What Do We Owe AI?” is now live! AI is advancing fast, and our relationships with AI systems are changing too. Some think AI could soon be sentient and deserve care. Are they right? What if the only honest answer is: maybe? 🧵+🔗👇