Shannon Vallor

Shannon Vallor is an American philosopher of technology. She is the Baillie Gifford Chair in the Ethics of Data and…

H-index: 18
Fields & subjects: Neuroscience 30%, Computer science 22%
shannonvallor.bsky.social
I am stealing and using this on every AI presentation I give for the rest of the year and I’m not even a little bit sorry
cowtoolsdaily.bsky.social
No Country for Old Cows (2007)
Edited screenshots from No Country for Old Men. Anton Chigurh (with cow horns and cow ears) sits in a hotel chair and asks someone across from him, "If the tool you followed brought you to this, of what use was the tool?"

In a zoomed-out view, Anton sits silently behind a table with the four cow tools, looking at his victim as if studying him.

Reposted by: Shannon Vallor

davidgerard.co.uk
remember how Caroline Ellison, formerly of FTX and now of jail, was an Effective Altruist deeply concerned about the suffering of wild fish and *also* such an extreme race scientist she theorised about the genetic basis of Indian castes

rationalists, man
trance.bsky.social
how about you care about actual fucking people on this planet before you start wondering if your toaster has a soul.

Once everyone materially demonstrates they care for all people on this planet and we deal WITH ALL THAT and all the animals too

then we can talk about your nonsense.

Reposted by: Shannon Vallor

wolvendamien.bsky.social
I have been saying for a few years now that people really need to take a seriously long look at the center of that Venn diagram where the circles are marked "AI Culture" and "Disregard For/Violation Of Consent." The whole "AI Actress" situation is more of why.
shannonvallor.bsky.social
I have duly considered what made me okay with saying ‘I need to eliminate the toxic mold growing in my shower.’ Guess what it all checks out
hailey.at
if you’re writing a sentence that sounds like eugenics but you go “oh that’s fine to say because it’s not a real person” (whatever that means) you may want to consider what made you okay with saying that

Reposted by: Shannon Vallor

ianboudreau.com
I just took the thing you were talking about and imagined it being about something else. I bet you feel pretty bad now huh

jensfoell.de
People are running stats on LLM-generated participants and think they’re being social scientists when in fact they’re technically just playing a very strange video game. This is like saying you’re doing math research because you’re playing sudoku.

www.science.org/content/arti...
AI-generated ‘participants’ can lead social science experiments astray, study finds
Data produced by “silicon samples” depends on researchers’ exact choice of models, prompts, and settings
www.science.org
shannonvallor.bsky.social
Feeling this one in my bones today
internethippo.bsky.social
"We're going to create superintelligence" How about making outlook search work first. How about that
shannonvallor.bsky.social
Looking forward to welcoming my old friend @jpsullins.bsky.social to Edinburgh! You can join his lecture online or in person
edfuturesinstitute.bsky.social
Join us on Wed 15 October for the Centre for Technomoral Futures flagship lecture.

Learn about the surprising role human wisdom is playing to help us navigate the challenges of AI technologies and create a more humane future, with Professor John Sullins.

💳 Free to attend
🎟️ https://edin.ac/3Kw3VEQ
CTMF Flagship Lecture: Wisdom for an Artificial Age - Edinburgh Futures Institute
Hear from Professor John P. Sullins on the surprising role human wisdom is playing in navigating the challenges of AI technologies.
edin.ac

shannonvallor.bsky.social
Still so moved by what the BRAID-commissioned artists did to bring this exhibition to life! If you missed Tipping Point, check out the blog and the video in the thread below.
braiduk.bsky.social
It's one month since we wrapped up Tipping Point: Artist Responses to AI, our exhibition featuring seven new artworks by artists from across the UK. Here BRAID community member Alasdair Milne offers his reflections on the exhibition in a new blog.

braiduk.org/reflecting-o...
Reflecting on Tipping Point: Artist Responses to AI
7–31 August 2025 | Inspace Gallery, Edinburgh. In the summer of 2025, Bridging Responsible AI Divides staged an exhibition of newly commissioned artworks on the theme of Responsible AI by artist…
braiduk.org
shannonvallor.bsky.social
There are almost certainly nonhuman animal capabilities that we don’t even notice because we are only looking for, or prepared to measure, obvious analogues to human ‘intelligent’ performances
shannonvallor.bsky.social
The ‘qualities of intelligence’ you identify, even in humans, will vary massively depending on who you ask and what they can measure or find it worthwhile to measure. I have had multiple people in tech tell me that exploratory play and artistic generativity are not intelligent behavior.
shannonvallor.bsky.social
okay then do we agree that conversations at the Royal Society can probably dispense with it? Kinda of the mind that scientists should avoid concepts that don’t mean anything unless there is no alternative
shannonvallor.bsky.social
If the term were AGLIC: ‘artificial general linguistic interface capacity’ and not AGI we might be on the same page but AGI is clearly the more amorphous and scientifically imprecise concept of the two
milesklee.bsky.social
the bedtime thing rips my heart in half. i would not have a fraction of the creativity i possess without my dad not only reading us books but literally making up stories on the spot every night. cannot express how lucky i was and how sad i am for children being raised by chatbots
joolia.bsky.social
Some parents are letting their kids talk to ChatGPT in the guise of characters. Some are using it to tell bedtime stories or create coloring books.

"My son thinks ChatGPT is the coolest train loving person in the world. The bar is set so high now I am never going to be able to compete with that.”
‘My son genuinely believed it was real’: Parents are letting little kids play with AI. Are they wrong?
Some believe AI can spark their child’s imagination through personalized stories and generative images. Scientists are wary of its effect on creativity
www.theguardian.com
shannonvallor.bsky.social
Therefore both ‘general intelligence’ and ‘intelligence’ have already outlived the (limited) scientific usefulness they had, and it’s high time to put them on the relic shelf next to phlogiston. Or behind it. At least phlogiston wasn’t invented by eugenicists.
shannonvallor.bsky.social
No intelligence is general, as the term refers (indirectly & vaguely) to a cluster of distinct capabilities in a given active organism, which vary according to ecological niche, embodiment, cultural valuation and individual variation. We’d get better science by decomposing and testing for these. 3/3
shannonvallor.bsky.social
Asking what would be an improved ‘Turing Test’ for AGI is like asking for a better test for phlogiston. Asking when we will get AGI is like asking ‘when will we synthesize phlogiston?’ It’s a badly formed question. The object of the test is an ill-defined concept with no referent in nature. 2/2
