Eryk Salvaggio
@eryk.bsky.social
12K followers 4.6K following 2.9K posts
Situationist Cybernetics. Gates Scholar researching AI & the Humanities at the University of Cambridge. Tech Policy Press Writing Fellow. Researcher, AI Pedagogies, metaLab (at) Harvard University. Aim to be kind. cyberneticforests.com
eryk.bsky.social
I’ve just learned that John Searle, whose Chinese Room thought experiment is often used to challenge ideas of “understanding” in LLMs, died at age 93 on Sunday. www.theguardian.com/world/2025/o...
John Searle obituary
American philosopher whose Chinese Room thought experiment rebuts the idea that computers can think as humans do
www.theguardian.com
eryk.bsky.social
Sorry, what “position” are you referring to here?
Reposted by Eryk Salvaggio
eryk.bsky.social
The "you don't understand how (AI/LLMs/Diffusion Models/NNs) work" posture in any debate is often just "we don't agree on how to interpret what (AI/LLMs/Diffusion Models/NNs) are doing"
eryk.bsky.social
Gotcha, and agree, I'd missed what you meant by "capacities" of the model.
Reposted by Eryk Salvaggio
eryk.bsky.social
I think we agree on principle, but I'm thinking more that the data is translated into numerical representations (weights, biases) and the NN is the process through which that translation occurs. A handwritten "9" is not an image recognition network in itself.
eryk.bsky.social
Point is, if you think NNs are the most efficient formula for optimization, you may see things like education, the arts, and politics as moving through training layers too, the end result of which will be the full optimization of society. So you throw random stuff out and see what people do with it.
eryk.bsky.social
I think it's pretty common for people to see the world through the lenses they work with (once you do cybernetics, everything is "just" a feedback loop; once you do pattern-finding, everything is "just" pattern-finding).
eryk.bsky.social
Striking parallel between how NNs are trained and how VCs imagine the world is built: start with noise, then remove noise through "optimization." There are assumptions not just that the machine "learns" as a human does but that the world will "learn" to "optimize AI" just as an NN "learns."
eryk.bsky.social
The extent to which biases in data influence biases in NN weights is pretty hard to ignore tbh, I still want to dig into any good faith arguments by those who disagree or challenge this. Most agree but seem to find it inconvenient, so don’t focus on it!
eryk.bsky.social
Yes, and my point is that shifting this disagreement from “debating interpretations” to “the other party is ignorant of basic architecture” can be counterproductive.
eryk.bsky.social
Totally agree, and it's unfortunate that prompt-based AI music must be navigated by describing narrowly constrained genres -- especially when "genre collision" is already so well established as to be its own genre.
Reposted by Eryk Salvaggio
mikeachim.bsky.social
Damn. This is amazing. €325 per week, paid monthly, for 3 years - and the result was a profit for the Irish economy:
www.citizensinformation.ie/en/employmen...
Post from Threads user rodneyowl: "Ireland has declared the Basic Income for Artists scheme permanent. This will be officially announced in tomorrow’s budget. Details to follow. Congratulations to all who fought for it and the present and future artists of all sorts in Ireland. That includes me 👌 We’re just coming to the end of a 3 year pilot scheme. It’s been a roaring success. For every €1 paid out to the 2000 participants, the government got €1.46 back. Can’t argue with that. Other countries are already taking note."
eryk.bsky.social
I feel like that would create a kind of explosion of new experiments, no?
eryk.bsky.social
I think you have it. Seems like rising populist conservatism alongside consolidation of tech & entertainment industry at a moment where tech is also shifting its money into an endless GPU expansion … is a bad-timing situation that has stifled support for the arts by industry & philanthropy alike. :/
eryk.bsky.social
This makes sense! But we keep hearing the contrary re: 1968, for example. I suspect there may really be something to be said this time around about the entertainment industry consolidating production power; lots has been written on radio, and in the 21st century streaming services etc. may be on the same track?
eryk.bsky.social
But certainly there are also fallow periods. It’s just curious to me that technological and social upheaval seems to have exhausted everyone when everyone seems to suggest punk and rock came from the same conditions.
eryk.bsky.social
I understand it, but it’s worth noting: since the popular adoption of generative AI, culture feels like it’s been at a standstill. It might be my filters, but I rarely feel inspired by discussions of books, or catch great indie films, or come across unique and exciting (new) records anymore.
Reposted by Eryk Salvaggio
annakornbluh.bsky.social
"the solution is still the one that can succeed: to build a new cultural order, a new civilization. To do so, academics must embrace an unusual new role: as knowledge workers, they must seize the means of knowledge production."

www.publicbooks.org/academics-mu...
Academics Must Seize the Means of Knowledge Production - Public Books
Trumpism has canceled the knowledge society.
www.publicbooks.org
Reposted by Eryk Salvaggio
techpolicypress.bsky.social
While there is value to foresight and anticipating risks, writes Tech Policy Press fellow
@eryk.bsky.social, could the language used by AI risk communities to describe the technology contribute to the very problems it aims to curb?
The AI Safety Debate Needs AI Skeptics | TechPolicy.Press
The language used by AI risk communities to describe the technology may contribute to the very problems it aims to curb, Eryk Salvaggio writes.
www.techpolicy.press
eryk.bsky.social
Any organization considering the risks of AI should make room for those who believe the greatest risk is believing in it. In my latest for @techpolicypress.bsky.social, I propose that AI skeptics play a crucial role in discussing “AI safety.” www.techpolicy.press/the-ai-safet...