andrew buzzell
@andrewbuzzell.bsky.social
@uwo postdoc. social epistemology/technology/ethics
yes!
April 30, 2025 at 1:03 PM
it would be interesting if running could support a high-quality, independent, audience-funded outlet like cyclists have in @escapecollective.bsky.social
April 19, 2025 at 8:01 PM
cite is from a natpost "review".

found this masterclass in misleading citation in the speccie:
April 8, 2025 at 2:29 PM
so we are somewhere in between embrace and extend but a bit before extinguish

en.m.wikipedia.org/wiki/Embrace...
Embrace, extend, and extinguish - Wikipedia
April 4, 2025 at 1:01 AM
that's super interesting! is it possible to share a bit more about this? i'd love to use it as an example in my class.
April 3, 2025 at 12:39 PM
and the new "i'm feeling lucky" standard
March 20, 2025 at 10:56 PM
Reposted by andrew buzzell
Apparently, Russian propaganda directly worked to secure the election of a specific candidate in the U.S. elections in 2024.
March 8, 2025 at 5:57 PM
it’s like a hack-and-leak but from the inside out.
February 27, 2025 at 8:48 PM
wow nice photo too!
February 27, 2025 at 2:16 AM
100%, just like the old google. (leaving to one side that llms aren't info retrieval systems.) and the ux will devolve in the same way for the same reasons.
February 23, 2025 at 1:50 AM
one thing i puzzle over - where will the new sources come from?

"OpenAI says that Deep Research is trained to select solid, reputable sources"
February 8, 2025 at 12:57 PM
reminiscent of the effort aimed at the syria civil defence, which used youtube/twitter very effectively. much of what we know about how that worked was from the streaming api, which no longer exists. that this is still "dark magic" all these years later is such a failing.
February 8, 2025 at 12:41 PM
a betrayal of the distinguished tradition of subprime mortgage-backed security safety and radium health & beauty safety!
February 6, 2025 at 11:58 AM
i think you are right. somewhere in the mix of "what is special about the potential social risk of ai" are the affordances we are giving it as a regulative technology. l'état, c'est moi ("I am the state"), translated to machine.
February 4, 2025 at 2:10 PM
Reposted by andrew buzzell
4/ The biggest surprise? Media indoctrination and civil liberties repression are the most predictive of autocratic survival. These findings have big implications for “information autocracies” in the digital age.
January 24, 2025 at 3:50 PM
reminded me of the “human encyclopedia” from frasier
January 25, 2025 at 6:59 PM