Josh Goldstein
@joshagoldstein.bsky.social
Research Fellow @Georgetown's Center for Security and Emerging Technology (CSET).

Studying emerging tech, national security, and online manipulation. Trying to bridge the academia/policy divide.

https://cset.georgetown.edu/staff/josh-a-goldstein/
The piece also includes two ideas for relatively low-cost, but perhaps meaningful, ways to expand reporting.

Curious to hear any reactions from T&S practitioners about feasibility.

Thanks to @justinhendrix.bsky.social for the quick edit. First time writing for TPP and would recommend to others!
December 17, 2024 at 4:32 PM
One @aaron.bsky.team line that struck me on Bluesky's labeling features:

"It's premature to say the experiment has succeeded or failed when we are so rapidly growing everything at the same time."
December 5, 2024 at 6:31 PM
From a misuse perspective (i.e., disinfo/influence ops)

Persuasiveness compared to content from real campaigns:

academic.oup.com/pnasnexus/ar...

A field study by Kreps/Kriner on forging constituent mail:
journals.sagepub.com/doi/abs/10.1...

Linvill/Warren and their team have recent case studies
How persuasive is AI-generated propaganda?
Abstract. Can large language models, a form of artificial intelligence (AI), generate persuasive propaganda? We conducted a preregistered survey experiment
academic.oup.com
December 5, 2024 at 12:04 PM
✋✋
December 4, 2024 at 12:33 AM
Comments based on my work with @noupside.bsky.social in the HKS Misinformation Review on the use of AI-generated images by spammers/scammers/creators for audience growth.

misinforeview.hks.harvard.edu/article/how-...
How spammers and scammers leverage AI-generated images on Facebook for audience growth | HKS Misinformation Review
Much of the research and discourse on risks from artificial intelligence (AI) image generators, such as DALL-E and Midjourney, has centered around whether they could be used to inject false informatio...
misinforeview.hks.harvard.edu
December 3, 2024 at 9:29 PM
I’ve been waiting (/hoping?) for this one! 👏👏
December 3, 2024 at 9:04 PM
Haven’t seen much in terms of quantitative assessments of impact, but UT Austin put together a set of international case studies: mediaengagement.org/research/gen...
Generative Artificial Intelligence and Elections
The Center for Media Engagement investigates GenAI’s role before, during, and after several key global elections in 2024.
mediaengagement.org
November 30, 2024 at 4:22 AM
Just read your testimony and it’s a great overview of a bunch of different threat vectors.

Planning to add to my course syllabus for the spring. Thanks!
November 20, 2024 at 11:35 PM
Free for a coffee during the day on the 27th?

I'll be back in NY for a few days & would be keen to chat about silicon sampling in social science (opportunities, risks) in the vein of your post earlier. I have an ongoing collaborative project in that area as well.
November 20, 2024 at 10:57 PM