Francesco Salvi
@frasalvi.bsky.social
190 followers 140 following 14 posts
PhD Student @ Princeton CITP | Computational Social Science, NLP, Network Science, Politics | he/him | https://frasalvi.github.io/
frasalvi.bsky.social
🌱✨ Life update: I just started my PhD at Princeton University!

I will be supervised by @manoelhortaribeiro.bsky.social and affiliated with Princeton CITP.

It's only been a month, but the energy feels amazing, and I'm very grateful for such a welcoming community. Excited for what’s ahead! 🚀
Reposted by Francesco Salvi
manoelhortaribeiro.bsky.social
Social media feeds today are optimized for engagement, often leading to misalignment between users' intentions and technology use.

In a new paper, we introduce Bonsai, a tool to create feeds based on stated preferences, rather than predicted engagement.

arxiv.org/abs/2509.10776
frasalvi.bsky.social
✍️ I wrote a short piece for the #SPSPblog about our work on AI persuasion (w/ @manoelhortaribeiro.bsky.social @ricgallotti.bsky.social Robert West).

Read it at: t.co/MipJKWbb1h.

Thanks @andyluttrell.bsky.social @prpietromonaco.bsky.social @spspnews.bsky.social for your invitation and feedback!
https://ow.ly/UicN50WTirh
Reposted by Francesco Salvi
hayoungjung.bsky.social
🚨YouTube is a key source of health info, but it’s also rife with dangerous myths on opioid use disorder (OUD), a leading cause of death in the U.S.

To understand the scale of such misinformation, our #EMNLP2025 paper introduces MythTriage, a scalable system to detect OUD myths 🧵
Reposted by Francesco Salvi
icepfl.bsky.social
EPFL, ETH Zurich & CSCS just released Apertus, Switzerland’s first fully open-source large language model.
Trained on 15T tokens in 1,000+ languages, it’s built for transparency, responsibility & the public good.

Read more: actu.epfl.ch/news/apertus...
Reposted by Francesco Salvi
emollick.bsky.social
Another paper showing AI (Claude 3.5) is more persuasive than the average human, even when the humans had financial incentives

In this case, either AI or humans (paid if they were persuasive) tried to convince quiz takers (paid for accuracy) to pick either right or wrong answers on a quiz.
Reposted by Francesco Salvi
jmendelsohn2.bsky.social
📣 Super excited to organize the first workshop on ✨NLP for Democracy✨ at COLM @colmweb.org!!

Check out our website: sites.google.com/andrew.cmu.e...

Call for submissions (extended abstracts) due June 19, 11:59pm AoE

#COLM2025 #LLMs #NLP #NLProc #ComputationalSocialScience
NLP 4 Democracy - COLM 2025
Reposted by Francesco Salvi
natureportfolio.nature.com
A study in Nature Human Behaviour finds that large language models (LLMs), such as GPT-4, can be more persuasive than humans 64% of the time in online debates when adapting their arguments based on personalised information about their opponents. go.nature.com/4j9ibyE 🧪
[Figure 1: overview of the experimental design]
Reposted by Francesco Salvi
technologyreview.com
Millions of people argue with each other online every day, but remarkably few of them change someone’s mind. New research suggests that large language models might do a better job. The finding suggests that AI could become a powerful tool for persuading people, for better or worse.
AI can do a better job of persuading people than we do
OpenAI’s GPT-4 is much better at getting people to accept its point of view during an argument than humans are—but there’s a catch.
frasalvi.bsky.social
That raises urgent questions about possible misuse in political propaganda, misinformation, and election interference.

Platforms and regulators should take these risks seriously and step up the discussion about guardrails, transparency, and accountability.
frasalvi.bsky.social
📢📜 Excited to share that our paper "On the conversational persuasiveness of GPT-4" has been published in Nature Human Behaviour!

🤖 Key takeaway: LLMs can already reach superhuman persuasiveness, especially when given access to personalized information

www.nature.com/articles/s41...
On the conversational persuasiveness of GPT-4 - Nature Human Behaviour
Salvi et al. find that GPT-4 outperforms humans in debates when given basic sociodemographic data. With personalization, GPT-4 had 81.2% higher odds of post-debate agreement than humans.
Reposted by Francesco Salvi
chrisnsimms.bsky.social
“Obviously as soon as people see that you can persuade people more with LLMs, they’re going to start using them. I find it both fascinating and terrifying,” says @frasalvi.bsky.social

Read more on persuasive chatbots in my rather terrifying piece for @nature.com 🧪

www.nature.com/articles/d41...
AI is more persuasive than people in online debates
When given information about its human opponents, the large language model GPT-4 was able to make particularly convincing arguments.
Reposted by Francesco Salvi
scott-delaney.bsky.social
If your NSF grant has been terminated, please, please report it here:

airtable.com/appGKlSVeXni...

Collecting this information is supremely helpful to organize and facilitate a response.
Reposted by Francesco Salvi
ic2s2.bsky.social
🚨 #IC2S2’25 Call for Abstract deadline is just around the corner—Feb 24, 2025
Submit your abstract now: www.ic2s2-2025.org/submit-abstr... and join us in Norrköping, Sweden.
Tutorials announcement coming soon!
Reposted by Francesco Salvi
lajello.bsky.social
New tool to estimate the level of participation in collective action expressed in natural language.
Applied to social media, it can produce large-scale, granular estimates of behavior change with respect to collective action.
github.com/ariannap13/e...
@nerdsitu.bsky.social @itu.dk @carlsbergfondet.dk
ariannapera.bsky.social
Excited to share the tool @lajello.bsky.social & I built to predict social media participation in collective action! It moves beyond keywords, tracking activism stages across topics. See it in action with climate activism on Reddit 🌱

Check it out: arxiv.org/abs/2501.07368

@nerdsitu.bsky.social
frasalvi.bsky.social
Just arrived in Trento for cs2italy.org, the first Italian conference on CSS: excited to see the Italian community gathering!

🤖 I'll be presenting our work on AI persuasion [1] tomorrow morning at 11:15 in session 1A — come say hello!

[1] arxiv.org/abs/2403.14380
CS2Italy
Join the premier CS2 Italy Conference, a pivotal event for computational social scientists in Italy and internationally. Scheduled for 2025, this conference will feature interdisciplinary collaboratio...
Reposted by Francesco Salvi
msaeltzer.bsky.social
Before you all delete your accounts on X, consider "donating" your content to science. Many institutions, such as @gesis-dataservices.bsky.social, might use it to scrape more effectively than via burner accounts.
Reposted by Francesco Salvi
marcelsalathe.bsky.social
What do Samy Bengio, Michael Bronstein (@mmbronstein.bsky.social), and Annie Hartley have in common, apart from being brilliant scientists?
They are now professors at EPFL. Welcome!!! 🤗🚀

actu.epfl.ch/news/appoint...
Appointment of EPFL professors
The Board of the Swiss Federal Institutes of Technology has announced the appointment of professors at EPFL.
frasalvi.bsky.social
GPT-4 can pass, on average, 91.7% of EPFL core courses, raising significant concerns about the vulnerability of higher education to AI assistants.

Timely large-scale study mobilising an army of scholars across EPFL, including my small contribution to the evaluation efforts ✍️

More below ⬇️
abosselut.bsky.social
1/ 📘 Could ChatGPT get an engineering degree? Spoiler, yes! In our new @pnas.org article, we explore how AI assistants like GPT-4 perform in STEM university courses — and on average they pass a staggering 91.7% of core courses. 🧵 #AI #HigherEd #STEM #LLMs #NLProc
Reposted by Francesco Salvi
manoelhortaribeiro.bsky.social
New @acm-cscw.bsky.social paper, new content moderation paradigm.

Post Guidance lets moderators prevent rule-breaking by triggering interventions as users write posts!

We implemented PG on Reddit and tested it in a massive field experiment (n=97k). It became a feature!

arxiv.org/abs/2411.16814