Francesco Salvi
@frasalvi.bsky.social
PhD Student @ Princeton CITP | Computational Social Science, NLP, Network Science, Politics | he/him | https://frasalvi.github.io/
🌱✨ Life update: I just started my PhD at Princeton University!

I will be supervised by @manoelhortaribeiro.bsky.social and affiliated with Princeton CITP.

It's only been a month, but the energy feels amazing, and I'm very grateful for such a welcoming community. Excited for what's ahead! 🚀
October 3, 2025 at 5:56 PM
Reposted by Francesco Salvi
Social media feeds today are optimized for engagement, often leading to misalignment between users' intentions and technology use.

In a new paper, we introduce Bonsai, a tool to create feeds based on stated preferences, rather than predicted engagement.

arxiv.org/abs/2509.10776
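Not Bonsai's actual implementation, but a minimal Python sketch of the core idea: score posts by a user's stated topic preferences instead of predicted engagement. The topic labels and weights are invented for illustration.

```python
# Minimal sketch of preference-based feed ranking (illustrative; not Bonsai's implementation).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topics: dict[str, float]  # topic -> relevance in [0, 1], e.g. from a classifier

# Hypothetical stated preferences: up-weight research content, down-weight outrage bait.
stated_preferences = {"research": 1.0, "local news": 0.6, "outrage": -0.8}

def preference_score(post: Post, prefs: dict[str, float]) -> float:
    """Score a post purely by stated preferences, ignoring engagement signals."""
    return sum(prefs.get(topic, 0.0) * rel for topic, rel in post.topics.items())

def rank_feed(posts: list[Post], prefs: dict[str, float]) -> list[Post]:
    return sorted(posts, key=lambda p: preference_score(p, prefs), reverse=True)

feed = [
    Post("New preprint on AI persuasion", {"research": 0.9}),
    Post("You won't BELIEVE what they said", {"outrage": 0.95}),
]
for post in rank_feed(feed, stated_preferences):
    print(f"{preference_score(post, stated_preferences):+.2f}  {post.text}")
```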
September 16, 2025 at 1:24 PM
✍️ I wrote a short piece for the #SPSPblog about our work on AI persuasion (w/ @manoelhortaribeiro.bsky.social @ricgallotti.bsky.social Robert West).

Read it at: t.co/MipJKWbb1h.

Thanks @andyluttrell.bsky.social @prpietromonaco.bsky.social @spspnews.bsky.social for your invitation and feedback!
https://ow.ly/UicN50WTirh
September 8, 2025 at 6:27 PM
Reposted by Francesco Salvi
🚨YouTube is a key source of health info, but it’s also rife with dangerous myths on opioid use disorder (OUD), a leading cause of death in the U.S.

To understand the scale of such misinformation, our #EMNLP2025 paper introduces MythTriage, a scalable system to detect OUD myth🧵
September 8, 2025 at 6:13 PM
Reposted by Francesco Salvi
EPFL, ETH Zurich & CSCS just released Apertus, Switzerland’s first fully open-source large language model.
Trained on 15T tokens in 1,000+ languages, it’s built for transparency, responsibility & the public good.

Read more: actu.epfl.ch/news/apertus...
September 2, 2025 at 11:48 AM
Reposted by Francesco Salvi
Another paper showing AI (Claude 3.5) is more persuasive than the average human, even when the humans had financial incentives

In this case, either AI or humans (paid if they were persuasive) tried to convince quiz takers (paid for accuracy) to pick either right or wrong answers on a quiz.
May 16, 2025 at 8:23 PM
Reposted by Francesco Salvi
📣 Super excited to organize the first workshop on ✨NLP for Democracy✨ at COLM @colmweb.org!!

Check out our website: sites.google.com/andrew.cmu.e...

Call for submissions (extended abstracts) due June 19, 11:59pm AoE

#COLM2025 #LLMs #NLP #NLProc #ComputationalSocialScience
NLP 4 Democracy - COLM 2025
sites.google.com
May 21, 2025 at 4:39 PM
Reposted by Francesco Salvi
A study in Nature Human Behaviour finds that large language models (LLMs), such as GPT-4, can be more persuasive than humans 64% of the time in online debates when adapting their arguments based on personalised information about their opponents. go.nature.com/4j9ibyE 🧪
May 19, 2025 at 7:35 PM
Reposted by Francesco Salvi
Millions of people argue with each other online every day, but remarkably few of them change someone’s mind. New research suggests that large language models might do a better job. The finding suggests that AI could become a powerful tool for persuading people, for better or worse.
AI can do a better job of persuading people than we do
OpenAI’s GPT-4 is much better at getting people to accept its point of view during an argument than humans are—but there’s a catch.
www.technologyreview.com
May 19, 2025 at 3:02 PM
📢📜 Excited to share that our paper "On the conversational persuasiveness of GPT-4" has been published in Nature Human Behaviour!

🤖 Key takeaway: LLMs can already reach superhuman persuasiveness, especially when given access to personalized information

www.nature.com/articles/s41...
On the conversational persuasiveness of GPT-4 - Nature Human Behaviour
Salvi et al. find that GPT-4 outperforms humans in debates when given basic sociodemographic data. With personalization, GPT-4 had 81.2% higher odds of post-debate agreement than humans.
www.nature.com
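For intuition on the "81.2% higher odds" figure: odds are not probabilities, so the gap in agreement rates depends on the human baseline. The snippet below converts the reported odds ratio into probabilities for an assumed, purely illustrative 25% human baseline.

```python
# Convert the reported odds ratio (1.812, i.e. 81.2% higher odds) into probabilities.
# The human baseline agreement rate is an assumed value, used only for illustration.
odds_ratio = 1.812
p_human = 0.25                      # assumed baseline probability of post-debate agreement
odds_human = p_human / (1 - p_human)
odds_ai = odds_ratio * odds_human   # personalized GPT-4, per the headline figure
p_ai = odds_ai / (1 + odds_ai)
print(f"human: p={p_human:.2f}, odds={odds_human:.3f}")
print(f"personalized GPT-4: p={p_ai:.2f}, odds={odds_ai:.3f}")
# With a 0.25 human baseline, 81.2% higher odds corresponds to roughly a 0.38 agreement probability.
```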
May 19, 2025 at 4:29 PM
Reposted by Francesco Salvi
“Obviously as soon as people see that you can persuade people more with LLMs, they’re going to start using them. I find it both fascinating and terrifying,” says @frasalvi.bsky.social

Read more on persuasive chatbots in my rather terrifying piece for @nature.com 🧪

www.nature.com/articles/d41...
AI is more persuasive than people in online debates
When given information about its human opponents, the large language model GPT-4 was able to make particularly convincing arguments.
www.nature.com
May 19, 2025 at 3:35 PM
Reposted by Francesco Salvi
If your NSF grant has been terminated, please, please report it here:

airtable.com/appGKlSVeXni...

Collecting this information is supremely helpful to organize and facilitate a response.
April 21, 2025 at 7:59 PM
Reposted by Francesco Salvi
I am recruiting 2 PhD students for Fall'25 @csaudk.bsky.social to work on bleeding-edge topics in #NLProc #LLMs #AIAgents (e.g. LLM reasoning, knowledge-seeking agents, and more).

Details: www.cs.au.dk/~clan/openings
Deadline: May 1, 2025

Please boost!

cc: @aicentre.dk @wikiresearch.bsky.social
Open positions and projects
Open semester and Master's projects: If you're an AU student looking for a semester project, a Bachelor project, or an MS thesis project, please refer to this list. Prospective PhD ...
www.cs.au.dk
March 18, 2025 at 9:12 AM
Reposted by Francesco Salvi
🚨 #IC2S2’25 Call for Abstracts deadline is just around the corner: Feb 24, 2025
Submit your abstract now: www.ic2s2-2025.org/submit-abstr... and join us in Norrköping, Sweden.
Tutorials announcement coming soon!
February 14, 2025 at 9:32 AM
Reposted by Francesco Salvi
New tool to estimate the level of participation in collective action expressed in natural language.
Applied to social media, it can produce large-scale and granular estimates of behavior change with respect to collective action.
github.com/ariannap13/e...
@nerdsitu.bsky.social @itu.dk @carlsbergfondet.dk
Excited to share the tool @lajello.bsky.social & I built to predict social media participation in collective action! It moves beyond keywords, tracking activism stages across topics. See it in action with climate activism on Reddit 🌱

Check it out: arxiv.org/abs/2501.07368

@nerdsitu.bsky.social
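Not the released tool itself, but a generic zero-shot sketch of the task it addresses: labeling posts by the level of participation they express rather than by keywords. The stage labels below are illustrative, not the paper's taxonomy, and the model choice is arbitrary.

```python
# Generic sketch: label posts by the level of expressed participation in collective action.
# NOT the released tool; the stages and model below are illustrative choices.
from transformers import pipeline  # assumes `transformers` (and a backend like torch) is installed

stages = ["no involvement", "expressing support", "intending to participate", "reporting participation"]
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

posts = [
    "Climate change is getting really scary.",
    "I'll be at the march downtown on Saturday, who's coming?",
]
for text in posts:
    result = classifier(text, candidate_labels=stages)
    print(f"{result['labels'][0]:<28} <- {text}")
```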
January 15, 2025 at 2:58 PM
Just arrived in Trento for cs2italy.org, the first Italian conference on CSS: excited to see the Italian community gathering together!

🤖 I'll be presenting our work on AI persuasion [1] tomorrow morning at 11:15 in session 1A — come say hello!

[1] arxiv.org/abs/2403.14380
CS2Italy
Join the premier CS2 Italy Conference, a pivotal event for computational social scientists in Italy and internationally. Scheduled for 2025, this conference will feature interdisciplinary collaboratio...
cs2italy.org
January 15, 2025 at 8:03 PM
Reposted by Francesco Salvi
How effective are LLMs at persuading and deceiving people? In a new preprint we review different theoretical risks of LLM persuasion; empirical work measuring how persuasive LLMs currently are; and proposals to mitigate these risks. 🧵

arxiv.org/abs/2412.17128
Lies, Damned Lies, and Distributional Language Statistics: Persuasion and Deception with Large Language Models
Large Language Models (LLMs) can generate content that is as persuasive as human-written text and appear capable of selectively producing deceptive outputs. These capabilities raise concerns about pot...
arxiv.org
January 10, 2025 at 1:59 PM
Reposted by Francesco Salvi
Before you all delete your accounts on X, consider deleting your content but "donating" the accounts to science. Many institutions, such as @gesis-dataservices.bsky.social, might use them to scrape more effectively than via burner accounts.
December 6, 2024 at 2:42 PM
Reposted by Francesco Salvi
What do Samy Bengio, Michael Bronstein (@mmbronstein.bsky.social), and Annie Hartley have in common, apart from being brilliant scientists?
They are now professors at EPFL. Welcome!!! 🤗🚀

actu.epfl.ch/news/appoint...
Appointment of EPFL professors
The Board of the Swiss Federal Institutes of Technology has announced the appointment of professors at EPFL.
actu.epfl.ch
December 6, 2024 at 10:32 AM
GPT-4 is able to pass on average 91.7% of EPFL core courses, raising significant concerns about the vulnerability of higher education to AI assistants.

Timely large-scale study mobilising an army of scholars across EPFL, including my small contribution to the evaluation efforts ✍️

More below ⬇️
1/ 📘 Could ChatGPT get an engineering degree? Spoiler, yes! In our new @pnas.org article, we explore how AI assistants like GPT-4 perform in STEM university courses — and on average they pass a staggering 91.7% of core courses. 🧵 #AI #HigherEd #STEM #LLMs #NLProc
December 4, 2024 at 6:51 PM
Reposted by Francesco Salvi
New @acm-cscw.bsky.social paper, new content moderation paradigm.

Post Guidance lets moderators prevent rule-breaking by triggering interventions as users write posts!

We implemented PG on Reddit and tested it in a massive field experiment (n=97k). It became a feature!

arxiv.org/abs/2411.16814
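A minimal sketch of the Post Guidance idea: check a draft against community rules while it is being written and surface guidance before submission. The rules and messages are invented for illustration and are not Reddit's implementation.

```python
# Minimal sketch of rule-triggered guidance on a draft post (illustrative; not Reddit's code).
import re

# Hypothetical community rules: a regex trigger paired with the guidance shown to the writer.
RULES = [
    (re.compile(r"\?\s*$"), "Questions belong in the weekly Q&A thread."),
    (re.compile(r"https?://\S+"), "Link posts need a short summary in the body."),
]

def guidance_for_draft(draft: str) -> list[str]:
    """Return guidance messages triggered by the current draft (empty list = no issues)."""
    return [message for pattern, message in RULES if pattern.search(draft)]

print(guidance_for_draft("Is this allowed?"))
print(guidance_for_draft("Check out https://example.com"))
```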
November 27, 2024 at 2:20 PM
Absolutely amazing field experiment showing how LLMs can effectively decrease reported polarization by re-ranking social feeds
New paper: Do social media algorithms shape affective polarization?

We ran a field experiment on X/Twitter (N=1,256) using LLMs to rerank content in real-time, adjusting exposure to polarizing posts. Result: Algorithmic ranking impacts feelings toward the political outgroup! 🧵⬇️
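A toy sketch of the reranking idea, downranking posts scored as polarizing; the scoring function is a keyword stub standing in for the study's LLM-based classifier, not its actual pipeline.

```python
# Toy sketch: rerank a feed to reduce exposure to polarizing posts (stub, not the study's pipeline).
def polarization_score(text: str) -> float:
    """Placeholder for an LLM call rating out-group animosity on a 0-1 scale."""
    hostile_markers = ("they hate", "the other side", "those people")
    return 1.0 if any(marker in text.lower() for marker in hostile_markers) else 0.1

def rerank(feed: list[str]) -> list[str]:
    # Lower polarization score = shown earlier; engagement signals are ignored for simplicity.
    return sorted(feed, key=polarization_score)

feed = ["Those people will ruin everything.", "Community garden opens this weekend."]
print(rerank(feed))
```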
November 26, 2024 at 12:49 AM
Reposted by Francesco Salvi
Ready for another Computational Social Science Starter Pack?

Here is number 2! More amazing folks to follow! Many students and the next gen represented!

go.bsky.app/GoEyD7d
November 14, 2024 at 11:42 PM
Reposted by Francesco Salvi
Sharing my first Computational Social Science starter pack! Will grow with time, feel free to nominate and self nominate!

go.bsky.app/CYmRvcK
November 13, 2024 at 2:05 AM