Isabel Silva Corpus
@isabelcorpus.bsky.social
Information Science PhD student at Cornell
isabelsilvacorpus.github.io
Has been lots of fun working on this with a great team, thanks to @eegilbert.org, @allisonkoe.bsky.social, and @informor.bsky.social!
December 1, 2025 at 8:32 PM
We speculate that the AI tool improved writer productivity and petition completion rates. However, was this worth the platform-level increase in text homogeneity and the decrease in outcomes? The paper adds some reflections and potential explanations: arxiv.org/abs/2511.13949. WIP, comments welcome! 8/8
Introducing AI to an Online Petition Platform Changed Outputs but not Outcomes
The rapid integration of AI writing tools into online platforms raises critical questions about their impact on content production and outcomes. We leverage a unique natural experiment on Change.org...
arxiv.org
December 1, 2025 at 8:32 PM
Again, petitions written with AI access were stylistically different (e.g. longer), but did not seem to have higher chances of positive outcomes (e.g. signatures). 7/
December 1, 2025 at 8:32 PM
We confirmed these trends by comparing repeat-petition writers who wrote one petition before and one after gaining in-platform AI access against a baseline of writers with two petitions written entirely pre- or post-AI. If AI is helpful, we'd see a higher-than-expected success rate for the split writers' second petition. 6/
December 1, 2025 at 8:32 PM
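For anyone curious what this comparison might look like in practice, here is a minimal sketch in Python; the column names (writer_id, created_at, post_ai, success) and the input file are illustrative assumptions, not the paper's actual data or pipeline.

```python
# Minimal sketch of the repeat-writer comparison described above.
# Assumes a hypothetical petitions.csv with one row per petition and columns:
#   writer_id, created_at, post_ai (1 if written after AI access), success (0/1)
import pandas as pd

petitions = pd.read_csv("petitions.csv").sort_values("created_at")

# Keep writers with exactly two petitions.
two_timers = petitions.groupby("writer_id").filter(lambda g: len(g) == 2)

# "split" writers wrote one petition before and one after AI access;
# "baseline" writers wrote both petitions in the same period.
labels = (
    two_timers.groupby("writer_id")["post_ai"]
    .nunique()
    .map({2: "split", 1: "baseline"})
)

# Success rate of each writer's second (later) petition, by group.
second = two_timers.groupby("writer_id").tail(1).set_index("writer_id")
second["group"] = labels
print(second.groupby("group")["success"].mean())
```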
However, petition outcomes did not improve, and by some measures worsened: the share of petitions that reach minimum thresholds of comments and signatures decreased modestly when writers had access to the AI tool. 5/
December 1, 2025 at 8:32 PM
Petition text also became more homogeneous: the average pairwise similarity of petition text increased. 4/
December 1, 2025 at 8:32 PM
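As a rough illustration of what "average pairwise similarity" can mean in practice, here is a minimal sketch using TF-IDF vectors and cosine similarity; the text representation and the placeholder petitions are my assumptions, not necessarily the measure used in the paper.

```python
# Minimal sketch: average pairwise similarity of petition texts.
# TF-IDF + cosine similarity is an illustrative choice of measure.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

texts = [
    "Stop the closure of our local library",
    "Let students keep the after-school program",
    "Urge the council to implement safer bike lanes",
]  # placeholder petition texts

vectors = TfidfVectorizer().fit_transform(texts)
sims = cosine_similarity(vectors)

# Average over unique pairs (upper triangle, excluding the diagonal).
pairwise = sims[np.triu_indices_from(sims, k=1)]
print(f"average pairwise similarity: {pairwise.mean():.3f}")
```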
Lexical features changed: petitions written with access to AI were longer, with more complex and varied vocabularies. Petition language shifted: in the pre-AI period petition titles tended to use short verbs like “let” and “stop”; after AI introduction we saw the rise of “implement” and “urge”. 3/
December 1, 2025 at 8:32 PM
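To make "longer, with more complex and varied vocabularies" concrete, here is a minimal sketch of two simple lexical features (word count and type-token ratio); the tokenizer and feature choices are illustrative, not the paper's exact measures.

```python
# Minimal sketch of two lexical features: length and vocabulary variety.
import re

def lexical_features(text: str) -> dict:
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "length": len(tokens),                                       # word count
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),  # variety
    }

print(lexical_features("Urge the council to implement safer bike lanes now"))
```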
To do this, we scraped 1.5 million petitions and leveraged the delayed release of the AI tool in Australia to estimate the causal impact of access to in-platform AI on petition text and outcomes (as measured through signatures and comments). 2/
December 1, 2025 at 8:32 PM
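One common way to use a staggered rollout like this is a difference-in-differences regression. Here is a minimal sketch; the column names, the log-signatures outcome, and the regression specification are illustrative assumptions, not necessarily the paper's estimation strategy.

```python
# Minimal difference-in-differences sketch exploiting the delayed Australian
# rollout. Assumes a hypothetical petitions.csv with columns:
#   country, post_ai (1 if created after the main AI launch), signatures
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

petitions = pd.read_csv("petitions.csv")

# Petitions outside Australia received in-platform AI first.
petitions["treated"] = (petitions["country"] != "AU").astype(int)
petitions["log_signatures"] = np.log1p(petitions["signatures"])

# The coefficient on treated:post_ai is the difference-in-differences
# estimate of the effect of AI access on (log) signatures.
model = smf.ols("log_signatures ~ treated * post_ai", data=petitions).fit()
print(model.summary())
```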
@ziv-e.bsky.social's talk showed gaps between the content users are recommended and the content that aligns with their values. This work definitely points to the importance of user agency in feeds!
I couldn’t find the link to this paper, but some related work from Ziv here:
🔗: arxiv.org/pdf/2509.14434
arxiv.org
November 18, 2025 at 2:54 PM
They found that while increasingly accurate LLMs map to increasingly accurate responses, the effects plateau around 80%. Great insight into how decision-support tools require active design to best support people and their goals. Looking forward to reading this paper when it's out!!
November 18, 2025 at 2:54 PM
@jennahgosciak.bsky.social showed how LLM assistance can improve government caseworker accuracy in the context of SNAP eligibility questions. It was really cool to see Jennah + team get ahead of the ever-shifting technical capacity of LLMs by varying chatbot accuracy.
November 18, 2025 at 2:54 PM
Sherry Jueyu Wu showed that when people participate in collective decision-making, they are more willing to express that the gov needs improvement. Interesting to think about in the context of participation and accountability on online platforms...
🔗: www.nature.com/articles/s41...
A large-scale field experiment on participatory decision-making in China - Nature Human Behaviour
Wu et al. show that involving citizens in local decision-making (participatory budgeting) improves civic engagement in a Chinese context.
www.nature.com
November 18, 2025 at 2:54 PM
This is really cool work demonstrating the value of human expertise in sociotechnical decision-making processes!
🔗: arxiv.org/pdf/2505.13325
arxiv.org
November 18, 2025 at 2:54 PM
@utopianturtle.top presented a causal framework for modeling algorithmically assisted decision-making, which the authors use to identify how academic advisors leverage non-algorithmic knowledge and where advisors' holistic approaches contribute to improved student outcomes.
November 18, 2025 at 2:54 PM