Thomas Davidson
@thomasdavidson.bsky.social
Sociologist at Rutgers. Studies far-right politics, populism, and hate speech. Computational social science.

https://www.thomasrdavidson.com/
First 50 downloads are free if you use this link: www.tandfonline.com/eprint/AYC6H...
February 3, 2026 at 3:08 PM
Particularly if academics block each other for engaging in legitimate discussions about contested issues
January 6, 2026 at 6:06 PM
On the consent front, I think the use of LLMs to create more bespoke, even "individualized" instruments raises new ethical questions that warrant discussion. Seeing how polarizing the topic has become, I expect we'll see a lot more acrimonious debate before any consensus emerges
January 6, 2026 at 6:05 PM
Thanks, Rohan. Looking forward to catching up in Toronto in the spring!
December 15, 2025 at 3:16 PM
Overall, these results show that MLLMs can make more context-sensitive moderation decisions than text-based classifiers. But these systems still make mistakes, and context can cut both ways, eliminating some biases while enabling others. Human oversight remains essential if these systems are deployed for moderation.
December 15, 2025 at 3:04 PM
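[A minimal sketch of the kind of "context-sensitive" moderation query discussed in this thread: a post, an author bio, and a profile picture passed together to a multimodal model for a single flag/allow judgment. The model name, prompt wording, and helper function below are illustrative assumptions, not the instruments used in the paper.]

```python
# Sketch: ask a multimodal LLM for a moderation decision given author context.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def moderate_post(post_text: str, author_bio: str, profile_picture_url: str) -> str:
    """Return the model's FLAG/ALLOW judgment for a post, given author context
    (bio text plus profile picture). Illustrative only."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any multimodal chat model with image input
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "You are moderating a social media platform.\n"
                            f"Author bio: {author_bio}\n"
                            f"Post: {post_text}\n"
                            "The author's profile picture is attached. "
                            "Answer FLAG or ALLOW, then give a one-sentence reason."
                        ),
                    },
                    {"type": "image_url", "image_url": {"url": profile_picture_url}},
                ],
            }
        ],
    )
    return response.choices[0].message.content

# Placeholder inputs; as the thread notes, a human reviewer should still check the output.
print(moderate_post("example post text", "example bio", "https://example.com/avatar.png"))
```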
Additionally, some models are overtly biased and are particularly sensitive to visual identity cues (AI-generated profile pictures). This demonstrates how different data modalities lead to varying levels of algorithmic bias.
December 15, 2025 at 3:04 PM
When given the identity of the author, some MLLMs make context-sensitive judgments comparable to those of human subjects, e.g., they are less likely to flag Black users for using reclaimed slurs, a common false positive. But the results also reveal less normative decisions regarding so-called "reverse racism".
December 15, 2025 at 3:04 PM
I find that MLLMs follow a hierarchy of offensive language consistent with human judgments and show similarities across other attributes. There is heterogeneity across models, particularly among the smallest open-weights versions.
December 15, 2025 at 3:04 PM