Rayhan Rashed
@rayhan.io
Human-AI Interaction, Situated in Social Computing
Along the way, the work received recognition including Best Application of AI (Michigan #AI Symposium) and Best Poster (Michigan #HCAI). I also presented it at the #Stanford Trust and Safety Conference, where it sparked a lot of great conversations!

Paper, summary, and demo: rayhan.io/diymod
February 2, 2026 at 4:04 AM
This was a full end-to-end HCI project (needfinding → design → build → multi-round evaluation), and one of the most fun (and intense) things I’ve built from the ground up.
February 2, 2026 at 4:04 AM
Our CHI'26 paper + system transforms social media content in real time, based on your definition of harm. Rather than removing content, it can obfuscate, re-render, or modify text and images, softening what a user wants to avoid while preserving what they still find valuable in the same post.
February 2, 2026 at 4:04 AM
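To make the transform-instead-of-remove idea concrete, here is a minimal sketch, not the paper's actual pipeline: names like UserHarmPolicy, classify, and transform_post are hypothetical. Each element of a post is checked against the user's own harm definition and is either kept, obfuscated, or rewritten, never silently dropped.

```python
# Hypothetical sketch of per-user content transformation (not the DIYMod implementation).
# Each element of a post is checked against the user's own harm definition and is
# kept, obfuscated, or rewritten, rather than being removed outright.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    KEEP = auto()       # element is fine for this user
    OBFUSCATE = auto()  # hide behind a blur / spoiler-style wrapper
    REWRITE = auto()    # soften the wording but keep the information


@dataclass
class UserHarmPolicy:
    """A user's personal definition of harm (hypothetical structure)."""
    avoid_terms: set[str]       # topics the user wants softened
    hard_block_terms: set[str]  # topics the user never wants to see verbatim


def classify(text: str, policy: UserHarmPolicy) -> Action:
    lowered = text.lower()
    if any(t in lowered for t in policy.hard_block_terms):
        return Action.OBFUSCATE
    if any(t in lowered for t in policy.avoid_terms):
        return Action.REWRITE
    return Action.KEEP


def transform_post(text: str, policy: UserHarmPolicy) -> str:
    """Transform a post sentence by sentence instead of suppressing the whole post."""
    out = []
    for sentence in text.split(". "):
        action = classify(sentence, policy)
        if action is Action.KEEP:
            out.append(sentence)
        elif action is Action.OBFUSCATE:
            out.append("[content hidden, tap to reveal]")
        else:  # Action.REWRITE: a real system would call an LLM-based rewriter here
            out.append(f"[softened] {sentence}")
    return ". ".join(out)


if __name__ == "__main__":
    policy = UserHarmPolicy(avoid_terms={"diet"}, hard_block_terms={"gore"})
    print(transform_post(
        "Great race recap. Brutal gore in the crash photos. New diet tips inside",
        policy,
    ))
```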
So we asked: what if moderation could act on personalized notions of harm, while preserving as much social and informational value as possible? What if that meant transforming content instead of suppressing it?

3/n
February 2, 2026 at 3:56 AM
When platforms decide globally what’s "safe", they end up doing two things at once: failing to protect users from what they find harmful, and bluntly suppressing content that others would want to engage with.

2/n
February 2, 2026 at 3:55 AM