Aditya Vashistha
@imadityav.bsky.social
1.5K followers · 56 following · 33 posts
Assistant Professor at Cornell. Research in HCI4D, Social Computing, Responsible AI, and Accessibility. https://www.adityavashistha.com/
imadityav.bsky.social
Big day today with Joy Ming graduating!! A new doctor in town! Can’t wait to see all the incredible things Joy will take on next. So proud!
Aditya and Joy wearing regalia and posing in front of a bright red background!
imadityav.bsky.social
Thank you to all our participants, co-organizers, student volunteers, funders, and partners who made this possible. And to Joy Ming for the beautiful visual summaries.
imadityav.bsky.social
Our conversations spanned:
🔷 Meaningful use cases of AI in high-stakes global settings
🔷 Interdisciplinary methods across computing and humanities
🔷 Partnerships between academia, industry, and civil society
🔷 The value of local knowledge, lived experiences, and participatory design
imadityav.bsky.social
Over three days, we explored what it means to design and govern pluralistic and humanistic AI technologies — ones that serve diverse communities, respect cultural contexts, and center social well-being. The summit was part of the Global AI Initiative at Cornell.
imadityav.bsky.social
Yesterday we wrapped up the Thought Summit on LLMs and Society at Cornell — an energizing and deeply reflective gathering of researchers, practitioners, and policymakers from across institutions and geographies.
imadityav.bsky.social
Thank you Dhanaraj for attending the Thought Summit and sharing your thoughts on how we can design AI for All!
thakurdhanaraj.bsky.social
It was great to contribute to discussions at this event organized by @imadityav.bsky.social and others at Cornell.
I was also fortunate to speak on a really interesting closing panel and look forward to supporting this work.
globalai.ai.cornell.edu/thought-summ...

#globalai #llms
Thought Summit: LLMs and Society | Global AI Initiative
globalai.ai.cornell.edu
imadityav.bsky.social
This was a week of reflection, new ideas, and a renewed sense of urgency to design AI systems that serve marginalized communities globally. Can't wait for what's next.
imadityav.bsky.social
Pragnya Ramjee presented work (with Mohit Jain at MSR India) on deploying LLM tools for community health workers in India. In collaboration with Khushi Baby, we show how thoughtful AI design can (and cannot) bridge critical informational gaps in low-resource settings.
dl.acm.org/doi/10.1145/...
ASHABot: An LLM-Powered Chatbot to Support the Informational Needs of Community Health Workers | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
imadityav.bsky.social
Ian René Solano-Kamaiko presented our study on how algorithmic tools are already shaping home care work—often invisibly. These systems threaten workers’ autonomy and safety, underscoring the need for stronger protections and democratic AI governance.
dl.acm.org/doi/10.1145/...
"Who is running it?" Towards Equitable AI Deployment in Home Care Work | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
imadityav.bsky.social
Joy Ming presented our award-winning paper on designing advocacy tools for home care workers. In this work, we unpack tensions between individual and collective goals and highlight how to use data responsibly in frontline labor organizing.
dl.acm.org/doi/10.1145/...
Exploring Data-Driven Advocacy in Home Health Care Work | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
imadityav.bsky.social
Dhruv presented our cross-cultural study on AI writing tools and their Western-centric biases. We found that AI suggestions disproportionately benefit American users and subtly nudge Indian users toward Western writing norms—raising concerns about cultural homogenization.
dl.acm.org/doi/10.1145/...
AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
imadityav.bsky.social
Sharon Heung presented our work on personalizing moderation tools to help disabled users manage ableist content online. We showed how users want control over filtering and framing—while also expressing deep skepticism toward AI-based moderation.
dl.acm.org/doi/10.1145/...
"Ignorance is not Bliss": Designing Personalized Moderation to Address Ableist Hate on Social Media | Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
dl.acm.org
imadityav.bsky.social
Just wrapped up an incredible week at #CHI2025 in Yokohama with Joy Ming, @sharonheung.bsky.social, Dhruv Agarwal, and Ian René Solano-Kamaiko! We presented several papers that push the boundaries of what Globally Equitable AI could look like in high-stakes contexts.
Dhruv, Sharon, Aditya, Jiamin, Ian, and Joy in front of buildings and trees surrounded by a lush green landscape.
imadityav.bsky.social
Excited to see our research in The Atlantic and Fast Company!

Our work, presented at #CHI2025 this week, shows how AI writing suggestions often nudge people toward Western styles, unintentionally flattening cultural expression and nuance.
imadityav.bsky.social
Excited to be at #CHI2025 with Joy Ming, Sharon Heung, Dhruv Agarwal, and Ian René Solano-Kamaiko!

Our lab will be presenting several papers on Globally Equitable AI, centering equity, culture, and inclusivity in high-stakes contexts. 🌎

If you’ll be there, would love to connect! 🖐️
imadityav.bsky.social
Huge congratulations to Mahika Phutane for leading this work, and Ananya Seelam for her contributions!

We’re thrilled to share this at ACM FAccT 2025.

Read the full paper: lnkd.in/eCsAupvK
imadityav.bsky.social
Our findings make a clear case: AI moderation systems must center disabled people’s expertise, especially when defining harm and safety.

This isn’t just a technical problem—it’s about power, voice, and representation.
imadityav.bsky.social
Disabled participants frequently described these AI explanations as “condescending” or “dehumanizing.”

The models reflect a clinical, outsider gaze—rather than lived experience or structural understanding.
imadityav.bsky.social
AI systems often underestimate ableism—even in clear-cut cases of discrimination or microaggressions.

And when they do explain their decisions? The explanations are vague, euphemistic, or moralizing.
imadityav.bsky.social
We studied how AI systems detect and explain ableist content—and how that compares to judgments from 130 disabled participants.

We also analyzed explanations from 7 major LLMs and toxicity classifiers. The gaps are stark.
Methodology of our paper, starting with creating a dataset containing ableist and non-ableist posts, followed by collecting and analyzing ratings and explanations from AI models and disabled and non-disabled participants.
imadityav.bsky.social
Our paper, “Cold, Calculated, and Condescending”: How AI Identifies and Explains Ableism Compared to Disabled People, has been accepted at ACM FAccT 2025!

A quick thread on what we found:
Image of our arXiv preprint with the paper title and author list: Mahika Phutane, Ananya Seelam, and Aditya Vashistha.
imadityav.bsky.social
Excited to be at @umich.edu this week to speak at the Democracy's Information Dilemma event and at the Social Media and Society conference! Hard to believe I was last here in 2016. Can't wait for engaging conversations, new ideas, and reconnecting with colleagues old and new!
Reposted by Aditya Vashistha
cornellbowers.bsky.social
NEW YEAR, NEW PLATFORM? 👀

Bowers is joining the Bluesky community! Follow to stay updated on technology innovation, collaborative research, and faculty expertise.