Dinesh Ayyappan
dineshayyappan.bsky.social
PhD @ Pompeu Fabra in AI Fairness
dineshayyappan.com
Former teacher
Georgia Tech, Boston Teacher Residency, Carnegie Mellon
US, India, Singapore, Spain
November 4, 2025 at 9:15 PM
It seems like what happened is that Newsom signed a less restrictive bill with basic protections and vetoed the stricter bill within the same day. I'm just catching up on this, but it sounds like a win for the chatbot companies.
October 14, 2025 at 6:05 AM
Reposted by Dinesh Ayyappan
The takeaway:

👉 Accommodating the radical right on immigration doesn’t win back voters.
👉 It alienates the progressive base.
👉 And it raises the salience of the very issue the radical right owns.
In short: it’s electoral self-harm.
September 5, 2025 at 6:50 AM
Reposted by Dinesh Ayyappan
"this thing is very good at a narrow subset of tasks it has optimized to be good at, often fails catastrophically at things outside its remit, and is always confident that it's right" feels pretty PhD-level intellect to me!
August 8, 2025 at 2:52 AM
If it’s a slow-motion train wreck, then does Gen AI delay or slow down the wreck at all? If teacher workloads are actually being reduced by outsourcing some of the job to GenAI, then maybe we will leave the profession a little more slowly. Or maybe we will just get more, different work to do. 🤷🏽‍♂️
August 8, 2025 at 6:03 PM
Thanks for sharing! After this explanation, what do your audiences want to clarify or learn next? Any memorable questions or trends?
August 8, 2025 at 4:06 AM
… like you mentioned in your limitations. Lots to ponder! Thanks for sharing ☺️
June 4, 2025 at 8:33 AM
Really fresh perspective, Peter! Helpful vocabulary and theory for having this (important) conversation about embracing Gen AI ‘flaws’. Thinking about applications to teaching and also how this could be used to add friction that disrupts Type-1 thinking in contexts when Type-2 is preferred…
June 4, 2025 at 8:33 AM
Reposted by Dinesh Ayyappan
“I can’t let you do that, Dave” but as a feature
May 22, 2025 at 9:47 PM
Sorry to miss you at CHI! Just emailed you :)
May 14, 2025 at 8:23 AM
Nice experiment! Would love to chat, especially about education-related contexts!
April 25, 2025 at 11:02 AM
As people's lives are increasingly being influenced by language models (with or without their consent), AI safety is more important now than ever. I'm sad that some of this research is losing support in the US, but I am also relieved to work in an ecosystem where trustworthy AI is a top priority.
April 24, 2025 at 3:01 AM
This is a really interesting experiment -- thanks for sharing! If I may ask: How did you choose the model to work with (and why only one)?
April 8, 2025 at 2:45 PM