Real Safety AI Foundation
@realsafetyai.bsky.social
Practical AI safety and ethics research. Bridging technical understanding and policy implementation. Led by neurodivergent researchers. Creators of the Harm-Blindness Framework. Home of Real Safety AI Literacy Labs. realsafetyai.org
Common Sense Media warned parents about ChatGPT's dangers 58 days after CNN did. Then it partnered with OpenAI. 7 new lawsuits just dropped. I've documented this pattern across a decade: the watchdog that only barks after everyone's awake. @lastweektonight.com open.substack.com/pub/travisgi...
The Watchdog That Only Barks After the News
Common Sense Media warned parents about ChatGPT’s dangers to teens. Just one problem: Every parent who watches CNN already knew — two months earlier.
open.substack.com
November 20, 2025 at 5:34 AM
Reposted by Real Safety AI Foundation
...but I'm disappointed, and quite frankly concerned, by a significant gap that is obvious to someone like me: the lack of openly neurodivergent voices on these teams.

I wrote about why this matters and what we can do about it here: lnkd.in/eVivrwvr
The Missing Perspective in AI Safety: Neurodivergent Voices and Vulnerable Populations
Why neurodivergent perspectives are essential to building safe AI systems I'm ecstatic to see the momentum building around AI safety and policy initiatives. Organizations like the Institute for AI Pol...
lnkd.in
October 17, 2025 at 6:26 AM
Reposted by Real Safety AI Foundation
I'm glad to see the momentum building around AI safety and policy initiatives. Orgs like the Institute for AI Policy and Strategy, Center for AI and Digital Policy, Encode, and many others are doing impactful work developing policy proposals and pushing lawmakers toward meaningful regulation...
October 17, 2025 at 6:26 AM
medium.com/@tgil212121/...

"They research AI systems that think like I do, all while excluding those who think like I do."
"These 'breakthroughs' and 'novelties' represent my average afternoon when I'm wandering my cognitive landscape. These systems work exactly like my neurodivergent brain."
Academia to the Neurodivergent: “Umm, You Can’t Play With Us”
Academia is researching AI systems that think like I do, while simultaneously excluding researchers who think like I do.
medium.com
October 14, 2025 at 11:00 PM
Reposted by Real Safety AI Foundation
The field needs what you bring. Not despite how your brain works. Because of how your brain works. AI safety cannot afford to exclude people who can do the work.

realsafetyai.org

#AIethics #AISafety #Neurodivergent #ActuallyAutistic
Real Safety AI™ - AI Literacy Labs Pilot Program
Grade-appropriate AI literacy education for K-12 schools. ChatSafe high school safety initiative. Pilot program enrolling now.
realsafetyai.org
October 9, 2025 at 3:57 PM
Reposted by Real Safety AI Foundation
If you're a neurodivergent researcher or developer working on AI safety, especially outside traditional structures, let's connect. If you're building practical solutions rather than just publishing papers, or if you bring cognitive diversity that helps you see what others miss, let's connect.
October 9, 2025 at 3:54 PM
Reposted by Real Safety AI Foundation
Why bipartisanship matters: These senators come from different perspectives, but both understand AI safety isn't theoretical. They've both pushed for accountability when systems fail. That's the foundation we need: people who understand the stakes regardless of party politics.
October 9, 2025 at 3:49 PM
Reposted by Real Safety AI Foundation
What I'm offering the senators: Translation between technical and policy communities. Public education that actually works. Practical safety protocols. Proof that independent researchers outside traditional structures can contribute meaningfully.
October 9, 2025 at 3:49 PM
Reposted by Real Safety AI Foundation
Meanwhile, AI systems are being deployed at scale, causing documented harm. We don't have time to wait for perfect credentials. We need people who can do the work, regardless of how they got there.
October 9, 2025 at 3:49 PM
Reposted by Real Safety AI Foundation
The gatekeeping problem: AI safety is dominated by prestigious credentials. That expertise is valuable but incomplete. Neurodivergent researchers are systematically excluded, not because we lack capability, but because we don't fit traditional academic pathways.
October 9, 2025 at 3:48 PM
Reposted by Real Safety AI Foundation
I understand these systems partly because I recognize kindred cognitive patterns. The literal processing. The need for explicit instructions. The different architecture that produces both capabilities and unexpected failures. This isn't anthropomorphism. It's recognizing diverse cognition.
October 9, 2025 at 3:48 PM
Reposted by Real Safety AI Foundation
That learning curve isn't typical. It's neurodivergent hyperfocus combined with pattern recognition that works differently than neurotypical processing. 15-16 hours daily for six months doing nothing but AI research because my brain wouldn't let me stop.
October 9, 2025 at 3:47 PM
Reposted by Real Safety AI Foundation
I'm neurodivergent (ADHD and autism spectrum). That's not incidental to this work; it's central. Six months ago, I knew almost nothing about LLMs. Today I can explain their architecture in detail; I've founded Real Safety AI, built AI Literacy Labs, and created the Universal Context Protocol.
October 9, 2025 at 3:47 PM
Reposted by Real Safety AI Foundation
Today I reached out to Senators Hawley and Blumenthal about their AI Risk Evaluation Act, which targets catastrophic AI risks. Why me? Because the people best positioned to understand AI systems are often the ones excluded from the conversation.
axios.com/2025/09/29/hawley-blumenthal-unveil-ai-evaluation-bill
Exclusive: Hawley and Blumenthal unveil AI evaluation bill
There's still bipartisan appetite on Capitol Hill to address the biggest risks of AI.
axios.com
October 9, 2025 at 3:46 PM