Humane Intelligence
@humaneintelligence.bsky.social
We are an AI evaluations nonprofit dedicated to breaking down barriers to AI deployment for social good.
Learn more: www.humane-intelligence.org
Following our 2024 TFGBV red-teaming work with UNESCO, our Playbook on red teaming AI for social good is now available in French and Spanish! Sharing as 16 Days of Activism begins.

EN: unesdoc.unesco.org/ark:/48223/p...
FR: unesdoc.unesco.org/ark:/48223/p...
SP: unesdoc.unesco.org/ark:/48223/p...
November 25, 2025 at 8:05 PM
We are seeking volunteers to help redesign the user interface of our TFGBV taxonomy and ontology website.
We welcome contributions at any level, whether that is proposing new workflows, creating wireframes or prototypes, or building a front end.

Learn more: humane-intelligence.org/get-involved...
November 20, 2025 at 7:59 PM
AI in public health remains one of the most overlooked areas in the current wave of AI investment. In her op-ed for Security Brief TechDay, Mala Kumar explains how Humane Intelligence is working to address it through our new AI in Public Health Working Group.

securitybrief.com.au/story/why-ai...
Why AI in public health needs focus, funding, and community voice
A new working group urges more funding and community input to harness AI’s potential in public health, addressing social factors and equity gaps worldwide.
securitybrief.com.au
November 20, 2025 at 5:03 PM
Today is the last day to submit a response to our bias bounty on accessibility!

Please find here all relevant info: humane-intelligence.org/programs-ser...
Bias Bounty Challenge Set 4: Improving Accessibility in Digital Conferencing Facilities
Humane Intelligence, in collaboration with CoNA Lab and Valence AI, has launched a bias bounty challenge focused on improving accessibility for neurodivergent users in virtual meeting platforms like Z...
humane-intelligence.org
November 14, 2025 at 6:10 PM
According to a recent CNBC article, people with ADHD, autism, and dyslexia report that AI assistants are helping them thrive at work.

That’s exactly what our Bias Bounty 4 is all about!

- Submissions close this Friday!
- Join here: lnkd.in/gfdtHTij
- Read the article: lnkd.in/erq96y-8
November 12, 2025 at 3:11 PM
Announcing a new working group on AI in Public Health!
Humane Intelligence is collaborating with Rumi Chunara, who leads New York University’s Center for Health Data Science, to launch a new working group focused on exploring the role of AI in public health.

humane-intelligence.org/post/join-ou...
Join our new working group - AI in Public Health! - Humane Intelligence
Join our new AI in public health working group, in partnership with NYU's Center for Health Data Science (CHDS)
humane-intelligence.org
November 11, 2025 at 2:46 PM
Deadline extended! Submissions for the Accessibility Bias Bounty Challenge are now due November 14 at 11:59 PM ET.

With Design and Data Science tracks and a $6,000 prize pool, participants are invited to build AI tools that prioritize impacted communities.

humane-intelligence.org/programs-ser...
October 30, 2025 at 1:35 PM
There’s still time to join Bias Bounty 4: Accessibility in Digital Conferencing Facilities!

- Design and Data Science tracks
- $6,000 prize pool
- Contribute to a growing community of practice solving technical bias challenges

Submissions close 11/7.

humane-intelligence.org/programs-ser...
October 21, 2025 at 1:56 PM
Annie Brown, founder of Reliabl and Humane Intelligence’s Bias Bounty Data Scientist, joins Brent O. Phillips on the Humanitarian AI Today Voices flashpod to discuss our Accessibility Bias Bounty Challenge Set, open now through November 7, 2025.

Tune in here:
soundcloud.com/humanitarian...
Annie Brown from Humane Intelligence on their Bias Bounty Program
Voices is a new mini-series from Humanitarian AI Today. In daily five-minute flashpods we pass the mic to humanitarian experts and technology pioneers to hear about new projects, events, and perspectives.
soundcloud.com
October 15, 2025 at 1:31 PM
In a piece for the Columbia Climate School, Marco Tedesco explores the intersection of AI and the Sustainable Development Goals through the lens of our recent UNGA80 side event, “Honest Discussions at the Intersection of AI and the SDGs,” co-hosted by Humane Intelligence, Technology Salon and Compiler.

lnkd.in/g9vGveVd
October 13, 2025 at 3:25 PM
As we consider the current and future impact of AI, we’re proud to join forces with @partnershipai.bsky.social, a coalition bringing diverse voices together to ensure developments in AI benefit people and society. Learn more about our partnership:
partnershiponai.org/partnership-...
October 9, 2025 at 4:10 PM
We’re proud to celebrate Dr. Julie Hollek, Humane Intelligence’s Head of Engineering, for being named to the CTO Craft 100, recognizing 100 technology leaders shaping the future of the industry!

Congratulations, Julie, on this well-deserved recognition!
See the full list: lnkd.in/gnd3wQ6p
October 7, 2025 at 3:55 PM
Today we are launching the Accessibility in Digital Conferencing Facilities Bias Bounty Challenge in partnership with the CoNA Lab at Virginia State University and Valence AI.

- Design and Data Science tracks
- $6,000 prize pool

Submissions close 11/7

humane-intelligence.org/programs-ser...
October 6, 2025 at 12:55 PM
Want to learn more about our work, get involved in our latest Bias Bounty, or explore volunteer opportunities with Humane Intelligence?

Join one of our upcoming Open Community Calls!

📅 Early Edition: 9:00–10:00 AM ET
lnkd.in/gHweJHnE
📅 Late Edition: 12:00–1:00 PM ET
lnkd.in/gEvKmvYX
October 2, 2025 at 7:08 PM
At #UNGA80, Humane Intelligence, Technology Salon, and Compiler co-hosted a side event, “Honest Discussions at the Intersection of AI and the SDGs.”

Watch the full recap and livestream here: humane-intelligence.org/.../unga-80-...
September 29, 2025 at 1:47 PM
Join us at Columbia University on September 29, 2025 (5–7 PM ET) for a discussion of UNDP’s Human Development Report 2025: A Matter of Choice — People and Possibilities in the Age of AI.

Register here: lnkd.in/gNAFp9pj
More events: humane-intelligence.org/get-involved...
September 25, 2025 at 1:28 PM
At TrustCon 2025, Humane Intelligence and Columbia’s Technology & Democracy Initiative co-facilitated a hands-on red teaming workshop that stress-tested AI chatbots in educational and therapeutic scenarios.

This article shares what we learned.

www.techpolicy.press/red-teaming-...
Red Teaming Generative AI in Classrooms and Beyond | TechPolicy.Press
Jen Weedon, Theodora Skeadas, and Sarah Amos explore AI in classrooms and learnings from red teaming to build safer, more trustworthy chatbots for students.
www.techpolicy.press
September 19, 2025 at 5:23 PM
We’re proud to celebrate Humane Intelligence’s co-founder and current Distinguished Advisor, Dr. Rumman Chowdhury, for being named to Observer’s 2025 A.I. Power Index, a recognition of the leaders shaping the future of artificial intelligence.

observer.com/list/2025-ai...
@ruchowdh.bsky.social
100 Leaders Shaping the Future of Artificial Intelligence
They write the script that the rest of us follow.
observer.com
September 18, 2025 at 1:39 PM
What does it take to meaningfully stress-test AI systems around the world?
Our Chief of Staff, Theodora Skeadas, reflects on three of her favorite red teaming events from last year.
Read the full blogpost here: humane-intelligence.org/post/reflect...
Reflecting on three of my favorite red teaming events from this past year - Humane Intelligence
EVENT 1: Red teaming with NIST. Humane Intelligence, supported by the U.S. National Institute of […]
humane-intelligence.org
September 11, 2025 at 3:59 PM
We’re thrilled to welcome Mala Kumar as Interim ED of Humane Intelligence, with Dr. Rumman Chowdhury continuing as Distinguished Advisor. Learn about our strategy + fundraising priorities on our new site: humane-intelligence.org
September 8, 2025 at 5:06 PM
Reposted by Humane Intelligence
We're co-hosting a special UNGA side event along with @humaneintelligence.bsky.social and @meowtree.bsky.social of Technology Salon NY on 9/16 in NYC. Spend an evening with us digging into the big issues around how AI will affect the UN's SDGs.

Register to attend here: forms.gle/i2KM8roxJSUi...
August 18, 2025 at 1:09 PM
What does it really mean to build unbiased AI?
In this new piece from The Washington Post, @ruchowdh.bsky.social offers a clear reminder: political neutrality in AI models isn’t real.

Read the full article: www.washingtonpost.com/technology/2...
Trump is targeting ‘woke AI.’ Here’s what that means.
The president’s executive order could reshape chatbots’ politics. But experts say training AI models to be neutral is easier said than done.
www.washingtonpost.com
July 25, 2025 at 5:12 PM
Our Head of Impact, Mala Kumar, guest-hosted Humanitarian AI Today with André Heller Pérache, Director of Signpost at the International Rescue Committee, to discuss Signpost’s research on piloting an “information assistant” for humanitarian response.

Listen to the full episode:
soundcloud.com/humanitarian...
Andre Heller and Mala Kumar Discuss Signpost’s Pilot AI Assistant
Humanitarian AI Today, guest host Mala Kumar, Head of Impact at Humane Intelligence, sits down with Andre Heller, Director of Signpost at the International Rescue Committee (IRC). They discuss Signpos
soundcloud.com
July 23, 2025 at 4:16 PM
We’re proud to have supported the UK Foreign, Commonwealth & Development Office and the Department for Science, Innovation & Technology on a new global report on intimate image abuse, a widespread form of tech-facilitated gender-based violence.

assets.publishing.service.gov.uk/media/6878b4...
July 23, 2025 at 4:10 PM
A new resource for responsible AI just dropped—and it matters.
The General-Purpose AI Code of Practice is a voluntary framework to help model developers meet key requirements under the EU AI Act—covering transparency and systemic risk.
digital-strategy.ec.europa.eu/en/policies/...
@ec.europa.eu
July 10, 2025 at 9:22 PM