sonyai.bsky.social
@sonyai.bsky.social
Our 2025 Year in Review is here. This year we advanced responsible data practices with #FHIBE, introduced new tools for music and media creation, strengthened sensing and imaging pipelines, and expanded #RL research from #GTSophy to adaptive agents. Read the full recap: bit.ly/3YG9eoN
December 22, 2025 at 7:52 PM
LLM-BRec = faster, more personal recs.
50% less training, 80% less inference, better results.
👉 Read the paper: bit.ly/45c7Q0K
December 16, 2025 at 8:45 PM
Three new studies from #SonyAI explore fairness in music generation, from #unlearning and #attribution to #recognition and #protection.
Creativity should begin, and remain, with people.
Read more: bit.ly/4q9GhNK
#AI #Music #Research
December 15, 2025 at 6:20 PM
Smarter ensembles for translation.
#SmartGen picks the best paths with RL — faster, leaner, better.

👉 Read the full paper: bit.ly/3YmJyNK
December 9, 2025 at 9:27 PM
💫 Proud to share that #FHIBE has been featured on the cover of @nature.com. The issue spotlights images from our globally diverse, consent-driven dataset designed to benchmark fairness in AI. Read the full feature via Nature: bit.ly/4pIpcu3
December 5, 2025 at 10:13 PM
Alice Xiang, Global Head of AI Governance at Sony Group Corporation and Lead Research Scientist at Sony AI, on FHIBE: Global diversity. True consent. Scientific rigor.
Watch A Fair Reflection + explore fairnessbenchmark.ai.sony
#FHIBE #FairAI #AIFairness #EthicalAI #SonyAI
December 4, 2025 at 10:21 PM
Detect dataset use without insider access.

SMI flags whether your data was used to train an LLM or VLM.

👉 Read the full research paper: bit.ly/48fZK9F
December 3, 2025 at 3:10 AM
AI datasets shape how we’re seen. But who gets to decide what’s fair?
Watch “A Fair Reflection,” our short film on the FHIBE dataset—built with consent, representation & mindfulness.
📽️ Visit the site to learn more:
👉 fairnessbenchmark.ai.sony
December 2, 2025 at 6:12 PM
Built with consent, FHIBE is a new benchmark for fairness in vision tasks. 🎬 Our short film A Fair Reflection shows why that matters. Explore the film and benchmark at fairnessbenchmark.ai.sony
#FHIBE #FairnessInAI #EthicalAI
December 1, 2025 at 6:59 PM
What does it take to build a #dataset with #consent, diversity & transparency at its core?
Discover FHIBE—the Fair Human-Centric Image Benchmark—and the insights from the team who built it: bit.ly/3LObF5r
#SonyAI #FHIBE #EthicalAI #ResponsibleAI
November 26, 2025 at 5:36 PM
Meet #FHIBE — just published in @nature.com.
A Fair Human-Centric Image Benchmark built to reflect us all and expose bias in the data behind AI vision systems.
Watch A Fair Reflection → fairnessbenchmark.ai.sony
#AIEthics #FairAI #SonyAI
November 25, 2025 at 7:51 PM
Sony AI’s FHIBE — just published in Nature — sets a new benchmark for evaluating fairness in AI models for computer vision. Hear from Michael Spranger in A Fair Reflection, our short film on FHIBE’s creation → fairnessbenchmark.ai.sony
#FHIBE #EthicalAI #SonyAI
November 25, 2025 at 3:01 AM
🚨 Big news: FHIBE is here.
Sony AI’s Fair Human-Centric Image Benchmark, published in Nature, is the first globally diverse, consent-driven dataset to benchmark AI fairness.

Explore the research + dataset: bit.ly/3JKaqnm
#FHIBE #AIethics #SonyAI
November 19, 2025 at 8:07 PM
Reposted
A paper in Nature presents the Fair Human-Centric Image Benchmark (FHIBE), a database of more than 10,000 human images developed by Sony AI to evaluate biases in artificial intelligence models for human-centric computer vision. go.nature.com/4qP0MAK 🧪
November 6, 2025 at 2:13 PM