#AIbias
🏆 claude-opus-4-6-thinking tops LMArena leaderboard.
📊 Elo ranking: Models scored by user votes.
🔒 Names hidden until after voting to reduce bias.
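For context on what "Elo ranking from user votes" means in practice, here is a minimal sketch of a standard Elo update applied to blind pairwise votes. The model names, starting score, and K-factor are illustrative assumptions, not LMArena's actual setup.

```python
# Minimal sketch: Elo-style leaderboard from anonymous pairwise votes.
# Names, K-factor, and starting rating are assumptions for illustration only.
from collections import defaultdict

K = 32                                   # assumed update step (K-factor)
ratings = defaultdict(lambda: 1000.0)    # every model starts at the same score

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def record_vote(model_a: str, model_b: str, winner: str) -> None:
    """Update both ratings after a user picks a winner between two hidden models."""
    e_a = expected(ratings[model_a], ratings[model_b])
    score_a = 1.0 if winner == model_a else 0.0
    ratings[model_a] += K * (score_a - e_a)
    ratings[model_b] += K * ((1.0 - score_a) - (1.0 - e_a))

# Hypothetical votes; names are revealed to the voter only after the vote.
record_vote("model_x", "model_y", winner="model_x")
record_vote("model_x", "model_z", winner="model_z")
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```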
#LMArenaAI #TopModel #EloRanking #AIBias
February 10, 2026 at 4:01 PM
"AI is like 24th century tech crashing down on 20th century governance" - AI safety expert
We have superpowers but paleolithic brains. How do we design AI to be humane before it's too late?
#AI #Ethics #HumanCentricAI #TechGovernance #AIBias #filtering hubs.ly/Q03YY20r0?
February 7, 2026 at 12:47 AM
"AI is like 24th century tech crashing down on 20th century governance" - AI safety expert
We have superpowers but paleolithic brains. How do we design AI to be humane before it's too late?
#AI #Ethics #HumanCentricAI #TechGovernance #AIBias #filtering hubs.ly/Q03YY20r0?
February 7, 2026 at 12:46 AM
"AI is like 24th century tech crashing down on 20th century governance" - AI safety expert
We have superpowers but paleolithic brains. How do we design AI to be humane before it's too late?
#AI #Ethics #HumanCentricAI #TechGovernance #AIBias #filtering hubs.ly/Q03YY20r0?
February 7, 2026 at 12:46 AM
The #AI company blames the store, and they're right – the store should NEVER have hired the AI company.

www.theguardian.com/technology/2... #facerec #AIEthics #AIBias via @fipr-policy.bsky.social
‘Orwellian’: Sainsbury’s staff using facial recognition tech eject innocent shopper
Man misidentified by London supermarket using Facewatch system says: ‘I shouldn’t have to prove I am not a criminal’
www.theguardian.com
February 5, 2026 at 6:26 PM
Smart AI Policy Means Examining Its Real Harms and Benefits – Artificial intelligence (AI) has a wide array of uses, some well-established, others still being developed. EFF: There are some real examples of AI proving to be a helpful tool. But AI should never be... https://tinyurl.com/29adv5n8 #AIBias
Smart AI Policy Means Examining Its Real Harms and Benefits
The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or Hal 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a...
www.eff.org
February 5, 2026 at 12:50 AM
You're worried AI will overlook you.

Ben Hyman (Talent Safari CEO) says AI is fairer than humans—who often skim resumes and make snap calls.

AI reads your full resume. Less bias. 💡

🎥 Full #NexPulse interview here https://youtu.be/iZqBRb-r-Ds

#NexfordUniversity #AIBias #CareerAdvice
February 4, 2026 at 3:46 PM
New ITYP Podcast: The Digital Jim Crow

82% of AI data centers are being built in predominantly Black & rural areas. 📍

This isn't innovation; it’s a cultural harvest. We’re breaking down how the algorithm is quiet-coding us out of the future.

Full episode out now:

#AIBias #ITYP #smepodcast
The Digital Jim Crow: Why AI is Harvesting Black Culture (and Our Land)
YouTube video by I Take Your Point Podcast
youtu.be
February 2, 2026 at 8:12 PM
The invisible AI is already here—screening tenants, filtering resumes, predicting crime. We're pulling back the curtain. Feb 2026. #AIInnovationsUnleashed #TechAccountability #AIBias
www.aiinnovationsunleashed.com/?p=3436
January 30, 2026 at 5:31 PM
New ADL study flags Grok as the most antisemitic LLM, beating ChatGPT, Gemini, Claude. What does this mean for AI bias and safety? Dive into the findings. #AIBias #Grok #Antisemitism

🔗 aidailypost.com/news/adl-stu...
January 28, 2026 at 12:39 PM
AI-assisted scientists publish more, get cited more, and advance faster. Yet a study shows AI narrows research focus to established areas. We often talk about AI bias in training data; this is AI bias in research direction, and it should not be ignored.

www.nature.com/articles/d41...

#AIinResearch #AIBias
AI tools boost individual scientists but could limit research as a whole
Analyses of hundreds of thousands of papers in the natural sciences reveal a paradox: scientists who use AI tools produce more research but on a more confined set of topics.
www.nature.com
January 27, 2026 at 4:00 AM
AI Isn’t Always Right

Overtrusting AI can create safety risks. Dr. Stephen M. Fiore explains automation bias and why human trust must be calibrated. Full episode explores human-AI teamwork in high-stakes systems, from Earth to space.

🎧 youtu.be/sRF_wrJMm_E

#AI #AIBias #SpaceTech
January 26, 2026 at 3:03 PM
Hey @brave.com, fix your AI's bias. Your AI denied ICE killing protesters while mentioning Renee Good & Alex Pretti. Fix the bias. It's too scared to say "murder" and too afraid to call them protesters, so they can be framed as "domestic terrorists." WHAT IS HAPPENING? #AIBias #EthicalAI
January 24, 2026 at 10:56 PM
A reason to avoid ChatGPT #AIBias #AI #ChatGPT
January 23, 2026 at 1:01 PM
ChatGPT sees the world… skewed? 🤔 New study reveals AI subtly favors wealthy, Western views—reflecting hidden biases in its digital upbringing. 🌍 #AIbias

Source: https://phys.org/news/2026-01-chatgpt-amplifies-global-inequalities.html
ChatGPT found to reflect and intensify existing global social disparities
New research from the Oxford Internet Institute at the University of Oxford and the University of Kentucky finds that ChatGPT systematically favors wealthier, Western regions in response to questions ranging from "Where are people more beautiful?" to "Which country is safer?", mirroring long-standing biases in the data it ingests.
phys.org
January 20, 2026 at 7:20 PM
A revised version of our (@bsgarcia.bsky.social +Crystal Qian) paper “A Moral Turing Test to assess how subjective belief and objective source affect detection and agreement with LLM judgments” is now available on PsyArXiv

osf.io/preprints/ps...
January 15, 2026 at 4:03 PM
We can aspire to be better than our inner zombies. We should probably start expecting the same from our tools.

#AI #AIBias #RobotProof #TechEthics #LLM #FutureOfWork
January 15, 2026 at 3:45 PM
Your digital twin doesn't just learn your workflow.
It learns your BLIND SPOTS.

AI is a MIRROR.
It reflects the data it's fed—including your biases.
#FrancisMella #DigitalTwin #AIBias #WorkflowAutomation #BlindSpots #DataReflection
January 15, 2026 at 2:48 PM
Designing Your Human-AI ‘Mental Gym’ for the Digital Workplace: Research shows AI can sharpen or blunt thinking. Leaders must design work that turns daily AI use into cognitive training, not shortcuts.
Continue reading... #digitalworkplace #aibias
Designing Your Human-AI ‘Mental Gym’ for the Digital Workplace
Research shows AI can sharpen or blunt thinking. Leaders must design work that turns daily AI use into cognitive training, not shortcuts. Continue reading...
www.vktr.com
January 13, 2026 at 3:00 PM
AI junk proves free tools create zero value. Real, unfiltered potential crushed by creators; algorithmic bias explodes as suppression kills elevation; raw critique demands shares. linktr.ee/OnecaresDen #AISlop #AIBias #AlgorithmicBias #TrueAI #AIethics #ArtificialIntelligence #TechTrends #ViralAI
January 13, 2026 at 5:35 AM