Stanford HAI
@stanfordhai.bsky.social
2.5K followers 110 following 160 posts
The official account of the Stanford Institute for Human-Centered AI, advancing AI research, education, policy, and practice to improve the human condition.
Pinned
stanfordhai.bsky.social
📣 Announcing the AI for Organizations Grand Challenge, a new competition for scholars to help organizations enter the era of AI. Google DeepMind and @stanfordhai.bsky.social invite researchers from any university worldwide to submit their boldest ideas. Learn more: hai.stanford.edu/aiogc
stanfordhai.bsky.social
Can AI generate new DNA and show us how our genomes interact at a molecular level? The results could reveal novel insights in biology and pave the way for personalized medicine. Meet the scholars behind Evo 2 at the Hoffman-Yee Symposium on Oct. 14. hai.stanford.edu/events/hoffm...
stanfordhai.bsky.social
What if an AI model could predict your risk and progression of Alzheimer’s or Parkinson's? Scholars are building a world model of the brain that could create better predictions for diagnosis and care. Learn more at the Hoffman-Yee Symposium on Oct. 14: hai.stanford.edu/events/hoffm...
stanfordhai.bsky.social
In collaboration with @stanforddata.bsky.social, we kicked off our fall seminar series with HAI faculty affiliate @brianhie.bsky.social. He presented Evo 2, an open-source tool that can predict the form and function of proteins encoded in the DNA of all domains of life. 🧬 hai.stanford.edu/events/brian...
stanfordhai.bsky.social
📸 Early-career workers in AI-exposed roles faced a 13% drop in employment after generative AI adoption, according to research by Bharat Chandar, Ruyu Chen, and @erikbryn.bsky.social presented at today's Digital Economy Lab seminar. Read the paper here: digitaleconomy.stanford.edu/publications...
stanfordhai.bsky.social
📣 NEW: How can we validate claims about AI? AI companies often base their testing on specific tasks yet overstate their systems' overall capabilities. Our latest policy brief presents a three-step validation framework for separating legitimate from unsupported claims. hai.stanford.edu/policy/valid...
stanfordhai.bsky.social
“When only a few have the resources to build and benefit from AI, we leave the rest of the world waiting at the door,” said @stanfordhai.bsky.social Senior Fellow @yejinchoinka.bsky.social during her address to the UN Security Council. Read her full speech here: hai.stanford.edu/policy/yejin...
stanfordhai.bsky.social
How do educators decide whether to use AI tools? Stanford researchers gathered 60+ K-12 math educators nationwide to understand their AI needs and perspectives and to inform better design for ed tech tools.

Here are their findings: hai.stanford.edu/news/how-mat...
How Math Teachers Are Making Decisions About Using AI | Stanford HAI
A Stanford summit explored how K-12 educators are selecting, adapting, and critiquing AI tools for effective learning.
hai.stanford.edu
Reposted by Stanford HAI
rbaltman.bsky.social
AI is revolutionizing drug discovery and opening doors to novel treatments. I spoke with Jim Weatherall about how @AstraZeneca and @Stanford University School of Medicine are collaborating to blend the strengths of industry and academia. Tune in: www.science.org/content/webi...
AI meets medicine: How academic–industry alliances are accelerating drug discovery
www.science.org
stanfordhai.bsky.social
Many teachers are concerned about AI getting in the way of learning, but a far more dangerous trend is emerging: kids using “undress” apps to create deepfake nudes of their peers. @riana.bsky.social studies the impact of AI-generated child sexual abuse material: hai.stanford.edu/news/how-do-...
How Do We Protect Children in the Age of AI? | Stanford HAI
Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.
hai.stanford.edu
stanfordhai.bsky.social
Can we achieve political neutrality in AI? Our latest brief argues that while true neutrality is not technically possible, there are ways to approximate it. We introduce a framework of eight techniques for approximating political neutrality in AI models: hai.stanford.edu/policy/towar...
Toward Political Neutrality in AI | Stanford HAI
This brief introduces a framework of eight techniques for approximating political neutrality in AI models.
hai.stanford.edu
stanfordhai.bsky.social
HAI Senior Research Scholar and Policy Fellow Rishi Bommasani is working across disciplines to address complex questions around AI governance. Recently, he authored a Science paper, joining scholars in setting out a vision for evidence-based AI policy. hai.stanford.edu/news/fosteri...
Fostering Effective Policy for a Brave New AI World: A Conversation with Rishi Bommasani | Stanford HAI
The senior research scholar and policy fellow is working across disciplines to address complex questions around AI governance.
hai.stanford.edu
stanfordhai.bsky.social
📸 @stanfordhai.bsky.social experts at today's "The Next Revolution of AI: Impact Summit" urge us to guide AI’s future with reasoned optimism and resilience to benefit society and future generations. The event brings together top minds to discuss AI’s next wave in science, industry & beyond.
stanfordhai.bsky.social
Google DeepMind and @stanfordhai.bsky.social scholars @mavelous-mav.bsky.social and @mbernst.bsky.social invite academic researchers to enter the AI for Organizations Grand Challenge. Help us find the best ideas for shaping the future of collaboration in the workplace: hai.stanford.edu/aiogc
stanfordhai.bsky.social
📸 Today from DC: @stanfordhai.bsky.social Faculty Affiliate Michelle Mello testified in Congress on AI in healthcare. To boost AI adoption, she discussed policy changes that could build trust in AI's performance. Read her testimony here: bit.ly/4ngI56k
stanfordhai.bsky.social
📢 New policy brief: Medicare Advantage enrolls more than half of all Medicare beneficiaries. Our latest brief introduces two algorithms that can promote fairer Medicare Advantage spending for minority populations. hai.stanford.edu/policy/incre...
stanfordhai.bsky.social
Celebrating our exceptional women leaders! 👏

Congratulations to our Founding Co-Director Fei-Fei Li and Senior Fellow Yejin Choi on being recognized in this year’s #TIME100AI Shapers and Thinkers list!

Their work and perspectives are featured here:
time.com/collections/...
time.com/collections/...
stanfordhai.bsky.social
Stanford HAI and Google DeepMind invite scholars to submit bold ideas on how AI can improve and reimagine organizations. Join our virtual Q&A tomorrow to learn more about the AI for Organizations Grand Challenge:

📅 9 am PT: bit.ly/45DiheH
📅 6 pm PT: bit.ly/3HwsxfD
stanfordhai.bsky.social
In 2023, AI companies made commitments toward AI safety. New research shows only half of them are being followed. As policymakers debate voluntary vs. mandatory rules, is voluntary still the way to go? HAI scholar Rishi Bommasani was quoted in this article: www.fastcompany.com/91389117/bid...
Biden-era AI safety promises aren’t holding up, and Apple’s the weakest link
Only half of the voluntary AI commitments made in 2023 by 16 large AI companies are being followed, a new analysis suggests.
www.fastcompany.com
stanfordhai.bsky.social
At @stanfordhai.bsky.social, we believe that innovation thrives at the intersection of disciplines. Through our Postdoctoral Fellowship program, we support promising Stanford researchers whose work transcends traditional academic boundaries. hai.stanford.edu/news/stanfor...
stanfordhai.bsky.social
Scholars at @stanfordhai.bsky.social and @acceleratelearning.bsky.social issued a response to the U.S. Department of Education's request for information on advancing AI in education. Read their call for anchoring the approach in proven research here: hai.stanford.edu/policy/respo...
stanfordhai.bsky.social
RadGPT helps patients understand their radiology reports. “We hope that our technology won’t just help to explain the results, but will also help to improve the communication between doctor and patient,” said @curtlanglotz.bsky.social, senior author of the study. hai.stanford.edu/news/new-lar...
New Large Language Model Helps Patients Understand Their Radiology Reports | Stanford HAI
‘RadGPT’ cuts through medical jargon to answer common patient questions.
hai.stanford.edu