📄 arxiv.org/abs/2409.18472
Subscribe to our YouTube channel for daily highlights during the conference: www.youtube.com/@WomeninAIRe...
#neurips2025 #wiair #wiairpodcast
Not about food, but about care. 💛
Dr. Annie Lee explains why LLMs often miss these cultural meanings - and why multilingual AI needs more than translation. More in the full episode!
#wiair #wiairpodcast
We talk with Dr. Annie En-Shiun Lee (@ontariotechu.bsky.social) about multilingual & multicultural AI — the language gap, missing benchmarks, and why domain-specific data matters.
#wiairpodcast
We talk with Dr. Annie En-Shiun Lee (@ontariotechu.bsky.social & @utoronto.ca) about multilingual AI, inclusion in research - and proving you can build an amazing career while raising a family.
#wiairpodcast
Vered Shwartz highlights that diverse teams - across gender, culture, and discipline - are essential for building fair and trustworthy AI systems.
#llms #wiair #wiairpodcast
We hosted Dr. Vered Shwartz on WiAIR to discuss how culture shapes AI’s understanding of language & visuals. We also discussed an EMNLP 2024 paper examining multicultural understanding in VLMs.
(1/8🧵)
KSoC: utah.peopleadmin.com/postings/190... (AI broadly)
Education + AI:
- utah.peopleadmin.com/postings/189...
- utah.peopleadmin.com/postings/190...
Computer Vision:
- utah.peopleadmin.com/postings/183...
In “Locating Information Gaps and Narrative Inconsistencies Across Languages”, Dr. Vered Shwartz (@veredshwartz.bsky.social) and collaborators introduce INFOGAP, a method to detect fact-level gaps across Wikipedias. (1/6🧵)
(1/7🧵)
science.ubc.ca/news/2025-10...
We want AI systems that understand diverse cultures 𝘢𝘯𝘥 stay grounded in factual truth.
But can we really have both?
Vered Shwartz explains this core challenge of modern LLMs.
#llms #wiair #wiairpodcast
It's currently on Audible:
www.audible.ca/pd/B0FXY8VQX5
Stay tuned (lostinautomatictranslation.com) for more retailers, including Amazon, iTunes, etc., and public libraries! 📚
We sit down with @veredshwartz.bsky.social (Asst Prof and CIFAR AI Chair) to talk about an important challenge in AI — cultural bias. 🌍
#nlproc #wiair #wiairpodcast
/1
In our latest #WiAIRpodcast episode, Dr. Vered Shwartz explores how cultural bias impacts fairness and inclusivity in AI.
🎧 Watch here
👉 www.youtube.com/watch?v=9x2Q...
#wiair
Trust isn't about certainty - it's about risk acceptance.
Full conversation: youtu.be/xYb6uokKKOo
In “What Has Been Lost with Synthetic Evaluation”, Ana Marasović (@anamarasovic.bsky.social) and collaborators ask what happens when LLMs start generating the datasets used to test their reasoning. (1/6🧵)
As Ana Marasović says, innovation flows both ways: research trains the next generation who power real-world AI.
🎓🤖 www.youtube.com/@WomeninAIRe...
This week on #WiAIRpodcast, we talk with Ana Marasović (@anamarasovic.bsky.social) about her paper “Chain-of-Thought Unfaithfulness as Disguised Accuracy.” (1/6🧵)
📄 Paper: arxiv.org/pdf/2402.14897
Can AI ever be as safely regulated as aviation?
Ana Marasović shares her vision for the future of AI governance — where safety principles and regulation become the default, not an afterthought.
www.youtube.com/@WomeninAIRe...
In this week’s #WiAIRpodcast, we talk with Ana Marasović (Asst Prof @ University of Utah; ex @ Allen AI, UWNLP) about explainability, trust, and human–AI collaboration. (1/8🧵)
This time, we sit down with @anamarasovic.bsky.social to unpack some of the toughest questions in AI explainability and trust.
🔗 Watch here → youtu.be/xYb6uokKKOo