Prof. Nava Tintarev
@navatintarev.bsky.social
200 followers 230 following 85 posts
(she/her) Full Professor of Explainable AI, University of Maastricht, NL. Lab director of the lab on trustworthy AI in Media (TAIM). Director of Research at the Department of Advanced Computing Sciences. IPN board member (incoming 2026). navatintarev.com
navatintarev.bsky.social
“It actually doesn’t take much to be considered a difficult woman. That’s why there are so many of us.” ~ Jane Goodall
navatintarev.bsky.social
“You cannot get through a single day without having an impact on the world around you. What you do makes a difference, and you have to decide what kind of difference you want to make." ~ Jane Goodall
Reposted by Prof. Nava Tintarev
alansaid.com
A preliminary call for papers for #umap2026 is now available on the conference's website. Check it out, mark your calendars, and get to work on those papers. www.um.org/umap2026/cal...
@umapconf.bsky.social (#recsys2025)
Reposted by Prof. Nava Tintarev
tmiller-uq.bsky.social
I'm hiring again! Please share. I'm recruiting a postdoc research fellow in human-centred AI for scalable decision support. Join us to investigate how to balance scalability and human control in medical decision support. Closing date: 4 October (AEST).
uqtmiller.github.io/recruitment/
Reposted by Prof. Nava Tintarev
taimlab.bsky.social
Two papers by our PhD students have been accepted: “RecGaze: The First Eye Tracking and User Interaction Dataset for Carousel Interfaces” by Jingwei Kang at SIGIR 2025, and “kNN For Whisper And Its Effect On Bias And Speaker Adaptation” by Maya Nachesa at NAACL 2025. Please check them out!
navatintarev.bsky.social
A new addition for the summer is a placeholder gallery for visual explanation interfaces so visitors can see what these are and just how varied they can be (not yet platform-proofed).
navatintarev.bsky.social
Delayed summer announcement: my new website is up and should be more mobile-friendly than its predecessor.
navatintarev.bsky.social
Ehud Reiter writes more about this in his blog here: ehudreiter.com/2025/06/25/p...

Pre-print here: arxiv.org/abs/2506.18760
navatintarev.bsky.social
4) Domain shift: The world has changed since the model was built. This includes societal changes (e.g., legalisation of same-sex marriage) and changes in scientific knowledge and interventions.
navatintarev.bsky.social
2) Domain knowledge shows that the feature does not matter: Scientific evidence shows that the feature does not make a significant difference. 3) Insufficient data: The feature may matter, but the model builders did not have sufficient high-quality training data to reliably model the feature’s impact.
navatintarev.bsky.social
She identified four reasons why a feature may be ignored:
1) Data shows the feature does not matter: The feature is ignored because the data shows that the feature has minimal impact on the model’s prediction.
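(A minimal sketch of my own, not from the thread, of how reason 1 might be checked in practice: a permutation-importance test with scikit-learn on synthetic data. The data, model choice, and threshold below are assumptions for illustration only.)

```python
# Hypothetical check for "the data shows the feature does not matter":
# if shuffling a feature barely changes held-out accuracy, the model can
# reasonably ignore it (reason 1). Synthetic data; nothing here is from OPIS.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# 10 features, of which only 5 carry signal; the rest are pure noise.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: drop in accuracy when a feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    verdict = "likely ignorable (reason 1)" if mean - 2 * std <= 0 else "matters"
    print(f"feature_{i}: {mean:.3f} ± {std:.3f} -> {verdict}")
```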
navatintarev.bsky.social
Our joint PhD student Adarsa Sivaprasad is presenting her work at an AI and Healthcare conference: Patient-Centred Explainability in IVF Outcome Prediction. She has been studying what kind of explanations users need from OPIS, which is a tool that predicts the likelihood of success in IVF.
navatintarev.bsky.social
🔹 Demo Track – Bulat Khaertdinov (with Mirela Carmia Popa) showcasing VisualReF: an Interactive Image Search Prototype with Visual Relevance Feedback 🔍🖼️
🔹 The RecSys Challenge 2025 – Francesco Barile as co-organizer of this year’s challenge! 🔥 More info here: www.recsyschallenge.com/2025/
navatintarev.bsky.social
🔹 Doctoral Consortium – Dina Zilbershtein on Fair and Transparent Recommender Systems for Advertisements 💡
🔹 Short Paper Track – Cedric Waterschoot (with Francesco Barile) asking: “Consistent Explainers or Unreliable Narrators? Understanding LLM-generated Group Recommendations” 🤖📚
navatintarev.bsky.social
🚀 To summarize, the University of Maastricht and our Explainable Artificial Intelligence theme are heading to ACM RecSys 2025 with a line-up of contributions 🎉
✨ Here’s where you can find us:
navatintarev.bsky.social
Many thanks to the colleagues who supplied feedback on early drafts, and to others with whom I simply discussed these ideas less formally! Pre-print here: navatintarev.com/fai_tintarev....
navatintarev.bsky.social
I conclude by proposing constructive strategies for balancing empirical rigor with practical realities when assessing the quality of explainable AI:
a) systematic reporting of user, task, and context;
b) an investment in reproducibility studies, and
c) more meta-analyses of experiments.
navatintarev.bsky.social
If changing the user, task, or context ‘changes’ explanation quality by 10%, it may not be meaningful to report a 2-3% performance improvement that does not control for these variables.
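(A toy illustration of that point, mine rather than the post’s: the simulation below assumes a roughly 10-point spread in measured explanation quality across user/task/context conditions and a true 2.5-point improvement, and shows how unreliable an uncontrolled comparison then becomes. All numbers are assumptions.)

```python
# Toy simulation; all numbers are assumptions, not from the post.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20             # hypothetical evaluation conditions sampled per system
true_gain = 0.025  # the "2-3%" improvement being claimed
runs, detected, observed = 2000, 0, []

for _ in range(runs):
    baseline = rng.normal(0.700, 0.05, size=n)              # system A's user/task/context mix
    improved = rng.normal(0.700 + true_gain, 0.05, size=n)  # system B, a *different* mix
    _, p = stats.ttest_ind(improved, baseline)
    detected += p < 0.05
    observed.append(improved.mean() - baseline.mean())

print(f"true gain: {true_gain:+.3f}")
print(f"mean observed gain: {np.mean(observed):+.3f} (sd {np.std(observed):.3f})")
print(f"fraction of runs where the gain looks 'significant': {detected / runs:.2f}")
# When condition-to-condition variation (~10%) dwarfs the effect (~2-3%),
# uncontrolled comparisons both miss real gains and exaggerate noisy ones.
```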
navatintarev.bsky.social
Real-world conditions often shape system performance in ways that (purely) data-driven approaches don’t fully capture. More strongly, I warn that switching from user-centered to offline metric-based evaluation may appear to resolve some issues, but those issues are then latent rather than absent.
navatintarev.bsky.social
Explanations need to be tailored to the user, the task, and the context—whether that’s a domain expert making critical decisions, a layperson under time pressure, or someone seeking to improve a model. Without this alignment, it’s difficult to truly assess explanation quality.
navatintarev.bsky.social
In my Frontiers in Artificial Intelligence talk (at ECAI'25) I will present a position piece on XAI evaluation. I will share insights from nearly 20 years of studying how people interact with explanation interfaces. I draw lessons from multiple research communities: NLP, IR, and ML.
Reposted by Prof. Nava Tintarev
aclmeeting.bsky.social
🕊️ Lifetime Achievement Award at #ACL2025NLP

A standing ovation for Prof. Kathy McKeown, recipient of the ACL 2025 Lifetime Achievement Award! 🌟
navatintarev.bsky.social
Do you have an alternative to Facebook groups? I'd like to keep a community (monthly in-person events) running, but do not want to force members to stay on Facebook (or to pay, or to have their data sold for ads).
Reposted by Prof. Nava Tintarev
erc.europa.eu
📣 Ever considered applying for an ERC Starting Grant? The 2026 Call for proposals is now open!

Application portal 👉 lnkd.in/dcsPAwqJ
Information for Applicants 👉 lnkd.in/dsE6B8eE

Deadline to apply for #ERCStG is 14 October 2025.
navatintarev.bsky.social
But also from other chairs: program chair, workshop chair, etc.
But you’re right: this protects the organization, but for offenders it may feel too remote compared to more immediate (potential) rewards.