Alexandra Olteanu
@aolteanu.bsky.social
1.6K followers 430 following 55 posts
Ethical/Responsible AI. Rigor in AI. Opinions my own. Principal Researcher @ Microsoft Research. Grumpy Eastern European in North America. Lovingly nitpicky.
Pinned
aolteanu.bsky.social
We have to talk about rigor in AI work and what it should entail. The reality is that impoverished notions of rigor do not just lead to one-off undesirable outcomes but can have a deeply formative impact on the scientific integrity and quality of both AI research and practice 1/
Screenshot of the first page of a paper pre-print titled "Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor" by Olteanu et al. Paper abstract: "In AI research and practice, rigor remains largely understood in terms of methodological rigor -- such as whether mathematical, statistical, or computational methods are correctly applied. We argue that this narrow conception of rigor has contributed to the concerns raised by the responsible AI community, including overblown claims about AI capabilities. Our position is that a broader conception of what rigorous AI research and practice should entail is needed. We believe such a conception -- in addition to a more expansive understanding of (1) methodological rigor -- should include aspects related to (2) what background knowledge informs what to work on (epistemic rigor); (3) how disciplinary, community, or personal norms, standards, or beliefs influence the work (normative rigor); (4) how clearly articulated the theoretical constructs under use are (conceptual rigor); (5) what is reported and how (reporting rigor); and (6) how well-supported the inferences from existing evidence are (interpretative rigor). In doing so, we also aim to provide useful language and a framework for much-needed dialogue about the AI community's work by researchers, policymakers, journalists, and other stakeholders."
aolteanu.bsky.social
Love this analogy
olivia.science
People who coax chatbots into sensible answers are basically opening and closing the fridge until it contains something you wanna eat. Yes, eventually you get hungrier & eat the stuff in there, but what changed was your cognition. The fridge stayed the same. You changed your mind about the contents.
aolteanu.bsky.social
Perhaps not as much about how real it is or is not, but this is a paper that substantially shaped my views on this topic (I have also been surprised at times by how different folks' conceptualizations of reproducibility can be) cs.uwaterloo.ca/~brecht/cour...
aolteanu.bsky.social
As we prepare the camera-ready version of this paper, I am also reflecting on how to make this work more practical and useful: rigor cards to make the different facets of rigor easier to grasp? Workshops to provide a forum for discussion and debate? Something else that would be helpful to you?
aolteanu.bsky.social
This was accepted to #NeurIPS 🎉🎊

TL;DR: Impoverished notions of rigor can have a formative impact on AI work. We argue for a broader conception of what rigorous work should entail, one that goes beyond methodological issues to include epistemic, normative, conceptual, reporting & interpretative considerations
aolteanu.bsky.social
Not sure if it has what you need or if they are still collecting, but this might be worth checking out archive.org/details/twit...
aolteanu.bsky.social
Listening to a workshop panel at #acl2025, I am realizing that we have been saying more or less the same things and having more or less the same conversations for so many years
aolteanu.bsky.social
#acl2025 I think there is plenty of evidence for the risks of anthropomorphic AI behavior and design (re: keynote) -- find @myra.bsky.social and me if you want to chat more about this or about our "Dehumanizing Machines" ACL 2025 paper
aolteanu.bsky.social
Our FATE MTL team has been working on a series of projects on anthropomorphic AI systems, for which we recently put out a few pre-prints I'm excited about. While working on these, we tried to think carefully not only about key research questions but also about how we study and write about these systems
Reposted by Alexandra Olteanu
melaniemitchell.bsky.social
In a stunning moment of self-awareness, the Wall Street Journal headline writers admitted that they don't know how LLM chatbots work.
aolteanu.bsky.social
Who is attending @aclmeeting.bsky.social in Vienna? Reach out or find me there if you want to chat! #acl2025nlp
Reposted by Alexandra Olteanu
lukestark.bsky.social
My university has announced a fund to essentially poach doctoral students from US institutions. DM me if you do work on the history/social impacts of AI and are interested in being poached 😂
aolteanu.bsky.social
Not sure who needs to hear this, but what people want AI systems to do, what AI systems do, and what people believe AI systems do are not the same thing. Just because one wants or believes AI systems do or can do certain things doesn't mean they actually do those things.
Reposted by Alexandra Olteanu
hannawallach.bsky.social
If you're at @icmlconf.bsky.social this week, come check out our poster on "Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge" presented by the amazing @afedercooper.bsky.social from 11:30am--1:30pm PDT on Weds!!! icml.cc/virtual/2025...
ICML Poster: Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge | ICML 2025
aolteanu.bsky.social
tiny, but perhaps in their defense, Cailler is probably the best chocolate in the world 🍫
Reposted by Alexandra Olteanu
mariaa.bsky.social
Someone asked me today how to get better at scientific writing. I'm not the best person to ask because I find my own writing very inadequate! But the tips I thought of were:

1. Practice, and practice with co-authors who are better writers than you. Observe how they make edits and copy them.

(1/n)
aolteanu.bsky.social
Congrats Koustuv! So well deserved! ❤️
aolteanu.bsky.social
I think the community's ability to look inwards and be self-critical is part of what makes it special, and this is something I believe is important to preserve even when there is disagreement on how to do things or perhaps just different theories of change #facct2025
aolteanu.bsky.social
FAccT is such a special community & many of us have invested a lot of service time/effort to support it over the years. I do believe engaging with uncomfortable questions & dialogue is important even when there is criticism (which can be hard to hear, can feel unfair/demotivating & sucks) #facct2025
Reposted by Alexandra Olteanu
Flattered and shocked that our paper received the #facct2025 best paper award.
facct.bsky.social
🏆 Announcing the #FAccT2025 best paper awards! 🏆

Congratulations to all the authors of the three best papers and three honorable mention papers.

Be sure to check out their presentations at the conference next week!

facct-blog.github.io/2025-06-20/b...
Announcing Best Paper Awards
The Best Paper Award Committee was chaired this year by Alex Chouldechova and included six Area Chairs. The committee selected three papers for the Best Paper Award and recognized three additional pap...
aolteanu.bsky.social
Two years after the craft session on theories of change in responsible AI, I am glad to see this discussion taking center stage as a keynote panel #facct2025
aolteanu.bsky.social
There is a lot of talk and effort to figure out how genAI is different (I am also guilty of this!) -- the reality is that genAI is not that different, and it is not that new either; it was hard to evaluate in the past, and it is just as hard to evaluate now #facct2025
Reposted by Alexandra Olteanu
asiabiega.bsky.social
Your #FAccT2025 General Chairs @sciorestis.bsky.social, @metaxa.net, and I, reporting from the venue.

We're looking forward to welcoming you to the Athens Conservatoire or online!
aolteanu.bsky.social
That would be awesome! See you soon!