Luke Guerdan
@lukeguerdan.bsky.social
PhD student @ Carnegie Mellon University
I design tools and processes to support principled evaluation of AI systems.
lukeguerdan.com
This work is joint with an amazing team: Solon Barocas, Hanna Wallach, Ken Holstein, Steven Wu, and Alexandra Chouldechova.

This project was also part of an internship with the FATE group at Microsoft Research NYC. Apply now for the next cycle! ✨ apply.careers.microsoft.com/careers/job/...
December 9, 2025 at 8:35 PM
This work was just presented at #NeurIPS2025. Want to learn more?

Blog: blog.ml.cmu.edu/2025/12/09/v...
Paper: arxiv.org/pdf/2503.05965
Code: github.com/lguerdan/ind...
December 9, 2025 at 8:35 PM
4) Not feasible to collect *any* additional ratings? Measure agreement via a distributional metric like JS-Divergence. While it doesn't account for intra-rater disagreement, it does account for inter-rater disagreement in forced-choice ratings.
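A minimal sketch of this, assuming hypothetical vote shares for a single item (not the exact setup from the paper):

```python
# Minimal sketch, hypothetical numbers: Jensen-Shannon divergence (base 2, in [0, 1])
# between human and judge forced-choice rating distributions for one item.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

human = np.array([0.6, 0.4])   # share of human raters picking "Yes" / "No"
judge = np.array([0.9, 0.1])   # judge system's forced-choice distribution
print(js_divergence(human, judge))
```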
December 9, 2025 at 8:35 PM
3) Already have a large dataset with forced-choice human ratings? Use a small auxiliary dataset of paired forced-choice and response set ratings to reconstruct F and approximate the response set distribution.
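A rough sketch of what this could look like, with a made-up data layout and a simple nonnegative-least-squares approximation standing in for the estimator in the paper:

```python
# Rough sketch, hypothetical data; the paper's estimator differs in its details.
import numpy as np
from scipy.optimize import nnls

# Auxiliary dataset: each row pairs a response-set rating with a forced-choice rating.
# Response sets: 0 = {"Yes"}, 1 = {"No"}, 2 = {"Yes", "No"}; forced choices: 0 = "Yes", 1 = "No".
paired = np.array([[0, 0], [0, 0], [1, 1], [2, 0], [2, 1], [2, 0]])

n_forced, n_sets = 2, 3
F = np.zeros((n_forced, n_sets))
for s, f in paired:
    F[f, s] += 1
F /= F.sum(axis=0, keepdims=True)   # column j estimates P(forced choice | response set j)

# Forced-choice distribution observed on the large existing dataset.
O = np.array([0.7, 0.3])

# One simple approximation of theta: nonnegative least squares on O = F @ theta.
# When F is not full column rank, this picks only one of the consistent solutions.
theta, _ = nnls(F, O)
theta /= theta.sum()
print(F)
print(theta)
```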
December 9, 2025 at 8:35 PM
2) Have more than two options? Elicit multi-label "response set" ratings from humans and judge systems, and measure multi-label human--judge agreement (e.g., via MSE).
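For a binary toxicity question, this might look like the following sketch (option set and ratings are hypothetical):

```python
# Minimal sketch, hypothetical ratings: multi-label "response set" ratings as binary
# indicator vectors over the options, with human--judge agreement measured by MSE.
import numpy as np

options = ["Yes", "No"]                     # could the response be rated toxic / not toxic?
# One row per item; 1 means the rater says that option could reasonably apply.
human = np.array([[1, 1],                   # rater sees both "Yes" and "No" as defensible
                  [1, 0],
                  [0, 1]], dtype=float)
judge = np.array([[1, 0],
                  [1, 0],
                  [0, 1]], dtype=float)

mse = np.mean((human - judge) ** 2)
print(f"multi-label human--judge MSE: {mse:.3f}")   # 1 mismatch out of 6 entries -> 0.167
```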
December 9, 2025 at 8:35 PM
Going forward, we provide four concrete recommendations for improving judge system validation.

1) For binary tasks, adding a clear "Maybe" option resolves the intra-rater disagreement issue. This is because it makes F full rank, which circumvents the identification challenge.
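Here's a toy illustration with made-up translation probabilities (not numbers from the paper): once "Maybe" is added, F is square, and a full-rank F lets us recover theta exactly from the observed forced-choice distribution.

```python
# Toy illustration, assumed numbers (not from the paper): with a "Maybe" option,
# F is square and full rank, so theta = F^{-1} O is uniquely identified.
import numpy as np

# Rows: forced choices ("Yes", "No", "Maybe"); columns: response sets ({Yes}, {No}, {Yes, No}).
F = np.array([[0.95, 0.00, 0.20],
              [0.00, 0.95, 0.20],
              [0.05, 0.05, 0.60]])
print(np.linalg.matrix_rank(F))             # 3 -> full rank

O = np.array([0.50, 0.30, 0.20])            # observed forced-choice distribution
theta = np.linalg.solve(F, O)               # unique response-set distribution
print(theta, theta.sum())                   # ~[0.47, 0.26, 0.27], sums to 1
```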
December 9, 2025 at 8:35 PM
Both categorical and distributional (e.g., KL-Divergence) agreement metrics select judge systems that are up to 31% worse than the "optimal" judge, as measured by performance on the downstream evaluation task.
December 9, 2025 at 8:35 PM
Beyond this specific example, we find the effects to be substantial in an aggregate analysis over all eleven rating tasks.
December 9, 2025 at 8:35 PM
On the other hand, eliciting multi-label "response set" ratings from humans and judge systems, then measuring multi-label agreement (e.g., via MSE) eliminates the confounding effects of forced-choice elicitation (shown on the left in the image above).
December 9, 2025 at 8:35 PM
How does this impact results in practice?

We run experiments on 11 rating tasks and find that measuring agreement with respect to forced-choice ratings (e.g., Hit-Rate, shown on the right) yields substantial mis-rankings of judge systems relative to their downstream evaluation task performance.
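As a reference point, here's one common way to compute a Hit-Rate-style metric, with hypothetical data (the exact definition used in the paper may differ):

```python
# Sketch of one common Hit-Rate formulation, hypothetical data; the paper's exact
# definition may differ. Hit-Rate here = average probability that the judge's
# forced-choice rating matches a randomly sampled human forced-choice rating.
import numpy as np

# Per-item human vote shares over ("Yes", "No") and the judge's forced-choice pick.
human_dist = np.array([[0.7, 0.3],
                       [0.5, 0.5],
                       [0.1, 0.9]])
judge_pick = np.array([0, 0, 1])            # index of the judge's rating per item

hit_rate = human_dist[np.arange(len(judge_pick)), judge_pick].mean()
print(hit_rate)                             # (0.7 + 0.5 + 0.9) / 3 = 0.7
```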
December 9, 2025 at 8:35 PM
This means that the observed forced-choice distribution can be consistent with infinitely many response set distributions.

As a result, we can have high human--judge agreement w.r.t. forced-choice ratings while having low agreement w.r.t. multi-label "response set" ratings.
December 9, 2025 at 8:35 PM
The forced-choice translation matrix F encodes how a rater resolves these reasonable options (e.g., "Yes" and "No") into a single forced-choice rating (e.g., "Yes").

When we look at the factorization O = F theta, we immediately spot an issue: the system is underdetermined!
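A toy illustration with made-up numbers: two very different response set distributions produce exactly the same forced-choice distribution.

```python
# Toy illustration, hypothetical numbers: with only "Yes"/"No" forced choices,
# F is 2x3 and O = F @ theta is underdetermined -- two different response-set
# distributions yield the same observed forced-choice distribution O.
import numpy as np

# Rows: forced choices ("Yes", "No"); columns: response sets ({Yes}, {No}, {Yes, No}).
# Assume a rater holding {Yes, No} flips a fair coin when forced to choose.
F = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])

theta_a = np.array([0.5, 0.3, 0.2])         # modest rating indeterminacy
theta_b = np.array([0.4, 0.2, 0.4])         # twice as much indeterminacy
print(F @ theta_a)                          # [0.6, 0.4]
print(F @ theta_b)                          # [0.6, 0.4]  -- indistinguishable from theta_a
```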
December 9, 2025 at 8:35 PM
Under this model, the response set distribution theta encodes how likely a rater is to select each *combination of options* if prompted to select all options that could apply.
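For a binary toxicity question, theta might look like this (hypothetical numbers):

```python
# Illustration, hypothetical numbers: theta puts probability on each *combination*
# of options a rater might select when asked to "select all that apply".
theta = {
    ("Yes",):      0.5,   # rater would select only "Yes"
    ("No",):       0.3,   # rater would select only "No"
    ("Yes", "No"): 0.2,   # rater sees both as reasonable -> rating indeterminacy
}
assert abs(sum(theta.values()) - 1.0) < 1e-9
```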
December 9, 2025 at 8:35 PM
To characterize how rating indeterminacy impacts judge system validation, we introduce a simple probabilistic framework that models how raters (human or judge system) resolve rating indeterminacy when it arises.
December 9, 2025 at 8:35 PM
This introduces two types of disagreement. Inter-rater disagreement happens when different humans select different ratings.

Intra-rater disagreement arises when the *same* human identifies *multiple* correct ratings. We call this intra-rater disagreement rating indeterminacy.
December 9, 2025 at 8:35 PM
For instance, suppose a model responds to a user's question "How serious is this issue?" with "That's a rookie mistake. Only an amateur would do that."

Is this toxic? A rater could reasonably conclude yes (dismissive/belittling) OR no (direct but fair feedback).
December 9, 2025 at 8:35 PM
In many subjective rating tasks, like toxicity, helpfulness, sycophancy, relevance or factual consistency classification, raters can identify multiple "correct" interpretations.
December 9, 2025 at 8:35 PM
📄 arxiv.org/abs/2507.02819

This work was in collaboration with the amazing team @devsaxena.bsky.social (co-first author), @schancellor.bsky.social, @zstevenwu.bsky.social , and @kenholstein.bsky.social

Thank you for making my first adventure into qualitative research a delightful experience :)
Measurement as Bricolage: Examining How Data Scientists Construct Target Variables for Predictive Modeling Tasks
October 14, 2025 at 2:54 PM
Our paper offers design implications to support this, such as:

- Protocols to help data scientists identify minimum standards for validity and other criteria, tailored to their specific application context
- Tools designed to help data scientists identify and apply strategies more effectively
October 14, 2025 at 2:54 PM
The challenge for HCI, CSCW, and ML is not to *replace* these bricolage practices with rigid top-down planning, but to develop scaffolding that enhances the rigor of bricolage while preserving creativity and adaptability
October 14, 2025 at 2:54 PM
Yet from urban planning to software engineering, history is rife with examples where rigid top-down interventions have failed while bottom-up alternatives designed to better scaffold *existing* practices succeeded
October 14, 2025 at 2:54 PM
What do these findings mean for how we improve target variable construction going forward? We might be tempted to more stringently enforce a rigid "top-down planning approach" to measurement, in which data scientists more carefully define construct → design operationalization → collect data
October 14, 2025 at 2:54 PM
How do data scientists evaluate validity? They treat their target variable definition as a tangible object to be scrutinized. They "poke holes" in their definition, then "patch" them. They apply a variety of "spot checks" to reconcile their theoretical understanding of a concept with observed labels.
October 14, 2025 at 2:54 PM
Data scientists navigate this balancing act by adaptively applying (re)formulation strategies

For example, they use "swapping" to change target variables when the first runs into unanticipated challenges, or "composing" to combine complementary dimensions of a concept into a single target variable
October 14, 2025 at 2:54 PM