Connor Lawless
@lawlessopt.bsky.social
440 followers 300 following 25 posts
Stanford MS&E Postdoc | Human-Centered AI & OR | Prev: @CornellORIE, @MSFTResearch, @IBMResearch, @uoftmie 🌈
Reposted by Connor Lawless
lawlessopt.bsky.social
There's been a lot of work using LLMs to formulate MILPs, but how do we know that the formulations are correct?

Come chat with Haotian at poster W-515 to learn about our work on automatic equivalence checking for optimization models!
lawlessopt.bsky.social
Our empirical results highlight that existing pointwise approaches for recourse can fail to catch potential fixed predictions, whereas our approach (provably) succeeds!
lawlessopt.bsky.social
We model the problem as a mixed-integer quadratically constrained program that can be solved in seconds on real-world datasets.
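To make the flavor of this concrete, here's a minimal sketch of region-level auditing for a linear classifier - not the paper's actual MIQCP; the weights, bounds, and use of PuLP/CBC are all assumptions for illustration. The idea: maximize the classifier's score over a feature box; if even the maximum stays negative, no point in the region has recourse.

```python
# Minimal sketch, not the paper's MIQCP: certify that a whole feature box
# is a fixed-prediction region for a linear classifier w.x + b.
# Weights, bounds, and the integer feature are hypothetical.
import pulp

w = [1.5, -2.0, 0.5]           # hypothetical classifier weights
b = -6.0                       # hypothetical intercept
lo, hi = [0, 0, 0], [2, 1, 3]  # actionable bounds defining the region

prob = pulp.LpProblem("max_score_over_region", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lo[i], hi[i],
                     cat="Integer" if i == 2 else "Continuous")
     for i in range(3)]
prob += pulp.lpSum(w[i] * x[i] for i in range(3)) + b

prob.solve(pulp.PULP_CBC_CMD(msg=False))
best = pulp.value(prob.objective)

# If even the best achievable score is negative, no point in the
# region can flip the prediction: the region is confined.
print("fixed prediction region" if best < 0 else "recourse exists")
```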
lawlessopt.bsky.social
This paradigm lets us spot fixed predictions before deploying a model, audit public models for recourse (even without access to any data!), and produce interpretable summaries of regions with fixed predictions to help with debugging.
lawlessopt.bsky.social
In this paper, we introduce a new paradigm for algorithmic recourse that aims to certify recourse over an entire region of the feature space!
lawlessopt.bsky.social
Existing approaches to algorithmic recourse verify recourse on an individual-by-individual basis. This can cause model developers to miss fixed predictions, requires a lot of data, and makes recourse issues difficult to debug!
lawlessopt.bsky.social
Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Think credit applicants who can never get a loan approved, or young patients who can never get an organ transplant - no matter how sick they are!
lawlessopt.bsky.social
Excited to be chatting about our new paper "Understanding Fixed Predictions via Confined Regions" (joint work with @berkustun.bsky.social, Lily Weng, and Madeleine Udell) at #ICML2025!

🕐 Wed 16 Jul, 4:30–7:00 p.m. PDT
📍 East Exhibition Hall A-B #E-1104
🔗 arxiv.org/abs/2502.16380
Reposted by Connor Lawless
dransyhe.bsky.social
Our ✨spotlight paper✨ "Primal-Dual Neural Algorithmic Reasoning" is coming to #ICML2025!

We bring Neural Algorithmic Reasoning (NAR) to the NP-hard frontier 💥

🗓 Poster session: Tuesday 11:00–13:30
📍 East Exhibition Hall A-B #E-3003
🔗 openreview.net/pdf?id=iBpkz...

🧵
lawlessopt.bsky.social
This is my first time at an HCI conference - come say hi if you're around!
lawlessopt.bsky.social
In addition to a bunch of quantitative experiments, we ran a user study with a prototype system to inform design recommendations for future interactive optimization systems. Check out the paper for more details!
lawlessopt.bsky.social
We built a hybrid LLM and CP system that uses LLMs to translate user chat requests into operations on an underlying CP optimization model to schedule a new meeting. This gets the best of both worlds - the flexibility of LLMs with the decision-making power of optimization!
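A toy sketch of this pattern (the operation schema, the constraints, and the use of OR-Tools CP-SAT are illustrative assumptions, not the paper's system): the LLM's only job is to emit a structured operation, and the CP model does the actual scheduling.

```python
# Toy sketch of the LLM-edits-a-CP-model pattern (not the paper's system).
# The operation schema and constraints here are hypothetical.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
start = model.NewIntVar(9, 17, "meeting_start_hour")  # slot for a new meeting

def apply_operation(op: dict) -> None:
    """Apply an operation the LLM parsed from a chat message."""
    if op["type"] == "not_before":
        model.Add(start >= op["hour"])
    elif op["type"] == "not_after":
        model.Add(start <= op["hour"])

# Suppose the LLM translated "no meetings before 1pm" into:
apply_operation({"type": "not_before", "hour": 13})

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("schedule the meeting at hour", solver.Value(start))
```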
lawlessopt.bsky.social
Building optimization models in practice involves a ton of back-and-forth between optimization experts and domain experts to understand a decision-making problem. Can we enable domain experts to craft their own optimization models instead? We study this through the lens of scheduling.
lawlessopt.bsky.social
In case you're wondering why this thread looks suspiciously like a bunch of screenshots from a presentation...

I'll be chatting about this project at the INFORMS Computing Society Conference in the debate room at 3. Come say hi!
lawlessopt.bsky.social
More broadly, this is a first step towards a new paradigm where we exploit natural-language information for better algorithm configuration and design! There are tons of exciting open problems towards this goal (reach out if you're interested!).
lawlessopt.bsky.social
Surprisingly, our framework can produce high-performing configurations - outperforming solver defaults on a number of real-world problems without solving a single MILP!
lawlessopt.bsky.social
We introduce an LLM-based framework with some algorithmic bells and whistles (ensembling, solver-specific context, ...) to capitalize on LLM strengths while addressing these challenges.
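A hand-wavy sketch of the ensembling idea (the query_llm stub, the parameter names, and the majority-vote rule are placeholders, not the paper's actual pipeline): sample several LLM configuration suggestions and keep only the settings the samples agree on.

```python
# Hand-wavy sketch of LLM ensembling for solver configuration (not the
# paper's pipeline). query_llm and the parameter names are placeholders.
import json
from collections import Counter

def query_llm(prompt: str) -> str:
    """Stub for a chat-completion call; returns a canned answer here."""
    return '{"presolve": "aggressive", "cuts": "moderate"}'

prompt = (
    "You are configuring a MILP solver. Given this problem description, "
    "suggest parameter settings as JSON:\n"
    "set covering, ~10k binaries, dense constraints"  # solver-specific context
)

# Ensembling: draw several samples, then keep majority-vote settings.
samples = [json.loads(query_llm(prompt)) for _ in range(5)]
votes = Counter((k, v) for s in samples for k, v in s.items())
config = {k: v for (k, v), n in votes.items() if n > len(samples) // 2}
print(config)  # -> {'presolve': 'aggressive', 'cuts': 'moderate'}
```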
lawlessopt.bsky.social
Unfortunately, LLMs aren't a natural fit for configuration. Parameters are problem-specific, LLM outputs are stochastic, and frankly - it's a tough problem!
lawlessopt.bsky.social
Can we get better problem-specific solver configurations without the big computational price tag?

In this paper, we show that we can, thanks to large language models (LLMs)! Why LLMs? They can identify useful optimization structure and have a lot of built-in math-programming knowledge!
lawlessopt.bsky.social
MILP solvers ship with a ton of parameters that can have a massive impact on solver performance (over 70% for separator configuration alone!) but are notoriously difficult to set.

Existing approaches to algorithm configuration require solving a ton of MILPs, leading to days of compute.
Reposted by Connor Lawless
karenhao.bsky.social
For decades, the US government has painstakingly kept American science #1 globally—and every facet of American life has improved because of it. The internet? Flu shot? Ozempic? All grew out of federally-funded research. Now all that's being dismantled. 1/ www.technologyreview.com/2025/02/21/1...