Austin Tripp
@austinjtripp.bsky.social
1.4K followers 230 following 32 posts
(ML ∪ Bayesian optimization ∪ active learning) ∩ (drug discovery) Researcher @valenceai.bsky.social Details: austintripp.ca
austinjtripp.bsky.social
(I wrote this because this was something I inferred over time, and I think it's helpful to new reviewers to explain acceptance criteria more explicitly)
austinjtripp.bsky.social
That is, a paper should provide either a unique result or a unique idea (ideally both), and on top of that should have no correctness issues.

Full post is here: www.austintripp.ca/blog/2025-06...

Happy to hear comments/feedback! I know my approach is just 1 of many!
austinjtripp.bsky.social
NeurIPS reviews are due in 1 week 😱

If it's your first time reviewing (or if you don't feel totally confident about accept/reject), I recently wrote a blog post where I explain how *I* approach reviewing. Essentially:

accept = correct AND (result OR idea) [...]
Reposted by Austin Tripp
valenceai.bsky.social
1/ At Valence Labs, @recursionpharma.bsky.social's AI research engine, we’re focused on advancing drug discovery outcomes through cutting-edge computational methods

Today, we're excited to share our vision for building virtual cells, guided by the predict-explain-discover framework 🧵
austinjtripp.bsky.social
If none of this makes any sense to you but you think multi-objective optimization is relevant, check out my full post below (where I explain MOO in more detail too). Bonus: also has an interactive visualization (kudos to Claude 3.7)

www.austintripp.ca/blog/2025-05...
Chebyshev Scalarization Explained
I've been reading about multi-objective optimization recently. Many papers state limitations of "linear scalarization" approaches, mainly that it might not be able to represent all Pareto-optimal sol
www.austintripp.ca
austinjtripp.bsky.social
2. Unfortunately, maximizing the Chebyshev objective may produce points which are *not* Pareto optimal (so some filtering might be required)

...
austinjtripp.bsky.social
For anybody working on multi-objective optimization: I recently did a deep-dive on Chebyshev scalarization and wrote a blog post. Key findings:

1. Unlike linear scalarization, varying the weights of Chebyshev scalarization will find *all* points on the Pareto front (not just the convex part)

...
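The two findings above can be illustrated with a toy sketch (my own hypothetical example, not from the linked post): three candidate points in a two-objective maximization problem, where point C sits on the non-convex part of the Pareto front. Linear scalarization never selects C for any weight, while Chebyshev scalarization (minimizing the weighted max-distance to an ideal/reference point) recovers it.

```python
import numpy as np

# Hypothetical toy candidates for a 2-objective maximization problem.
# A and B are the extremes; C is Pareto optimal but lies on the
# non-convex part of the front, so no linear scalarization picks it.
points = np.array([
    [1.0, 0.0],   # A
    [0.0, 1.0],   # B
    [0.4, 0.4],   # C
])
ideal = points.max(axis=0)  # ideal/reference point z* = (1, 1)

def linear_pick(w):
    """Index of the point maximizing the linear scalarization w . f(x)."""
    return int(np.argmax(points @ w))

def chebyshev_pick(w):
    """Index of the point minimizing the weighted Chebyshev objective
    max_i w_i * (z*_i - f_i(x))."""
    return int(np.argmin(np.max(w * (ideal - points), axis=1)))

weights = [np.array([a, 1.0 - a]) for a in np.linspace(0.05, 0.95, 19)]
linear_found = {linear_pick(w) for w in weights}
cheby_found = {chebyshev_pick(w) for w in weights}

print(sorted(linear_found))  # linear scalarization only ever finds A and B
print(sorted(cheby_found))   # Chebyshev also recovers C (balanced weights)
```

Note the caveat from point 2 still applies: a Chebyshev minimizer is only guaranteed to be *weakly* Pareto optimal in general, so in larger problems the returned points may need a dominance-filtering pass.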
Reposted by Austin Tripp
valenceai.bsky.social
(1/3) The poster submission deadline for MoML 2025 has been extended to May 20th, 2025.

Don’t miss an opportunity to share your work at this year’s conference.

Submit here: portal.ml4dd.com/moml-2025-po...
austinjtripp.bsky.social
Really interesting essay: disagreements about AI existential risk might *really* be disagreements about the dual-use nature of future technologies (since this is the vector through which people think AI could cause extinction).
michaelnielsen.bsky.social
New essay exploring why experts so strongly disagree about existential risk from ASI, and why focusing on alignment as the primary goal may be a fundamental mistake
michaelnotebook.com/xriskbrief/i...
austinjtripp.bsky.social
(also, with ICML reviewing starting, this post will probably be the first in a series of posts about peer reviewing, stay tuned! 👀)
austinjtripp.bsky.social
Can anybody explain to me why so many ML papers study "offline model-based optimization"? This is essentially "1-shot optimization".

My main concern is: are there 1-shot optimization problems in real life? Papers mention "drug discovery (DD)" as an example, but 1-shot DD never happens, no? 😂
Reposted by Austin Tripp
maosbot.bsky.social
People who are masking are smart for two reasons:

1. They do not want to get brain damage
2. They are not getting brain damage
austinjtripp.bsky.social
Second note: there are a lot of more standard topics too, eg AI for science stuff, I'm just not posting that here.
austinjtripp.bsky.social
Also funny:

- Position: ML researchers should try to ensure their code is not a heaping pile of dogsh*t

- Position: ML researchers should learn basic math (I'm talking to you, people who don't add error bars to their plots!!)

- Position: focusing on meaningless benchmarks is stupid
austinjtripp.bsky.social
Other abstracts:

- Position: what if we started holding ML papers to actual standards?

- Position: reviewers should actually read the papers they are reviewing

- Position: reviewers should *at least try* to judge whether a paper's claims are true before accepting them
austinjtripp.bsky.social
(Note: titles are summarized/anonymized since I don't think I'm allowed to share)
austinjtripp.bsky.social
My bid screen for ICML position papers is basically:

- "Position: ML conference peer review is sh*t"

- "Position: Let's abolish conference reviewing"

- "Position: C'mon ML reviewers, surely we can do better than *this*"

Am I in a "review hate" echo chamber or is everybody else seeing this too? 😶
austinjtripp.bsky.social
Easy to get started on the antiviral challenge! I plan to submit some GP baselines from my PhD work (possibly with a collaborator).
polarishub.io
It only takes a few lines of code to get started with the @asapdiscovery.bsky.social x @omsf.io antiviral challenge!

In this short tutorial, we’ll show you how easy it is to login, load the challenge data, train a simple baseline model, and submit your predictions 🧵

youtu.be/KBZJFA_arAg
Tutorial: Getting Started with the Antiviral Challenge
YouTube video by PolarisHQ
youtu.be
Reposted by Austin Tripp
polarishub.io
🏁 The antiviral challenge is live! 🏁

Ready to test your skills on new data? Hosted in partnership with @asapdiscovery.bsky.social and @omsf.io, we've prepared detailed notebooks showcasing how to format your data and submit your solutions. 🧑‍💻
austinjtripp.bsky.social
A common issue I see in ML, both from ML "experts" and "users", is overly optimistic assumptions.

"experts" (people designing algs) usually assume the data is very simple

"users" (people using algs) usually assume that algorithms are more robust than they really are

Conclusion: always be careful!
Reposted by Austin Tripp
polarishub.io
📊 Imagining the Future of ML Evaluation in Drug Discovery

Our recent paper discussed the limitations of static leaderboards—they never tell the full story. What if we had a better and easier way of evaluating methods?

A vision for the future, in the latest blog 🧵

polarishub.io/blog/imagini...