Nick Boyd
nboyd.bsky.social
optimization, inverse problems, also proteins. ml at escalante. formerly: atomicai, xgenomes, broad, berkeley.
I’ve got a physical copy 😅
January 21, 2026 at 8:19 PM
JAX projects are more modular in my experience: it's sometimes really hard to get two torch projects to install in the same environment, let alone interoperate nicely

probably I did too much functional programming + Julia in my formative years
January 13, 2026 at 5:16 PM
speed/JIT/parallelization are really nice, but it's mostly a style thing for me. I find most large torch projects incomprehensible: lots of OO/imperative code, manual batching, etc., plus frameworks like lightning/omegaconf. I can't go back to life before vmap & other higher-order functions.
January 13, 2026 at 5:14 PM
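(The vmap point above, as a minimal sketch — not code from the thread; `predict` and its shapes are illustrative: you write the model for a single example and `jax.vmap` adds the batch dimension, instead of threading batch axes through the code by hand.)

```python
import jax
import jax.numpy as jnp

def predict(params, x):
    # single-example forward pass: x has shape (d,), no batch axis in sight
    w, b = params
    return jnp.tanh(w @ x + b)

params = (jnp.ones((3, 4)), jnp.zeros(3))
xs = jnp.ones((8, 4))  # a batch of 8 inputs

# map predict over the leading axis of xs, sharing params (in_axes=None)
batched = jax.vmap(predict, in_axes=(None, 0))
ys = batched(params, xs)
print(ys.shape)  # (8, 3)
```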
inspired by @delalamo.xyz
Done! Can be installed with `pip install graphrelax`
January 13, 2026 at 3:01 PM
TL;DR: this was a really fun exercise but now is probably a good time to bet against me on manifold.markets/Proteinbase/...
In the Nipah binder competition, which protein designer will have proteins that bind?
Nipah is one of the deadliest viruses in the world and considered one of the top future pandemic risks. We're hosting a protein design competition on Proteinbase where people can design binders agai...
manifold.markets
January 8, 2026 at 3:51 PM
To speculate wildly though: the Boltz2 confidence module seems really, really easy to please, even compared to a single AF2-multimer model. I wonder if this means hallucination is more likely to produce interfaces Boltz2 likes but AF2-SC (and likely physics 😅) does not.
December 20, 2025 at 3:52 PM
IMO it’s hard to draw conclusions from these data because each method has so many hyper-parameters. There isn't much work on AF3-gen hallucination; BindCraft is the result of some really careful and brilliant HPO. I was honestly surprised to get hits with Boltz2 for the work described in that post.
December 20, 2025 at 3:42 PM
I really enjoyed this paper, thank you! Did you consider AF2 initial guess or AF2 rank for AF2-ss with the native or MSA-predicted structure as input? I wonder if the improved native structure performance would be worth the likely increase in false positives
December 16, 2025 at 4:31 PM
not surprising this works, but it seems to improve Boltz2 IPTM for minibinders. another option is to simply use your favorite helix bundle or alpha solenoid as a scaffold or SS constraint
November 4, 2025 at 3:18 PM
Pretty interesting that, AFAICT, the filtering was done after the fact (so library 1 had no filtering). This could make it an excellent dataset for training/testing filters/rankers. Too bad it looks like the dataset is not public
September 29, 2025 at 7:40 PM