- Ph.D. from the Shirts Group at CU Boulder.
- Keen on compchem, deep learning & education.
- Rookie runner.
- Originally from Taiwan.
- Check my MD tutorials: https://weitsehsu.com/
With careful filtering, co-folding predictions can indeed teach ML about binding affinity.
👉 Read the full JCIM paper: pubs.acs.org/doi/full/10....
Work with Aniket Magarkar
@boehringerglobal.bsky.social and @philbiggin.bsky.social @ox.ac.uk
(6/6)
- AEV-PLIG beats Boltz-2 in 4 target classes in the FEP benchmark (loses 1, ties 6); both are competitive with FEP+ in some cases.
- ipLDDT & ligand pLDDT are also effective filters; pTM, PAE, PDE are not
- Boltz confidence seems to generalize better than its structure module
(5/6)
👉 Yes, with careful filtering. We see no performance difference between models trained on:
- experimental structures
- corresponding co-folding predictions
This holds across AEV-PLIG, EHIGN, and RF-Score.
(4/6)
👉 From reproducing HiQBind with Boltz-1x, a few simple heuristics are recommended for selecting high-quality co-folding augmentation data:
1️⃣ single-chain systems
2️⃣ Boltz confidence > 0.9
3️⃣ train–test similarity > 60%
(3/6)
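The three heuristics above amount to a simple filter over predicted complexes. A minimal sketch, assuming each prediction is a dict with hypothetical field names (`n_chains`, `boltz_confidence`, `train_test_similarity`) that are illustrative, not the paper's actual schema:

```python
def keep_for_training(record: dict) -> bool:
    """Apply the three heuristics: single-chain system,
    Boltz confidence > 0.9, train-test similarity > 60%."""
    return (
        record["n_chains"] == 1
        and record["boltz_confidence"] > 0.9
        and record["train_test_similarity"] > 0.60
    )

# Toy examples: only the first record passes all three filters.
predictions = [
    {"n_chains": 1, "boltz_confidence": 0.95, "train_test_similarity": 0.72},
    {"n_chains": 2, "boltz_confidence": 0.97, "train_test_similarity": 0.80},
    {"n_chains": 1, "boltz_confidence": 0.85, "train_test_similarity": 0.65},
]
kept = [p for p in predictions if keep_for_training(p)]
print(len(kept))  # → 1
```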
👉 Short answer: only if the added data are high-quality. Adding BindingNet v1 clearly improved performance, but v2 did not—despite being 10x larger—due to its substantially lower quality.
Quality beats quantity.
(2/6)