majhas.github.io
Here is one showing the electron cloud in two stages: (1) the electron density being learned during training and (2) the predicted ground state across conformations.
Self-refining training reduces total runtime by up to 4× compared to the baseline
and by up to 2× compared to the fully supervised approach!
Less need for large pre-generated datasets: training and sampling happen in parallel.
We simulate molecular dynamics using each model's energy predictions and evaluate accuracy along the trajectory.
Models trained with self-refinement stay accurate even far from the training distribution, while baselines quickly degrade.
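For concreteness, here is a minimal sketch of such a rollout, assuming a generic learned energy: forces come from autograd of the predicted energy and are integrated with velocity Verlet. The energy_model below is a toy harmonic stand-in, and names like DT and MASS are illustrative assumptions, not the actual evaluation setup.

import torch

# Toy stand-in for the learned energy E_theta(R); in the real evaluation this
# would be the trained amortized-DFT model's energy prediction.
def energy_model(R):
    return 0.5 * (R ** 2).sum()

def forces(R):
    # F = -dE/dR via autograd on the predicted energy
    R = R.detach().requires_grad_(True)
    return -torch.autograd.grad(energy_model(R), R)[0]

DT, MASS = 1e-2, 1.0          # assumed time step and atomic mass
R = torch.randn(5, 3)         # 5 "atoms" in 3D
V = torch.zeros_like(R)

trajectory = []
for _ in range(1000):         # velocity-Verlet integration
    V = V + 0.5 * DT * forces(R) / MASS
    R = R + DT * V
    V = V + 0.5 * DT * forces(R) / MASS
    trajectory.append(R.clone())   # accuracy is evaluated along this trajectory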
Our method achieves low energy error with as few as 25 conformations.
With 10× less data, it matches or outperforms fully supervised baselines.
This is especially important in settings where labeled data is expensive or unavailable.
1. Use the current model to sample conformations via MCMC
2. Use those conformations to minimize the energy and update the model
Everything runs asynchronously, with no need for labeled data and only a minimal number of conformations from a dataset!
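A minimal, hedged sketch of this loop (written sequentially for clarity; the actual method runs the sampler and the trainer asynchronously). The quadratic toy_energy stands in for the DFT energy functional E(Γ_θ(R); R), and the MLP, temperature TAU, and step size are illustrative assumptions.

import torch

torch.manual_seed(0)
DIM, TAU, STEP = 3, 0.5, 0.2            # toy conformation dim, temperature, MCMC step size

def toy_energy(gamma, R):
    # Stand-in for the variational energy E(Gamma; R): minimized over gamma at
    # gamma = sin(R); real amortized DFT evaluates the electronic energy functional here.
    return ((gamma - torch.sin(R)) ** 2).sum(-1) + (R ** 2).sum(-1)

model = torch.nn.Sequential(             # gamma_theta(R): the amortized prediction
    torch.nn.Linear(DIM, 64), torch.nn.SiLU(), torch.nn.Linear(64, DIM))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

R = torch.randn(128, DIM)                # small seed set of conformations

for step in range(2000):
    # (1) Sample conformations: random-walk Metropolis targeting exp(-E_theta(R)/TAU),
    #     using the current model's own energies.
    with torch.no_grad():
        prop = R + STEP * torch.randn_like(R)
        e_cur = toy_energy(model(R), R)
        e_new = toy_energy(model(prop), prop)
        accept = torch.rand(len(R)) < torch.exp((e_cur - e_new) / TAU)
        R = torch.where(accept[:, None], prop, R)

    # (2) Update the model: minimize the variational energy on the sampled
    #     conformations w.r.t. theta -- no labeled ground-state data required.
    loss = toy_energy(model(R), R).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()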
Jointly minimizing this bound w.r.t. θ and q yields
✅ A model that predicts the ground-state solutions
✅ Samples that match the ground-truth density
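For intuition only, the objective can be pictured as a free-energy-style bound of the following generic form (a sketch with assumed notation: Γ_θ(R) is the predicted electronic solution, τ a temperature, q the sampling distribution; the exact bound is given in the paper):

$$\min_{\theta,\,q}\;\mathbb{E}_{R\sim q}\!\left[\,E\big(\Gamma_\theta(R);\,R\big)\;+\;\tau\,\log q(R)\,\right]$$

Since E(Γ_θ(R); R) upper-bounds the ground-state energy at every geometry (the variational principle), minimizing over θ pushes the model toward ground-state solutions, while the entropy-regularized minimization over q pushes q toward the corresponding Boltzmann distribution.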
Boltzmann distribution
This isn't a typical ML setup because
❌ No samples from the density: can't train a generative model
❌ No density: can't sample via Monte Carlo!
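For reference, the target is the standard Boltzmann distribution over conformations R, with energy E(R), temperature T, and an intractable normalizer Z:

$$p(R)\;=\;\frac{1}{Z}\,\exp\!\left(-\frac{E(R)}{k_B T}\right),\qquad Z\;=\;\int \exp\!\left(-\frac{E(R)}{k_B T}\right)\mathrm{d}R$$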
This presents a bottleneck for MD/sampling.
We want to amortize this: train a model that generalizes across geometries R.
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📄 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...
How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?
We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!
🧵 1/7
Introducing SuperDiff 🦹, a principled method for efficiently combining multiple pre-trained diffusion models solely during inference!
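As a rough illustration of the setting only (not the SuperDiff rule itself): two frozen pre-trained score models are queried at inference and their outputs combined at every sampling step. The fixed-weight average and the Langevin-style sampler below are naive placeholders; SuperDiff's actual combination reweights the models on the fly (see the paper).

import torch

def combined_score(x, score_models, weights):
    # Naive placeholder: fixed convex combination of the frozen models' scores.
    return sum(w * s(x) for w, s in zip(weights, score_models))

def langevin_sample(score_models, weights, x, n_steps=500, eps=1e-2):
    # Unadjusted Langevin steps driven by the combined score; a stand-in for a
    # full reverse-diffusion sampler.
    for _ in range(n_steps):
        x = (x + eps * combined_score(x, score_models, weights)
             + (2 * eps) ** 0.5 * torch.randn_like(x))
    return x

# Toy "pretrained" models: scores of unit Gaussians centred at +2 and -2.
models = [lambda x: -(x - 2.0), lambda x: -(x + 2.0)]
samples = langevin_sample(models, weights=[0.5, 0.5], x=torch.randn(256, 1))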
🌐 website: sites.google.com/view/fpiwork...
📥 Call for papers: sites.google.com/view/fpiwork...
more details in the thread below 👇🧵
Come see us at @neuripsconf.bsky.social!
Paper: arxiv.org/abs/2410.22388
GitHub: github.com/shenoynikhil...