Max Zhdanov
@maxxxzdn.bsky.social
530 followers 230 following 86 posts
PhD candidate at AMLab with Max Welling and Jan-Willem van de Meent. Research in physics-inspired and geometric deep learning.
maxxxzdn.bsky.social
that's a wrap!

you can find the code here: github.com/maxxxzdn/erwin

and here is a cute visualisation of Ball Tree building:
maxxxzdn.bsky.social
To scale Erwin further to gigantic industrial datasets, we combine Transolver and Erwin by applying ball tree attention over latent tokens.

The key insight is that by using Erwin, we can afford larger bottleneck sizes while maintaining efficiency.

Stay tuned for updates!
maxxxzdn.bsky.social
To improve the processing of long-range information while keeping the cost sub-quadratic, we combine the ball tree with Native Sparse Attention (NSA) - the two align naturally with each other.

Accepted to the Long-Context Foundation Models workshop at ICML 2025!

paper: arxiv.org/abs/2506.12541
maxxxzdn.bsky.social
There are many possible directions for building on this work. I will highlight two that I have been working on together with my students.
maxxxzdn.bsky.social
On top of the initial experiments in the first version of the paper, we include two more PDE-related benchmarks, where Erwin shows strong performance, achieving state-of-the-art on multiple tasks.
maxxxzdn.bsky.social
Original experiments: cosmology, molecular dynamics and turbulent fluid dynamics (EAGLE).
maxxxzdn.bsky.social
The ball tree representation comes with a huge advantage: a contiguous memory layout, which makes all operations extremely simple and efficient.
maxxxzdn.bsky.social
Implemented naively, however, the model cannot exchange information between partitions - and is thus unable to capture global interactions.

To overcome this, we adapt the idea from the Swin Transformer, but instead of shifting windows, we rotate trees.
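
A minimal PyTorch sketch of the alternation idea (not the paper's implementation): build one partition of the points on the original coordinates and one on rotated coordinates, then let successive attention layers alternate between the two, so information crosses ball boundaries over depth. ball_partition and rotate below are toy stand-ins.

import math
import torch

def ball_partition(points: torch.Tensor, ball_size: int) -> torch.Tensor:
    # Toy stand-in for ball tree leaves: sort along one axis and group
    # consecutive points; the real tree splits recursively at the median.
    order = torch.argsort(points[:, 0])
    return order.view(-1, ball_size)  # (num_balls, ball_size) point indices

def rotate(points: torch.Tensor, angle: float = 0.5) -> torch.Tensor:
    # Rotate the first two coordinates - a stand-in for the rotation
    # used to build the second ("rotated") tree.
    c, s = math.cos(angle), math.sin(angle)
    R = torch.tensor([[c, -s], [s, c]])
    out = points.clone()
    out[:, :2] = points[:, :2] @ R.T
    return out

points = torch.randn(4096, 3)
partitions = [ball_partition(points, 256), ball_partition(rotate(points), 256)]
for layer in range(4):
    balls = partitions[layer % 2]  # even layers: original tree, odd layers: rotated tree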
maxxxzdn.bsky.social
For that reason, we impose a regular structure onto irregular data via ball trees.

That allows us to restrict attention computation to local partitions, reducing the cost to linear.
maxxxzdn.bsky.social
Erwin follows the second approach, as we believe that working at full resolution and in the original representation should be done for as long as possible - let the data guide the model in deciding which information to use and which to discard.
maxxxzdn.bsky.social
In the context of modeling large physical systems, this is critical, as the largest applications can include millions of points, which makes full attention unrealistic.

There are two ways: either change the data representation or the data structure.
maxxxzdn.bsky.social
When it comes to irregular data, however, there is no natural ordering of points. Standard sparse attention mechanisms therefore break down, as there is no guarantee that the locality they rely on still exists.
maxxxzdn.bsky.social
We start with sparse (sub-quadratic) attention, which comes in different flavors but always follows the same idea - exploiting the regular structure of the data to model interactions between tokens.
maxxxzdn.bsky.social
🤹 New blog post!

I write about our recent work on using hierarchical trees to enable sparse attention over irregular data (point clouds, meshes) - Erwin Transformer, accepted to ICML 2025

blog: maxxxzdn.github.io/blog/erwin/
paper: arxiv.org/abs/2502.17019

Compressed version in the thread below:
maxxxzdn.bsky.social
Can only speak for my ICML reviewing batch, but the hack of putting scary, convoluted and wrong math still works.
maxxxzdn.bsky.social
And that is a wrap! 🫔

We believe models like Erwin will enable the application of deep learning to physical tasks that involve large particle systems and where runtime was previously a bottleneck.

for details, see the preprint: arxiv.org/abs/2502.17019
maxxxzdn.bsky.social
Bonus: to minimize the computational overhead of ball tree construction, we develop a fast, parallelized implementation in C++.

15/N
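
For illustration only, here is a plain NumPy sketch of the median-split recursion behind ball tree building (the actual implementation is the parallelized C++ one mentioned above; leaf_size and the axis choice here are assumptions):

import numpy as np

def build_ball_tree(points: np.ndarray, idx: np.ndarray, leaf_size: int = 2) -> np.ndarray:
    # Recursively split along the axis of largest spread at the median so that
    # every ball ends up owning a contiguous range of the reordered points.
    if len(idx) <= leaf_size:
        return idx
    axis = int(np.argmax(points[idx].max(axis=0) - points[idx].min(axis=0)))
    order = idx[np.argsort(points[idx, axis])]
    mid = len(order) // 2
    return np.concatenate([
        build_ball_tree(points, order[:mid], leaf_size),
        build_ball_tree(points, order[mid:], leaf_size),
    ])

points = np.random.randn(16, 3)
leaf_order = build_ball_tree(points, np.arange(16))  # contiguous leaf ordering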
maxxxzdn.bsky.social
On the large-scale EAGLE benchmark (1M meshes, each with ~3500 nodes), we achieve SOTA performance in simulating unsteady fluid dynamics.

14/N
maxxxzdn.bsky.social
On the molecular dynamics task, Erwin pushes the Pareto frontier in performance vs runtime, proving to be a viable alternative to message-passing-based architectures.

13/N
maxxxzdn.bsky.social
The cosmological data exhibits long-range dependencies. As each point cloud is relatively large (5,000 points), this poses a challenge for message-passing models. Erwin, on the other hand, is able to capture those effects.

12/N
maxxxzdn.bsky.social
We validated Erwin's performance on a variety of large-scale tasks, including:
- cosmology (5k nodes per data point)
- molecular dynamics (~1k nodes per data point)
- turbulent fluid dynamics (~3.5k nodes per data point)

11/N
maxxxzdn.bsky.social
Due to the simplicity of implementation, Erwin is blazing fast. For a batch of 16 point clouds, 4096 points each, it only takes ~30 ms to compute the forward pass!

10/N
maxxxzdn.bsky.social
The ball tree is stored in memory contiguously - at each level of the tree, points in the same ball are stored next to each other.
This property is critical and allows us to implement the key operations described above simply via .view() or .mean()!

9/N
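
A minimal PyTorch sketch of what this buys us, assuming node features are already stored in ball tree order (sizes are made up):

import torch

ball_size, dim = 4, 32
x = torch.randn(4096, dim)                          # node features in tree order

balls = x.view(-1, ball_size, dim)                  # (num_balls, ball_size, dim), no gather needed
coarse = balls.mean(dim=1)                          # one feature per ball (coarsening)
refined = coarse.repeat_interleave(ball_size, 0)    # broadcast back to nodes (refinement)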
maxxxzdn.bsky.social
As Ball Tree Attention is local, we progressively coarsen and then refine the ball tree while keeping the ball size for attention fixed, following a U-Net-like architecture.
This allows us to learn multi-scale features and effectively increase the model's receptive field.

8/N
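
A rough sketch of that U-Net-like structure; the placeholder ball_attention, the 2x pooling factor, and the function names are assumptions, not the paper's exact operators:

import torch

def ball_attention(x: torch.Tensor, ball_size: int) -> torch.Tensor:
    return x  # placeholder for Ball Tree Attention within fixed-size balls

def coarsen_refine(x: torch.Tensor, ball_size: int = 64, levels: int = 2) -> torch.Tensor:
    skips = []
    for _ in range(levels):                          # coarsening path
        x = ball_attention(x, ball_size)
        skips.append(x)
        x = x.view(-1, 2, x.shape[-1]).mean(1)       # merge sibling balls (2x coarser)
    x = ball_attention(x, ball_size)                 # bottleneck
    for skip in reversed(skips):                     # refinement path
        x = x.repeat_interleave(2, dim=0) + skip     # un-pool + skip connection
        x = ball_attention(x, ball_size)
    return x

out = coarsen_refine(torch.randn(4096, 32))          # same ball size at every level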
maxxxzdn.bsky.social
The main idea of the paper is to compute attention within the ball tree partitions.
Once the tree is built, one can choose the level of the tree and compute attention (Ball Tree Attention, BTA) within the balls in parallel.

7/N
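
A self-contained PyTorch sketch of that parallel per-ball attention (shapes and the projection are illustrative, not the paper's code). The cost is num_balls * ball_size^2 = N * ball_size, i.e. linear in N:

import torch
import torch.nn.functional as F

def ball_tree_attention(x: torch.Tensor, ball_size: int, num_heads: int = 4) -> torch.Tensor:
    # x: (N, dim) features already ordered by the ball tree.
    n, dim = x.shape
    qkv = torch.nn.Linear(dim, 3 * dim)(x)          # toy projection (a module in practice)
    qkv = qkv.view(n // ball_size, ball_size, 3, num_heads, dim // num_heads)
    q, k, v = qkv.permute(2, 0, 3, 1, 4)            # each: (num_balls, heads, ball_size, head_dim)
    out = F.scaled_dot_product_attention(q, k, v)   # attention within each ball, in parallel
    return out.transpose(1, 2).reshape(n, dim)

x = torch.randn(4096, 64)
y = ball_tree_attention(x, ball_size=128)           # 32 balls processed in one batched call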