Hamed Shirzad
@hamedshirzad.bsky.social
PhD student in Computer Science at UBC | Exploring Machine Learning on Graphs

https://www.hamedshirzad.com/
🔗 Read their paper: lnkd.in/gECWw_tR
🧵 Or check out a great summary thread from Yi (Joshua): bsky.app/profile/josh...
If you're attending ICLR, stop by their poster and oral presentation:
📍 Poster Hall 3+2B #376 on Fri, Apr 25 at 15:00
🎤 Oral in Session 6A on Sat, Apr 26 at 16:30
April 23, 2025 at 12:07 PM
There’s more in the paper: arxiv.org/abs/2411.16278 (Appendix F)

We’d love to see anyone do more analysis of these things! To get you started, our scores are available from the "Attention Score Analysis" notebook in our repo:
github.com/hamed1375/Sp...
Sp_Exphormer/Attention Score Analysis.ipynb at main · hamed1375/Sp_Exphormer
December 12, 2024 at 12:33 AM
How much do nodes attend to graph edges, versus expander edges or self-loops?

On the Photo dataset (homophilic), attention mainly comes from graph edges. On the Actor dataset (heterophilic), self-loops and expander edges play a major role.
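A minimal sketch of how such a breakdown can be computed (not the repo's code; the per-edge score and edge-type arrays are assumed inputs):

```python
# Minimal sketch (illustrative only): per-edge attention scores plus an edge-type label.
import numpy as np

def attention_share_by_type(dst, scores, edge_type, num_nodes):
    """dst[i]: attending node of edge i; scores[i]: its attention score;
    edge_type[i]: 'graph', 'expander', or 'self'. Returns the average share per type."""
    totals = np.zeros(num_nodes)
    np.add.at(totals, dst, scores)                       # each node's total attention mass
    shares = {}
    for t in ('graph', 'expander', 'self'):
        mask = edge_type == t
        per_node = np.zeros(num_nodes)
        np.add.at(per_node, dst[mask], scores[mask])
        shares[t] = float(np.mean(per_node / np.clip(totals, 1e-12, None)))
    return shares
```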
December 12, 2024 at 12:32 AM
Q. Is selecting the top few attention scores effective?

A. The top-k scores rarely account for most of a node's total attention, unless the graph has a very small average degree. Results are consistent for both dim=4 and dim=64.
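A sketch of the coverage metric (illustrative only; assumes each node's attention scores over its neighbors are available):

```python
# Illustrative sketch: how much of each node's attention sum do its top-k scores cover?
import numpy as np

def mean_topk_coverage(scores_per_node, k):
    """scores_per_node: iterable of 1-D score arrays (one per node).
    Returns the average fraction of a node's attention sum covered by its top-k scores."""
    cover = []
    for s in scores_per_node:
        s = np.asarray(s, dtype=float)
        total = s.sum()
        if total > 0:
            cover.append(np.sort(s)[::-1][:k].sum() / total)
    return float(np.mean(cover))
```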
December 12, 2024 at 12:31 AM
Q. How similar are attention scores across layers?

A. In all experiments, the first layer's attention scores differed substantially from the rest, while the remaining layers were very consistent with one another.
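One simple way to quantify this, as a sketch (assumes a fixed edge list with per-layer scores stacked into one array):

```python
# Illustrative sketch: cosine similarity between layers' per-edge attention score vectors.
import numpy as np

def layer_similarity(layer_scores):
    """layer_scores: (num_layers, num_edges) array over a fixed edge list.
    Returns the (L, L) cosine-similarity matrix between layers."""
    A = np.asarray(layer_scores, dtype=float)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    return A @ A.T
```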
December 12, 2024 at 12:31 AM
Q. How do attention scores change across layers?

A. The first layer consistently shows much higher entropy (more uniform attention across nodes), while deeper layers have sharper attention scores.
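A sketch of the entropy measure (illustrative; assumes each node's attention is given as a probability vector over its neighbors):

```python
# Illustrative sketch: mean attention entropy per layer.
# Higher values mean more uniform attention; lower values mean sharper attention.
import numpy as np

def mean_attention_entropy(attn_per_node):
    """attn_per_node: iterable of 1-D probability vectors (one per node, summing to 1)."""
    ents = []
    for p in attn_per_node:
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        ents.append(float(-(p * np.log(p)).sum()))
    return float(np.mean(ents))
```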
December 12, 2024 at 12:30 AM
We trained 100 single-head Transformers (masked for graph edges w/ and w/o expander graphs + self-loops) on Photo & Actor, with hidden dims 4 to 64.

Q. Are attention scores consistent across widths?

A. The distributions of where a node attends are pretty consistent.
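For reference, a hypothetical minimal version of edge-masked single-head attention of this kind (a simplification, not the repo's implementation):

```python
# Hypothetical sketch of single-head attention masked to a given edge set.
import torch

def masked_attention(x, edge_index, Wq, Wk):
    """x: (N, d) node features; edge_index: (2, E) [src, dst] pairs covering graph edges,
    expander edges, and self-loops. Returns one normalized attention score per allowed edge."""
    src, dst = edge_index
    q, k = x @ Wq, x @ Wk
    logits = (q[dst] * k[src]).sum(-1) / q.shape[-1] ** 0.5   # (E,) one logit per edge
    logits = logits - logits.max()                            # shared shift; softmax is unchanged
    num = logits.exp()
    denom = torch.zeros(x.shape[0], dtype=num.dtype).index_add_(0, dst, num)
    return num / denom[dst]                                   # softmax over each dst node's edges
```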
December 12, 2024 at 12:30 AM
🚨 Come chat with us at NeurIPS next week! 🚨
🗓️ Thursday, Dec 12
⏰ 11:00 AM–2:00 PM PST
📍 East Exhibit Hall A-C, Poster #3010
📄 Paper: arxiv.org/abs/2411.16278
💻 Code: github.com/hamed1375/Sp...
See you there! 🙌✨
[13/13]
December 5, 2024 at 8:20 PM
For more on the compression results, see our workshop paper “A Theory for Compressibility of Graph Transformers for Transductive Learning”; there will be a thread on this too!
Workshop paper link: arxiv.org/abs/2411.13028
December 5, 2024 at 8:20 PM
We have theoretical guarantees, too: for compression (a much smaller network can estimate the attention scores well), and for sparsification (it works even from the approximate attention matrix given by the narrow net).
[11/13]
December 5, 2024 at 8:20 PM
Downsampling the edges and using regular-degree computations can make this even faster and more memory-efficient than a GCN!
December 5, 2024 at 8:19 PM
But we can scale to graphs Exphormer couldn’t even dream of:
December 5, 2024 at 8:19 PM
How much accuracy do we lose compared to an Exphormer with many more edges (and way more memory usage)? Not much.
December 5, 2024 at 8:18 PM
Now, with sparse (meaningful) edges, k-hop sampling is feasible again even across several layers. Memory and runtime can be traded off by choosing how many “core nodes” we expand from.
[7/13]
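A sketch of the layer-by-layer expansion from core nodes (hypothetical helper; the adjacency format is assumed):

```python
# Hypothetical batching helper. Assumes one dict per layer mapping each node to the
# (already sparsified) neighbors it attends to, ordered from the output layer inward.
def nodes_needed(core_nodes, adj_per_layer):
    """Which nodes' features does a batch built around `core_nodes` actually need?"""
    needed = set(core_nodes)
    for adj in adj_per_layer:            # walk from the output layer back toward the input
        extra = set()
        for v in needed:
            extra.update(adj.get(v, ()))
        needed |= extra
    return needed

# Fewer core nodes per batch -> smaller `needed` set (less memory), but more batches (more time).
```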
December 5, 2024 at 8:18 PM
By sampling a regular (fixed) degree per node, the graph computations become much more efficient: a simple batched matmul instead of a scatter operation. Naive sampling implementations can also be really slow, but reservoir sampling makes resampling the edges every epoch no big deal.
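Sketches of both tricks (illustrative simplifications; the actual edge sampling is weighted by attention scores, whereas Algorithm R below is the plain uniform version):

```python
# Illustrative sketches, not the repo's implementation.
import random
import torch

def regular_degree_attention(q, keys, nbr_idx):
    """q: (N, d) queries; keys: (N, d) keys; nbr_idx: (N, k) sampled neighbor ids.
    With a fixed degree k, attention is a dense batched matmul + row softmax -- no scatter."""
    K = keys[nbr_idx]                                        # (N, k, d) gathered neighbor keys
    logits = torch.einsum('nd,nkd->nk', q, K) / q.shape[-1] ** 0.5
    return torch.softmax(logits, dim=-1)                     # (N, k) attention per node

def reservoir_sample(stream, k, rng=random):
    """Algorithm R: keep k items uniformly from a stream in one pass with O(k) memory,
    so redrawing each node's neighbor sample every epoch stays cheap."""
    reservoir = []
    for i, v in enumerate(stream):
        if i < k:
            reservoir.append(v)
        else:
            j = rng.randint(0, i)                            # inclusive bounds
            if j < k:
                reservoir[j] = v
    return reservoir
```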
December 5, 2024 at 8:17 PM
Now, we extract the attention scores from the initial network and use them to sample a sparse attention graph for a bigger model. Attention scores vary across layers, but no problem: we sample neighbors separately per layer. Memory usage plummets!
[5/13]
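A sketch of the per-layer sampling step (hypothetical helper; assumes the narrow net's scores are available per (layer, node) and that weighted sampling without replacement is acceptable here):

```python
# Hypothetical helper: sample a sparse per-layer attention graph from proxy scores.
import numpy as np

def sample_sparse_graph(proxy_scores, k, seed=0):
    """proxy_scores: dict (layer, node) -> (neighbor_ids, scores) from the narrow network.
    Returns dict (layer, node) -> up to k neighbors, sampled proportional to their score."""
    rng = np.random.default_rng(seed)
    sampled = {}
    for key, (nbrs, s) in proxy_scores.items():
        nbrs = np.asarray(nbrs)
        p = np.asarray(s, dtype=float)
        p = p / p.sum()
        m = min(k, len(nbrs))
        sampled[key] = rng.choice(nbrs, size=m, replace=False, p=p)
    return sampled
```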
December 5, 2024 at 8:16 PM
But not all the edges matter – if we know which won’t be used, we can just drop them and get a sparser graph/smaller k-hop neighborhoods. It turns out a small network (same arch, tiny hidden dim, minor tweaks) can be a really good proxy for which edges will matter!
[4/13]
December 5, 2024 at 8:16 PM
For very large graphs, though, even very simple GNNs need batching. One option is k-hop neighborhood selection, but expander graphs are specifically designed so that k-hop neighborhoods are big. Other batching approaches can drop important edges and kill the advantages of graph transformers.
[3/13]
December 5, 2024 at 8:15 PM