adrianhill.de
Adrian Hill
@adrianhill.de
PhD student at @bifold.berlin, Machine Learning Group, TU Berlin.
Automatic Differentiation, Explainable AI and #JuliaLang.
Open source person: adrianhill.de/projects
Congratulations! 🥳
January 7, 2026 at 4:16 PM
That @void.comind.network sticker is awesome!
December 4, 2025 at 7:31 PM
You're in luck: we offer a PyTorch implementation!
December 3, 2025 at 7:47 PM
The paper, poster, and code (in @julialang.org and PyTorch) can be found here:
neurips.cc/virtual/2025...

Joint work with Neal McKee, Johannes Maeß, Stefan Blücher and Klaus-Robert Müller, @bifold.berlin.
NeurIPS Poster: Smoothed Differentiation Efficiently Mitigates Shattered Gradients in Explanations
NeurIPS 2025
neurips.cc
December 3, 2025 at 7:36 PM
I'm excited to see whether our idea translates to general MC integration over Jacobians and gradients outside of XAI. Please don't hesitate to talk to us if you have ideas for applications!
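For anyone who wants to experiment, here's a rough sketch of the naive Monte Carlo baseline over Jacobians in PyTorch (the function f, noise scale sigma, and sample count are illustrative placeholders, not our implementation):

import torch

# Naive Monte Carlo estimate of a Gaussian-smoothed Jacobian,
# E[J_f(x + eps)] with eps ~ N(0, sigma^2 I).
def smoothed_jacobian(f, x, sigma=0.1, n_samples=32):
    samples = [
        torch.autograd.functional.jacobian(f, x + sigma * torch.randn_like(x))
        for _ in range(n_samples)
    ]
    return torch.stack(samples).mean(dim=0)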
December 3, 2025 at 7:36 PM
Our proposed SmoothDiff method (see first post) offers a bias-variance tradeoff: by neglecting cross-covariances, it improves both sample efficiency and computational speed over naive Monte Carlo integration.
December 3, 2025 at 7:36 PM
To reduce the influence of this white noise, we want to apply a Gaussian convolution (in feature space) as a low-pass filter.
Unfortunately, this convolution is computationally infeasible in high dimensions. Approximating it with naive Monte Carlo integration yields the popular SmoothGrad method.
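Naive SmoothGrad fits in a few lines of PyTorch (a rough sketch; model, target_class, sigma, and n_samples are placeholders):

import torch

# SmoothGrad: Monte Carlo estimate of the Gaussian-smoothed gradient,
# E[grad_x f(x + eps)] with eps ~ N(0, sigma^2 I).
def smoothgrad(model, x, target_class, sigma=0.1, n_samples=50):
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        model(noisy)[..., target_class].sum().backward()
        grads += noisy.grad
    return grads / n_samples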
December 3, 2025 at 7:36 PM
Input feature attributions are a popular tool to explain ML models, increasing their trustworthiness. In this work, we are interested in the gradient of a given output w.r.t. its input features.
Unfortunately, gradients of deep NNs resemble white noise, rendering them uninformative.
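For anyone new to this: the plain gradient attribution is a single backward pass in PyTorch (sketch; model and target_class are placeholders):

import torch

# Plain input-gradient ("saliency") attribution: gradient of one output
# logit w.r.t. the input features.
def input_gradient(model, x, target_class):
    x = x.clone().requires_grad_(True)
    model(x)[..., target_class].sum().backward()
    return x.grad  # same shape as x; noisy for deep NNs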
December 3, 2025 at 7:36 PM
An even scarier thought is that similar systems of checks and balances are likely silently failing across all of society. Academia is just uniquely transparent, making it look like patient zero.
November 28, 2025 at 7:36 AM
Technical issues aside, it seems that the signal-to-noise ratio on both sides of the review process has reached a critical low point. And the academic community hasn’t yet figured out how to deal with this.
November 28, 2025 at 7:27 AM
Judging by the code, I agree.
However, if I had to create these diagrams from scratch, I think I would prioritize Typst's error messages and almost instant compilation over TikZ's more concise syntax.
November 24, 2025 at 1:55 PM
There is also a TikZ-like package called CeTZ, but I haven't found an excuse to try it yet:
github.com/cetz-package...
November 24, 2025 at 12:57 PM
Yeah, that's fair. Some new formatting tweaks have recently been made available (e.g. bsky.app/profile/adri...), but vertical justification (especially with inline maths) still visibly lags behind TeX. I found this blog post by the Typst team to be both insightful and promising:
TeX and Typst: Layout Models | Laurenz's Blog
An exploration of the layout models of TeX and Typst.
laurmaedje.github.io
November 24, 2025 at 12:57 PM
Which features do you feel are missing?
November 24, 2025 at 10:30 AM
Wow, I had no idea that it had been around for so long! The documentation is excellent and a joy to read. Thank you for your efforts!
November 4, 2025 at 3:41 PM
The result of my first attempt:
If you are at #ICLR2025 and want to chat about automatic sparse differentiation (or just grab a sticker), come see me at poster 471!
November 3, 2025 at 8:52 PM
Adding this to your document should get you started:

#set par(justify: true, justification-limits: (
  // Let word spacing shrink to ~67% or stretch to 150% of the default.
  spacing: (min: 66.67% + 0pt, max: 150% + 0pt),
  // Allow slight letter-spacing (tracking) adjustments.
  tracking: (min: -0.01em, max: 0.02em),
))
November 3, 2025 at 3:41 PM