Krishna Balasubramanian
@krizna.bsky.social
310 followers 180 following 19 posts
https://sites.google.com/view/kriznakumar/ Associate professor at @ucdavis #machinelearning #deeplearning #probability #statistics #optimization #sampling
Reposted by Krishna Balasubramanian
arxiv-stat-ml.bsky.social
Krishnakumar Balasubramanian, Nathan Ross
Finite-Dimensional Gaussian Approximation for Deep Neural Networks: Universality in Random Weights
https://arxiv.org/abs/2507.12686
krizna.bsky.social
New theory for simulated tempering, via a restricted spectral gap decomposition, with arbitrary local MCMC samplers under multi-modality.

When applied to the simulated tempering Metropolis-Hastings algorithm for sampling from Gaussian mixture models, we obtain high-accuracy TV guarantees.
Restricted Spectral Gap Decomposition for Simulated Tempering Targeting Mixture Distributions
Simulated tempering is a widely used strategy for sampling from multimodal distributions. In this paper, we consider simulated tempering combined with an arbitrary local Markov chain Monte Carlo sampl...
arxiv.org
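For intuition, a toy sketch of simulated tempering with a random-walk Metropolis local sampler on a two-mode Gaussian mixture (the target, temperature ladder, step size, and equal level weights below are illustrative assumptions, not the paper's setup):

import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized log-density of a two-mode Gaussian mixture (illustrative).
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

betas = [1.0, 0.5, 0.25, 0.1]   # inverse temperatures; beta = 1 is the target level

x, k = 0.0, 0
cold_samples = []
for _ in range(50_000):
    # Local random-walk Metropolis move at the current temperature.
    prop = x + rng.normal(scale=1.0)
    if np.log(rng.uniform()) < betas[k] * (log_target(prop) - log_target(x)):
        x = prop
    # Temperature-swap move to a neighboring level (assumes equal level weights,
    # i.e., it ignores the per-level normalizing constants).
    k_new = k + rng.choice([-1, 1])
    if 0 <= k_new < len(betas):
        if np.log(rng.uniform()) < (betas[k_new] - betas[k]) * log_target(x):
            k = k_new
    if k == 0:
        cold_samples.append(x)

The hot levels flatten the modes so the local sampler can cross between them; samples are kept only when the chain sits at the coldest level.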
krizna.bsky.social
We implement these oracles using heat-kernel truncation and Varadhan's asymptotics; the latter links our method to the entropy-regularized proximal point method on Wasserstein spaces.

Joint work with Yunrui Guan and @shiqianma.bsky.social
krizna.bsky.social
New work on the Riemannian Proximal Sampler, for sampling on Riemannian manifolds:

arxiv.org/abs/2502.07265

Comes with high-accuracy guarantees (i.e., log(1/eps) dependence, where eps is the tolerance) under both exact and inexact oracles for Manifold Brownian Increments and Riemannian Heat Kernels.
Riemannian Proximal Sampler for High-accuracy Sampling on Manifolds
We introduce the Riemannian Proximal Sampler, a method for sampling from densities defined on Riemannian manifolds. The performance of this sampler critically depends on two key oracles: the Manifold ...
arxiv.org
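For intuition, here is a hedged sketch of the proximal sampler in the Euclidean special case, where the Manifold Brownian Increment oracle reduces to a Gaussian step and the heat-kernel (restricted Gaussian) oracle is implemented by rejection sampling; the standard-Gaussian target and step size are illustrative assumptions, not the paper's Riemannian implementation:

import numpy as np

rng = np.random.default_rng(0)
eta, d = 0.5, 2                    # proximal step size and dimension (illustrative)

def f(x):
    # Potential of the target pi(x) proportional to exp(-f(x)); standard Gaussian, so f >= 0.
    return 0.5 * np.dot(x, x)

def restricted_gaussian_oracle(y):
    # Sample x proportional to exp(-f(x)) * N(x; y, eta I) by rejection from N(y, eta I);
    # exp(-f(x)) <= 1 bounds the acceptance ratio because f >= 0.
    while True:
        x = y + np.sqrt(eta) * rng.standard_normal(d)
        if rng.uniform() < np.exp(-f(x)):
            return x

x = np.zeros(d)
samples = []
for _ in range(5_000):
    y = x + np.sqrt(eta) * rng.standard_normal(d)   # forward step: Brownian increment
    x = restricted_gaussian_oracle(y)               # backward step: heat-kernel oracle
    samples.append(x.copy())

On a manifold, both steps require the oracles discussed in the reply above, which is where the heat-kernel truncation and Varadhan-asymptotics implementations come in.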
krizna.bsky.social
Happy to have this paper on Improved rates for Stein Variational Gradient Descent accepted as an oral presentation at #ICLR2025

arxiv.org/abs/2409.08469

Only theory, No deep learning (although techniques useful for DL), No experiments in this time of scale and AGI :)
Improved Finite-Particle Convergence Rates for Stein Variational Gradient Descent
We provide finite-particle convergence rates for the Stein Variational Gradient Descent (SVGD) algorithm in the Kernelized Stein Discrepancy ($\mathsf{KSD}$) and Wasserstein-2 metrics. Our key insight...
arxiv.org
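For readers new to SVGD, a minimal sketch of the finite-particle update with an RBF kernel (the Gaussian target, bandwidth, and step size are illustrative assumptions; the paper analyzes convergence rates, not this particular implementation):

import numpy as np

rng = np.random.default_rng(0)
n, d, step, h = 100, 2, 0.1, 1.0        # particles, dimension, step size, bandwidth

def grad_log_pi(x):
    # Score of a standard Gaussian target (illustrative assumption).
    return -x

X = rng.standard_normal((n, d)) + 5.0   # particles initialized away from the target
for _ in range(500):
    diff = X[:, None, :] - X[None, :, :]                   # diff[i, j] = x_i - x_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h))      # RBF kernel matrix
    drift = K @ grad_log_pi(X) / n                         # kernel-averaged score (attraction)
    repulse = np.einsum('ij,ijk->ik', K, diff) / (n * h)   # kernel gradient (repulsion)
    X = X + step * (drift + repulse)

The KSD and Wasserstein-2 rates in the paper quantify how fast the empirical measure of the n particles approaches the target as the number of particles and iterations grows.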
krizna.bsky.social
Our bounds show how key factors—like the number of matches and treatment balance—impact Gaussian approximation accuracy.

We also introduce multiplier bootstrap bounds for obtaining finite-sample valid, data-driven confidence intervals.
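As a generic illustration of the multiplier bootstrap (shown here for a simple sample mean rather than the matching estimator studied in the paper; the Gaussian multipliers and 95% level are assumptions):

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=200)               # illustrative data
theta_hat = x.mean()

B = 2000
boot = np.empty(B)
for b in range(B):
    e = rng.standard_normal(len(x))         # i.i.d. Gaussian multipliers
    boot[b] = np.mean(e * (x - theta_hat))  # multiplier-perturbed, centered average

lo, hi = np.quantile(boot, [0.025, 0.975])
ci = (theta_hat - hi, theta_hat - lo)       # 95% multiplier-bootstrap interval

Roughly, applying this recipe to the centered contributions of the matching estimator is the idea behind the data-driven intervals mentioned above.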
krizna.bsky.social
Matching-based ATE estimators align treated and control units to estimate causal effects without strong parametric assumptions.

Using the Malliavin-Stein method, we establish Gaussian approximation bounds for these estimators.
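For context, a minimal sketch of a one-nearest-neighbor matching ATE estimator (one match per unit, matching with replacement on Euclidean covariate distance; the simulated data and true effect of 2 are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal((n, 2))                     # covariates
T = rng.integers(0, 2, size=n)                      # treatment indicator
Y = X @ np.array([1.0, -1.0]) + 2.0 * T + rng.standard_normal(n)   # outcomes

def nearest_opposite(i):
    # Closest unit with the opposite treatment status (matching with replacement).
    mask = T != T[i]
    dists = np.linalg.norm(X[mask] - X[i], axis=1)
    return np.flatnonzero(mask)[np.argmin(dists)]

# Impute each unit's missing counterfactual with its matched neighbor's outcome.
effects = np.empty(n)
for i in range(n):
    j = nearest_opposite(i)
    effects[i] = (Y[i] - Y[j]) if T[i] == 1 else (Y[j] - Y[i])

ate_hat = effects.mean()   # should be close to the true ATE of 2

The number of matches per unit and the treated/control balance are exactly the quantities that enter the Gaussian approximation bounds described above.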
Reposted by Krishna Balasubramanian
wtgowers.bsky.social
It seems that OpenAI's latest model, o3, can solve 25% of problems on a database called FrontierMath, created by EpochAI, where previous LLMs could only solve 2%. On Twitter I am quoted as saying, "Getting even one question right would be well beyond what we can do now, let alone saturating them."
krizna.bsky.social
Von Neumann: With 4 parameters, I can fit an elephant. With 5, I can make it wiggle its trunk.

OpenAI: Hold my gazillion parameter Sora model - I’ll make the elephant out of leaves and teach it to dance.

youtu.be/4QG_MGEBQow?...
Generated by Sora AI, elephant
YouTube video by AI Creation Today
youtu.be
krizna.bsky.social
thanks, resent the email now!
krizna.bsky.social
How well does RF perform in these settings? That's still an open question.

Bottom line: time to compare SGD-trained NNs with RF, not with kernel methods!
krizna.bsky.social
Going beyond the mean-field regime for SGD-trained NNs certainly helps. Recent works connect the learnability of SGD-trained NNs with the leap complexity and information exponent of function classes (like single- and multi-index models), with the goal of explaining feature learning.
krizna.bsky.social
It also creates an intriguing parallel with NNs: greedy-trained partitioning models and SGD-trained NNs (in the mean-field regime) both thrive under specific structural assumptions (e.g., MSP) but struggle otherwise.

However, under MSP, greedy RFs are provably better than SGD-trained 2-layer NNs!
krizna.bsky.social
In our work:

arxiv.org/abs/2411.04394

we show that if the true regression function satisfies MSP, greedy training works well with O(log d) samples.

Otherwise, it struggles.

This settles the question of learnability for greedy recursive partitioning algorithms like CART.
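A toy sketch of the kind of greedy split CART makes, on Boolean features with squared-error impurity (the staircase-style target and sample size are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.choice([-1.0, 1.0], size=(n, d))
y = X[:, 0] + X[:, 0] * X[:, 1]          # staircase-like target: each term adds one new coordinate

def sse(v):
    # Sum of squared errors around the node mean (0 for an empty node).
    return 0.0 if len(v) == 0 else float(np.sum((v - v.mean()) ** 2))

# Greedy step: pick the coordinate whose split most reduces squared error,
# then recurse on the two children (recursion omitted here).
scores = [sse(y[X[:, j] > 0]) + sse(y[X[:, j] <= 0]) for j in range(d)]
best_j = int(np.argmin(scores))

Under MSP-style structure, each greedy split reveals the next relevant coordinate, which is roughly the intuition for the O(log d) sample complexity; without such structure the first split carries little signal, which is why greedy training struggles.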
krizna.bsky.social
MSP is used to argue that SGD-trained 2-layer NNs are better than vanilla kernel methods.

But how do neural nets compare with random forests (RF) trained using greedy algorithms like CART?
krizna.bsky.social
How can we characterize the learnability of local algorithms?

The Merged Staircase Property (MSP) proposed by Abbe et al. (2022) completely characterizes the learnability of SGD-trained 2-layer neural networks (NNs) in the regime where the mean-field approximation holds for SGD.
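For reference, a paraphrase of the MSP from Abbe et al. (2022), stated in LaTeX for a Boolean function via its Fourier-Walsh expansion (paraphrased, so consult the paper for the precise statement):

f(x) = \sum_{S \subseteq [d]} \hat{f}(S)\, \chi_S(x), \qquad \chi_S(x) = \prod_{i \in S} x_i, \qquad x \in \{-1, +1\}^d,

and f satisfies the merged staircase property if the sets with \hat{f}(S) \neq 0 can be ordered as S_1, \dots, S_m so that

\bigl| S_i \setminus (S_1 \cup \dots \cup S_{i-1}) \bigr| \le 1 \quad \text{for every } i,

i.e., each new monomial introduces at most one previously unseen coordinate.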
krizna.bsky.social
Yes, but is the cover indicative of RL notations by any chance :P