Reyhaneh Aghaei Saem
@reyhanehaghaeisaem.bsky.social
PhD student at EPFL
Many thanks to all my collaborators, Behrang Tafreshi,
@qzoeholmes.bsky.social, and @supanut-thanasilp.bsky.social. 😊

This work would not have been possible without their support, and it was a truly amazing experience.
It is important to note that while our results show that some methods hoped to avoid exponential concentration still suffer its effects, this does not imply that they cannot be used in other ways to boost scalability or to provide alternative training benefits.
Hence, some proposals that were hoped to mitigate barren plateaus (BP), such as forms of quantum natural gradient, CVaR, classical NN-assisted initialization, and rescaled-gradient optimization, may still suffer from exponential concentration. We provide numerical simulations of these optimization methods under different shot budgets.
At the end, we present a practical step-by-step guideline to determine whether a training or encoding procedure is vulnerable to scalability limitations due to exponential concentration.
To illustrate its practicality, we apply it to training a standard VQA loss that exhibits a barren plateau (BP) landscape with a traditional gradient-based method. We show that, with high probability, the optimization trajectory follows a random walk.
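Here is a toy numerical sketch of that intuition (our own illustration, not the paper's simulations): if the true gradient signal shrinks exponentially with the number of qubits while shot noise stays at O(1/√shots), the sign of each estimated gradient component becomes essentially a coin flip, so gradient descent degenerates into a random walk.

```python
import numpy as np

rng = np.random.default_rng(0)

def shot_gradient(theta, n_qubits, shots, rng):
    """Toy model: the true gradient is exponentially concentrated in the
    number of qubits, while shot noise has a fixed O(1/sqrt(shots)) scale."""
    signal = 2.0 ** (-n_qubits) * np.cos(theta)          # ~1e-6 for 20 qubits
    noise = rng.normal(0.0, 1.0 / np.sqrt(shots), size=theta.shape)
    return signal + noise

n_qubits, shots, eta = 20, 1000, 0.1
theta = rng.uniform(-np.pi, np.pi, size=5)

signs = []
for _ in range(200):
    g = shot_gradient(theta, n_qubits, shots, rng)
    signs.append(np.sign(g))
    theta = theta - eta * g                              # gradient-descent step

# The signal (~1e-6) is buried under shot noise (~0.03), so the sign of each
# estimated component is close to an unbiased coin flip: a random walk.
signs = np.array(signs)
print(abs(signs.mean()))  # close to 0
```

With a realistic shot budget, the average update direction carries almost no bias from the true gradient, which is the random-walk behaviour described above.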
Here is a very nice schematic figure of this key result.
Using this definition, we can pin down the practical consequences of the concentration in Theorem 1, using tools from hypothesis testing.

The direct consequence of this result is that no classical post-processing removes this indistinguishability.
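A small stdlib sketch of the hypothesis-testing intuition behind this (a toy construction, not the paper's exact setup): if two parameter settings yield single-shot outcome probabilities that differ by 2^(-n), the total variation distance between the resulting shot records is exponentially small, and by the data-processing inequality no classical post-processing can achieve a larger distinguishing advantage.

```python
from math import comb

def tv_distance_binomial(p, q, shots):
    """Total variation distance between Binomial(shots, p) and
    Binomial(shots, q): an upper bound, via the data-processing
    inequality, on any post-processed distinguishing advantage."""
    return 0.5 * sum(
        abs(comb(shots, k) * (p**k * (1 - p) ** (shots - k)
                              - q**k * (1 - q) ** (shots - k)))
        for k in range(shots + 1)
    )

n = 30                       # qubit count in this toy model
p = 0.5                      # outcome probability at parameter setting A
q = 0.5 + 2.0 ** (-n)        # setting B: exponentially close to A

# Exponentially small: resolving the gap needs ~4^n shots.
print(tv_distance_binomial(p, q, shots=1000))
```

Since every classical post-processing strategy is a (possibly randomized) function of the shot record, its distinguishing power is capped by this total variation distance, matching the statement above.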
The scalability of such procedures depends on whether measurement outcomes carry information about the variables. This motivates a shift in focus: rather than analyzing concentration at the level of the loss function, we study it at the level of POVM outcome probabilities for individual quantities.
To formalize this, we consider a general procedure that covers a wide range of parameterized quantum models. In particular, many procedures used in variational quantum computing involve processing sets of parameter-dependent quantities.
Here, by analyzing concentration at the level of measurement outcome probabilities and leveraging tools from hypothesis testing, we develop a practical framework for diagnosing whether a parameterized quantum model is inhibited by exponential concentration.
There is a growing number of proposals for circumventing exponential concentration. However, given the subtle interplay between quantum measurements and classical processing strategies, care must be taken to determine whether these approaches actually help in practice.
New preprint on arXiv 🚀

Link: scirate.com/arxiv/2507.2...

We present a practical step-by-step guideline to determine whether a procedure that claims to circumvent exponential concentration actually works in practice.

See the following 🧵 for more details.