Supanut Thanasilp
@supanut-thanasilp.bsky.social
Quantum machine learning and computation
* Faculty position, Chulalongkorn University, Thailand
* Postdoc, EPFL, Switzerland
* PhD, CQT, Singapore
Thanks so much to my co-authors Weijie Xiong, @qzoeholmes.bsky.social, @aangrisani.bsky.social, Yudai Suzuki, and @thipchotibut.bsky.social. It's been real fun working with you all 😃🙌

Also, special thanks to @mvscerezo.bsky.social and Martin Larocca for their valuable insights on correlated Haar random unitaries 🌮
May 17, 2025 at 8:22 AM
Episode 2: Oh what! I forgot now

We prove that in extreme-scrambling QRPs, old inputs and initial states are forgotten exponentially fast (in both time steps and system size!). Too much scrambling -> you effectively "MIB"-zap each past input.
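The forgetting effect is easy to see numerically. Below is a minimal sketch (my own toy setup, not the paper's exact model): two reservoir trajectories start from different initial inputs, and at every step we scramble with a Haar-random unitary and overwrite the input register with a fresh state. The trace distance between the trajectories — how well the reservoir still "remembers" which input it saw first — shrinks rapidly. Drawing a fresh Haar unitary each step is an idealization of the extreme-scrambling limit.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d, rng):
    # QR of a complex Ginibre matrix, with phases fixed,
    # yields a Haar-random unitary (Mezzadri's construction)
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def reset_input(rho, d_in, d_rest):
    # trace out the input register, then re-prepare it in |0...0>
    rho4 = rho.reshape(d_in, d_rest, d_in, d_rest)
    rho_rest = np.einsum('ijik->jk', rho4)        # partial trace over input
    fresh = np.zeros((d_in, d_in)); fresh[0, 0] = 1.0
    return np.kron(fresh, rho_rest)

d_in, d_rest = 4, 4            # 2-qubit input register + 2-qubit memory
d = d_in * d_rest
# two trajectories differing only in the very first input state
rho_a = np.zeros((d, d), complex); rho_a[0, 0] = 1.0      # input |00>
rho_b = np.zeros((d, d), complex); rho_b[12, 12] = 1.0    # input |11>

dists = []
for step in range(6):
    U = haar_unitary(d, rng)   # same scrambler applied to both trajectories
    rho_a = U @ rho_a @ U.conj().T
    rho_b = U @ rho_b @ U.conj().T
    rho_a = reset_input(rho_a, d_in, d_rest)   # overwrite with a fresh input
    rho_b = reset_input(rho_b, d_in, d_rest)
    delta = rho_a - rho_b
    dists.append(0.5 * np.abs(np.linalg.eigvalsh(delta)).sum())  # trace distance

print([round(x, 3) for x in dists])   # shrinks roughly exponentially in steps
```

Once the trace distance hits zero, no readout can distinguish the two histories — the first input is gone.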
May 17, 2025 at 8:22 AM
To address this challenge, we apply tensor-diagram techniques to unroll the multi-step QRP into a single high-moment Haar integral over a larger-dimensional space, making it amenable to scalability and memory analysis.
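For intuition, the first two Haar moments have standard closed forms (the unrolled multi-step integral generalizes this structure to higher moments; the actual higher-moment bookkeeping is what the tensor diagrams handle):

```latex
\int_{\mathrm{Haar}} U \rho\, U^\dagger \, dU \;=\; \frac{\operatorname{Tr}(\rho)}{d}\,\mathbb{I},
\qquad
\int_{\mathrm{Haar}} U^{\otimes 2} A\, (U^\dagger)^{\otimes 2} \, dU
\;=\; c_{\mathbb{I}}\,\mathbb{I} + c_S\, S,
```
```latex
c_{\mathbb{I}} = \frac{\operatorname{Tr}(A) - \operatorname{Tr}(S A)/d}{d^2 - 1},
\qquad
c_S = \frac{\operatorname{Tr}(S A) - \operatorname{Tr}(A)/d}{d^2 - 1},
```

where \( S \) is the swap operator on the two copies and \( d \) the Hilbert-space dimension.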
May 17, 2025 at 8:22 AM
Our key messages can be summarized as

🎯 Strong scrambling in quantum reservoirs helps at small sizes but kills input sensitivity at large scale
🎯 Memory of older inputs decays exponentially (in both time steps and system size!)
🎯 Noise makes us forget even faster
May 17, 2025 at 8:22 AM
The QRP model processes an input time series of quantum states. Here we model the extreme-scrambling reservoir as an instance drawn from a high-order unitary design ensemble.
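A minimal classical simulation of this pipeline might look like the sketch below. The single-qubit input encoding |ψ(s)⟩ and the Pauli-Z readout are my own illustrative choices, not the paper's exact setup: at each time step the input qubit is loaded with the next state of the series, the fixed random reservoir scrambles it into the memory qubits, and expectation values are read out as the feature vector.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d, rng):
    # QR of a complex Ginibre matrix (phase-corrected) gives a Haar-random unitary
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 3                        # 1 input qubit + 2 memory qubits
d = 2 ** n
U = haar_unitary(d, rng)     # the fixed scrambling reservoir, drawn once

def inject(rho, s):
    # overwrite the input qubit with |psi(s)> = cos(pi s/2)|0> + sin(pi s/2)|1>
    d_rest = d // 2
    rho4 = rho.reshape(2, d_rest, 2, d_rest)
    rho_rest = rho4[0, :, 0, :] + rho4[1, :, 1, :]   # trace out old input qubit
    psi = np.array([np.cos(np.pi * s / 2), np.sin(np.pi * s / 2)])
    return np.kron(np.outer(psi, psi), rho_rest)

def z_expectations(rho):
    # readout features: <Z_i> for every qubit, computed from the diagonal of rho
    p = np.real(np.diag(rho))
    bits = (np.arange(d)[:, None] >> (n - 1 - np.arange(n))) & 1
    return ((1 - 2 * bits) * p[:, None]).sum(axis=0)

rho = np.zeros((d, d), complex); rho[0, 0] = 1.0
series = rng.uniform(size=8)       # toy classical input time series
features = []
for s in series:
    rho = inject(rho, s)           # load the next input state
    rho = U @ rho @ U.conj().T     # reservoir dynamics: scramble
    features.append(z_expectations(rho))
features = np.array(features)      # shape (T, n): one feature vector per step
```

The feature vectors would then feed a simple (classical, linear) readout layer; the scalability question is whether they stay sensitive to the inputs as the reservoir grows.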
May 17, 2025 at 8:22 AM
Once upon a time, a myth in Quantum Reservoir Processing (QRP) went: "more chaos = richer feature map = better"

Doomed by their own chaotic dynamics, QRPs may not scale in the extreme-scrambling limit.

Check out our new Star Wa… I mean paper on arXiv: scirate.com/arxiv/2505.1...
May 17, 2025 at 8:22 AM