Johannes Jakob Meyer
@jjmeyer.bsky.social
Quantum Information @ FU Berlin. Imprint on www.johannesjakobmeyer.com
jjmeyer.bsky.social
A big thanks to my coauthors Asad, Jacopo, Lorenzo, Sofiene and Jens.
jjmeyer.bsky.social
This is in stark contrast to how computational information theory has been approached to date, with a focus on single-shot statements. These are of course more exact, but also much more complicated to manipulate and build intuition for.
jjmeyer.bsky.social
We believe that our definition has a nice level of abstraction that captures the essence of information theory under computational constraints while at the same time having the look and feel of unbounded information theory.
jjmeyer.bsky.social
I invite you to discover many exciting things in the paper, like computationally measured divergences, a computational Stein's lemma, and a computational Rains bound on efficiently distillable entanglement.
jjmeyer.bsky.social
Another highlight is that we obtain separations in hypothesis testing, both between bounded and unbounded observers and between classical and quantum ones. This means there exist classical distributions that can be tested with access to a quantum computer but not classically.
jjmeyer.bsky.social
We can use the computational relative entropy as a mother quantity. We exemplify this by defining a computational entropy S̲(ρₙ):=−D̲(ρₙ‖𝕀) and showing that it quantifies the best rate at which we can compress a state under complexity restrictions -- just like the entropy in the unbounded setting.
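For intuition: in the unbounded setting, the analogous definition recovers the von Neumann entropy, since log 𝕀 = 0:

```latex
-D(\rho \,\|\, \mathbb{I})
= -\operatorname{Tr}\bigl[\rho(\log\rho - \log\mathbb{I})\bigr]
= -\operatorname{Tr}[\rho\log\rho]
= S(\rho)
```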
jjmeyer.bsky.social
For example, we obtain a computational version of Pinsker's inequality, which looks exactly like the unbounded relation except that the involved quantities carry underscores, marking them as computational quantities.
The computational version of Pinsker's inequality.
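The unbounded form of Pinsker's inequality reads D(ρ‖σ) ≥ ‖ρ−σ‖₁²/(2 ln 2). As a sketch of the computational analogue with underscored quantities (the notation T̲ for a computational trace distance is an assumption here, not taken from the paper):

```latex
\underline{D}(\rho_n \,\|\, \sigma_n) \;\ge\; \frac{1}{2\ln 2}\,\underline{T}(\rho_n, \sigma_n)^2
```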
jjmeyer.bsky.social
One of our main contributions is to turn this process of "polynomial regularization" into a rigorous notion. It allows us to reap the benefits of regularizing to obtain simpler expressions that closely resemble their counterparts from unbounded information theory.
jjmeyer.bsky.social
Of course, talking about polynomial scaling only makes sense when there is a scaling parameter. Therefore, all our results pertain to families of quantum states indexed by some parameter n. This is often the number of qubits, but need not be.
jjmeyer.bsky.social
We propose the following remedy: we define the computational relative entropy D̲(ρₙ‖σₙ) as the best error exponent in asymmetric hypothesis testing when restricted to polynomially many copies and polynomial-time tests, taking the polynomials to be arbitrarily large.
jjmeyer.bsky.social
But there is a catch! The connection between hypothesis testing and the relative entropy only holds if arbitrary tests can be performed. The optimal tests, however, could be exponentially complex to implement. We also need a large number of copies of the state to get sufficient statistics.
jjmeyer.bsky.social
The reason the relative entropy and its derived quantities appear so often in information theory is that it quantifies the best achievable error exponent in asymmetric hypothesis testing, a primitive to which many important information processing tasks can be reduced.
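For reference, this connection is the quantum Stein's lemma: in the unbounded setting, the optimal type-II error β_ε of discriminating ρ from σ decays exponentially in the number of copies k, with the relative entropy as the exponent:

```latex
\lim_{k\to\infty} \, -\frac{1}{k}\log \beta_\epsilon\!\left(\rho^{\otimes k} \,\middle\|\, \sigma^{\otimes k}\right) = D(\rho\|\sigma)
```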
jjmeyer.bsky.social
The relative entropy D(ρ‖σ) is a fundamental quantity in classical and quantum information theory. It appears in many important theorems and serves as a mother quantity to derive other important measures from -- for example mutual information or entropy.
The definition of the Umegaki relative entropy.
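For small examples, the Umegaki definition D(ρ‖σ) = Tr[ρ(log ρ − log σ)] can be evaluated directly. A minimal numerical sketch with numpy/scipy (logarithms in base 2; full-rank states assumed so the matrix logarithms exist):

```python
import numpy as np
from scipy.linalg import logm

def umegaki(rho, sigma):
    """Umegaki relative entropy D(rho||sigma) = Tr[rho (log rho - log sigma)], in bits."""
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma))))) / np.log(2)

# Two full-rank single-qubit states (diagonal for readability)
rho = np.diag([0.75, 0.25])
sigma = np.diag([0.5, 0.5])

d = umegaki(rho, sigma)       # ≈ 0.1887 bits

# Sanity check: -D(rho||I) recovers the von Neumann entropy S(rho)
s = -umegaki(rho, np.eye(2))  # ≈ 0.8113 bits, the binary entropy of 0.75
```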
jjmeyer.bsky.social
You think we already have enough entropies and divergence measures? I don't think so!

In our latest work (arxiv.org/abs/2509.20472), we introduce yet another information theoretic measure -- the computational relative entropy.

Let's have a quick rundown👇
The distracted boyfriend meme, with the "girlfriend" being the relative entropy and the distraction being the computational relative entropy.
jjmeyer.bsky.social
I think the best bit is that they have a guy named "Clifford Mapp" on their team!
jjmeyer.bsky.social
I guess that particular myth was formulated so that you can debunk it. I never heard anyone claim that error mitigation cannot be used, just that it is not a scalable solution to the problem.
jjmeyer.bsky.social
We should all recall once in a while that we as a community are in a hamster wheel of our own making. This means we can also just choose to be nicer and more helpful to each other.
jjmeyer.bsky.social
That said, one-line rejects should not happen. At the end of the day, we've all been on the other side and been justifiably annoyed by bad reviews.
jjmeyer.bsky.social
Therefore, it's more of an answer to the question: "how likely is this to cross the bar relative to the other submissions in your batch?". In that way, "weak reject" doesn't mean the submission is not good science, but just that there is so much other good stuff out there.
jjmeyer.bsky.social
Something that is important for context – and that I only realized after seeing the process from the inside – is that the scores are not meant to judge the quality of the submission. The task of the PC is to make a program.