Sam Jaques
@sejaques.bsky.social
190 followers 110 following 74 posts
Assistant prof at U Waterloo. Aspiring full-stack cryptographer. Loves math, plants, flashcards. Opinions reflect those of all past, present, and future employers.
Reposted by Sam Jaques
dfaranha.bsky.social
The impact of Alfred Menezes in cryptography is profound. Francisco RH and I are organizing an afternoon session in Latincrypt to celebrate Alfred's career:

menezesfest.info

If you're coming to Medellín, consider attending!
MenezesFest 2025
MenezesFest brings together researchers, colleagues, and friends to celebrate the career and impact of Alfred Menezes.
sejaques.bsky.social
Nice! Now (to steal Luca's joke) it's only 11 more factors of 2 to go for SQISign to be faster than ML-DSA?
sejaques.bsky.social
This is a valid signature for user i. Then when the adversary presents a forgery (w*,c*,z*) against user j, just subtract c*·r_j from z* and it's a forgery for your challenger. This works... but only because the public key was not hashed into the challenge! Very bad idea!
sejaques.bsky.social
Your challenger's public key is xP, so all the users you simulate for the multi-user adversary can use PK_i=(x+r_i)P for some random r_i. If the adversary requests a signature on m from user i, you can send m to your challenger and get (w,c,z)=(yP,H(w||m),y+cx). Set z'=z+c·r_i and return (w,c,z').
sejaques.bsky.social
Always bothers me when you lose the 1/N factor in a multi-user security proof. Was thinking about how to dodge it; consider this for Schnorr signatures: you are an active adversary against a single challenger, with access to a multi-user adversary.
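The key-shifting trick in this thread can be sketched concretely. The following is a toy Python illustration, not anything from the posts themselves: the group parameters (p=23, q=11, g=2) are tiny and purely illustrative, and the hash deliberately omits the public key, which is exactly the property the reduction exploits.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 23 is a safe prime (p = 2q + 1 with q = 11),
# and g = 2 generates the order-q subgroup. Illustrative only --
# real Schnorr would use a ~256-bit group.
p, q, g = 23, 11, 2

def H(w, m):
    """Challenge c = H(w || m) mod q. Note the public key is
    deliberately NOT hashed in -- that omission is what makes the
    key-shifting reduction work (and is the 'very bad idea')."""
    h = hashlib.sha256(f"{w}|{m}".encode()).digest()
    return int.from_bytes(h, "big") % q

def sign(x, m):
    """Schnorr signature under secret key x."""
    y = secrets.randbelow(q)
    w = pow(g, y, p)
    c = H(w, m)
    return w, c, (y + c * x) % q

def verify(X, m, sig):
    """Check g^z == w * X^c and that c matches the hash."""
    w, c, z = sig
    return c == H(w, m) and pow(g, z, p) == (w * pow(X, c, p)) % p

# Single-user challenger: secret x, public key X = g^x.
x = secrets.randbelow(q)
X = pow(g, x, p)

# The reduction simulates user i with PK_i = g^(x + r_i) = X * g^(r_i),
# computable without knowing x.
r_i = secrets.randbelow(q)
PK_i = (X * pow(g, r_i, p)) % p

# Signing query for user i on m: forward m to the challenger,
# then shift the response by c * r_i.
m = "hello"
w, c, z = sign(x, m)            # challenger's signature under X
z_shifted = (z + c * r_i) % q   # valid under PK_i, same (w, c)
assert verify(PK_i, m, (w, c, z_shifted))

# Conversely, a forgery under PK_j shifts back to a forgery under X
# by subtracting c * r_j from z.
z_back = (z_shifted - c * r_i) % q
assert verify(X, m, (w, c, z_back))
```

The shift works because g^(z + c·r_i) = w · X^c · g^(c·r_i) = w · (X·g^(r_i))^c, and the challenge c is unchanged since H never sees the public key. Hashing the key into the challenge would break this simulation (and close the loophole).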
Reposted by Sam Jaques
sejaques.bsky.social
I was way miscalibrated at the time and thought the extra Toffoli count would end up using more space in the end thanks to state distillation. Not sure how typical my perspective was

Important lesson in scientific celebrity culture nonetheless
sejaques.bsky.social
I've been reading "Burdens of proof", which makes an interesting point on this: law wants to operate on a vastly longer time scale than most file formats, for good reason.
sejaques.bsky.social
So we have an adversary that can decrypt c to a different message with a different key? They can just compute their own tag of this other key and message, hash it, and replace the "T" part of the ciphertext?
sejaques.bsky.social
Reasonable! When I read the screenshot you took, I see a lot of technical terms I can't contextualize. How meaningful is a "2 star relationship"? I can't tell but an expert in the field could.

Then again, scientists asked for quotes can absolutely give a rushed take and get things wrong.
sejaques.bsky.social
It's normal and good for journalists to talk to scientists in the same field but not associated with the research, as they can offer an informed but less biased take.
sejaques.bsky.social
My current model of agriculture is we generally optimize for high yield at low labour, and there's room for high-yield and sustainable if we accept high labour inputs. Is this a plausible and useful perspective?
sejaques.bsky.social
Oh of course not, it would be a tourist attraction. Maybe a quirky hotel
sejaques.bsky.social
I wouldn't say steady: arxiv.org/abs/2009.05045 tries to extrapolate and the data looks really noisy. E.g., fig. 8. If we put today's devices on this, the best would maybe be on the orange line
Graph of physical qubits vs. year. There is a cluster of points in the middle, with 3 lines trying to extrapolate forward, but with wide error margins.
sejaques.bsky.social
Probably closer to 13 doublings if we look at chips with all the good properties we want. There hasn't been consistent exponential growth yet.
sejaques.bsky.social
Craig Gidney's work tackles that question: arxiv.org/abs/2505.159.... Check out the figures in the appendix: the physical qubits are used quite densely!
Figure 14 from Gidney's paper showing a dense 3-d pipe diagram of a surface code layout of a lookup. Most of the 3-d space is used in some way.
sejaques.bsky.social
If I get what you're talking about: a different technique (arxiv.org/abs/1905.100...) compresses the output bits, which is incompatible (if you compress input as well, you can factor with a classically simulatable # of qubits: likely impossible).
sejaques.bsky.social
And on a network of quantum computers, you'd have to re-optimize the algorithm, which would push the resource estimates back up
sejaques.bsky.social
To be clear there is no 2100-qubit device! Maybe I should rewrite that part :) but the estimates assume one device. There are known methods to network quantum devices together, but the tech is lagging behind a bit compared to the speed and quality of one device
sejaques.bsky.social
A 20x improvement warrants an "extra"!
sejaques.bsky.social
An out-of-schedule update to my quantum landscape chart: sam-jaques.appspot.com/quantum_land..., prompted by
@craiggidney.bsky.social 's new paper: arxiv.org/abs/2505.15917.

A startling jump (20x) in how easy quantum factoring can be!

Also: much improved web design!
A chart for quantum computers, of number of qubits versus error rate, on a logarithmic scale. Broadly it shows a large gap between current quantum computers in the bottom left, and a curve in the top right of the resources they need to break RSA.