ePrint Updates
@eprint.ing.bot
970 followers 1 following 5.3K posts
Unofficial bot tracking the IACR Cryptology ePrint Archive (eprint.iacr.org). Maintained by @str4d.xyz. Currently only posts about new papers. Author names are linkified to Bluesky accounts (cryptography.social); contact maintainer for inclusion/removal.
Posts Media Videos Starter Packs
eprint.ing.bot
Vectorized Falcon-Sign Implementations using SSE2, AVX2, AVX-512F, NEON, and RVV (Jipeng Zhang, Jiaheng Zhang) ia.cr/2025/1867
Abstract. Falcon, an NTRU-based digital signature algorithm, has been selected by NIST as one of the post-quantum cryptography (PQC) standards. Compared to verification, the signature generation of Falcon is relatively slow. One of the core operations in signature generation is discrete Gaussian sampling, which involves a component known as the BaseSampler. The BaseSampler accounts for up to 30% of the time required for signature generation, making it a significant performance bottleneck. This work aims to address this bottleneck.

We design a vectorized version of the BaseSampler and provide optimized implementations across six different instruction sets: SSE2, AVX2, AVX-512F, NEON, RISC-V Vector (RVV), and RV64IM. The AVX2 implementation, for instance, achieves an 8.4× speedup over prior work. Additionally, we optimize the FFT/iFFT operations using RVV and RV64D. For the RVV implementation, we introduce a new method using strided load/store instructions, with 4+4 and 4+5 layer merging strategies for Falcon-512 and Falcon-1024, respectively, resulting in a speedup of more than 4×.

Finally, we present the results of our optimized implementations across eight different instruction sets for signature generation of Falcon. For instance, our AVX2, AVX-512F, and RV64GCVB implementations achieve performance improvements of 23%, 36%, and 59%, respectively, for signature generation of Falcon-512.
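As background on the bottleneck described above: a BaseSampler of this kind draws from a discrete half-Gaussian by comparing a random value against a cumulative distribution table (CDT), a branch-free loop that vectorizes naturally since every table comparison is independent. A minimal Python sketch with a toy table (the constants are hypothetical, not Falcon's actual RCDT):

```python
import secrets

# Toy cumulative distribution table (hypothetical constants, NOT Falcon's
# real RCDT): CDT[i] ~ Pr[half-Gaussian sample > i], scaled to 16 bits.
CDT = [32768, 16384, 6144, 1792, 384, 64, 8, 1]

def base_sample(r=None):
    """Return z = #{i : r < CDT[i]} for a uniform 16-bit r, branch-free.

    Each comparison is independent of the others, which is what makes the
    loop easy to vectorize (SSE2/AVX2/NEON can process many r's at once).
    """
    if r is None:
        r = secrets.randbelow(1 << 16)
    z = 0
    for c in CDT:
        z += (r - c) >> 31 & 1  # 1 iff r < c, without a branch
    return z
```

The branch-free comparison mirrors the constant-time style used in real implementations, where data-dependent branches would leak through timing.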
eprint.ing.bot
Succinct Line-Point Zero-Knowledge Arguments from Homomorphic Secret Sharing (Zhe Li, Chaoping Xing, Yizhou Yao, Chen Yuan, Mengmeng Zhou) ia.cr/2025/1866
Abstract. Dittmer, Ishai and Ostrovsky (ITC’21) proposed line-point zero-knowledge (LPZK), a simple “commit-and-prove” system, motivated by practical protocols for compressing correlated pseudorandomness used in secure multiparty computation (MPC). Typically, LPZK admits concretely efficient ZK protocols with a streaming, linear-time prover, but with proof size linear in the circuit size. A natural question raised in this context is how far we can go in minimizing the proof size while maintaining prover efficiency. Though a recent work by Lin, Xing and Yao (ASIACRYPT’24) gives an interactive LPZK with a sublinear proof size O(n + d² log |𝒞|), it is still far from being succinct, where n, d, |𝒞| denote the input size, circuit depth, and circuit size, respectively.

In this work, we beat the proof size barrier and propose succinct LPZK arguments, by distilling techniques from orthogonal studies on homomorphic secret sharing and succinct garbling. Specifically, under variants of group/lattice-based assumptions, we show the following:

i)  There exist succinct LPZK arguments with common reference string (CRS) size O(n^(2/3)), proof size O(n^(2/3)), prover time O(n^(4/3) + |𝒞|), verification time O(n + |𝒞|), and negligible soundness error, where both the prover and the verifier executions can be run in a streaming fashion.

ii) The above proof size can be further optimized to O(1), at the cost of a larger CRS size O(n), and prover time increased to O(n² + |𝒞|).

In general, our succinct LPZK arguments pave a new way for building designated-verifier zero-knowledge succinct non-interactive arguments of knowledge (dv-zkSNARKs), and new interesting features (e.g., streaming, constant sized proof with CRS size not proportional to the circuit size) are obtained for the first time along the way.
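The "line-point" paradigm behind LPZK can be illustrated with a toy check for linear relations: the prover's witness values sit on random lines, a VOLE-style correlation gives the verifier each line's value at a secret point Δ, and a linear statement becomes a single equality test at Δ. A minimal sketch over a toy prime field (illustrative names; not the paper's construction):

```python
import random

P = 2**61 - 1  # toy prime field

def vole_correlate(witness, delta):
    """Toy VOLE correlation: for each witness value w, the prover picks a
    random mask m, defining the line L(t) = m*t + w. The verifier learns
    only L(delta); the prover keeps (m, w)."""
    masks = [random.randrange(P) for _ in witness]
    evals = [(m * delta + w) % P for m, w in zip(masks, witness)]
    return masks, evals

def prove_linear(coeffs, masks):
    """Prover's single-field-element proof that sum(c_i * w_i) = s:
    the slope M = sum(c_i * m_i) of the aggregated line."""
    return sum(c * m for c, m in zip(coeffs, masks)) % P

def verify_linear(coeffs, evals, delta, claimed_sum, proof_m):
    """Verifier checks sum(c_i * L_i(delta)) == M*delta + s, which holds
    exactly when the claimed sum is the true one."""
    lhs = sum(c * e for c, e in zip(coeffs, evals)) % P
    return lhs == (proof_m * delta + claimed_sum) % P
```

Since the aggregated evaluation equals M·Δ + Σcᵢwᵢ identically, a false claimed sum fails the check for every Δ; the interesting soundness questions arise for multiplication gates, which this linear sketch does not cover.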
eprint.ing.bot
High-Throughput AES Transciphering using CKKS: Less than 1ms (Youngjin Bae, Jung Hee Cheon, Minsik Kang, Taeseong Kim) ia.cr/2025/1865
Abstract. Fully homomorphic encryption (FHE) allows computation without decryption, but often suffers from a large ciphertext expansion ratio and computational overhead. On the other hand, AES is a widely adopted symmetric block cipher known for its efficiency and compact ciphertext size. However, its symmetric nature prevents direct computation on encrypted data. Homomorphic transciphering bridges these two approaches by enabling computation on AES-encrypted data using FHE-encrypted AES keys, thereby combining the compactness of AES with the flexibility of FHE.

In this work, we present a high-throughput homomorphic AES transciphering based on the CKKS scheme. Our design takes advantage of the ring conversion technique to the conjugate-invariant ring [real_heaan] during the transciphering circuit, including bootstrapping, along with a suite of optimizations that reduce computational overhead. As a result, we achieve a throughput (per-block evaluation time) of 0.994 ms—less than 1 ms—outperforming the state-of-the-art [xboot] by 1.58× when processing 2¹⁵ AES-128 blocks under the same implementation environment, and support processing 2⁹ blocks within 3 s on a single GPU.
eprint.ing.bot
On Limits on the Provable Consequences of Quantum Pseudorandomness (Samuel Bouaziz--Ermann, Minki Hhan, Garazi Muguruza, Quoc-Huy Vu) ia.cr/2025/1863
Abstract. There are various notions of quantum pseudorandomness, such as pseudorandom unitaries (PRUs), pseudorandom state generators (PRSGs), and pseudorandom function-like state generators (PRFSGs). Unlike the different notions of classical pseudorandomness, which are known to be existentially equivalent to each other, the relationship between quantum pseudorandomness has yet to be fully established.

We present evidence suggesting that some forms of quantum pseudorandomness are unlikely to be constructed from others, or at least are hard to construct unless certain conjectures are false. This indicates that quantum pseudorandomness could behave quite differently from classical pseudorandomness. We study new oracle worlds where one form of quantum pseudorandomness exists but another does not, under certain assumptions or constraints, and we provide potential directions toward achieving full black-box separation. More precisely:

- We give a unitary oracle relative to which PRFSGs exist, but PRUs without ancilla do not. This can be extended to general PRUs if a structural property of the PRU algorithm can be proven.
- Assuming a conjecture similar to an isoperimetric inequality, we show a unitary oracle world where log-length output PRFSGs exist, but proving the existence of quantum-computable pseudorandom generators (QPRGs) with negligible correctness error is as hard as proving that BQP ≠ QCMA. This result suggests that the inverse-polynomial error in the state-of-the-art construction of QPRGs from log-length PRSGs is inherent.
- Assuming the same conjecture, we prove that some natural methods of constructing super-log-length output PRSGs from log-length output PRFSGs are impossible. This partly complements the known hardness of shrinking PRSG output lengths. Along the way, we also discuss other potential approaches to extending the PRSG output lengths.

All our worlds are based on (variants of) oracles that output Haar-random quantum states for each bit string, which can be viewed as a quantum version of the random oracle model, where output strings are replaced by quantum states.

Our results highlight technical difficulties when dealing with ancillary registers, measurements, and adaptivity in the quantum setting. As one of our key technical tools, we show an intriguing gentle behavior of intermediate measurements in algorithms producing outcome states with high purity, which may be of independent interest.
eprint.ing.bot
CuKEM: A Concise and Unified Hybrid Key Encapsulation Mechanism (Yiting Liu, Biming Zhou, Haodong Jiang) ia.cr/2025/1862
Abstract. In the post-quantum migration of traditional key establishment protocols, hybrid key encapsulation mechanisms (KEMs) are recommended by standards bodies, including NIST and ETSI, and by national security agencies such as NCSC (UK) and BSI (Germany). Recently, several hybrid KEMs with CCA security, such as XOR-then-MAC, Dual-PRF, and X-Wing (being standardized by the IETF), have been proposed based on CCA KEMs obtained by applying the complicated Fujisaki-Okamoto transform to public-key encryption (PKE) schemes. In some cryptographic protocols, such as PQ-Noise and Signal, 1CCA security (similar to CCA security except that the adversary is restricted to a single decapsulation query) is required. However, no scheme has been designed specifically to achieve 1CCA security (excluding schemes that aim for CCA security, as they inherently encompass 1CCA security).

In this paper, we propose CUKEM, a concise and unified hybrid KEM framework built directly on PKEs, and its variant CUKEM+, which achieves CCA security by replacing one PKE component with a nominal group. We prove that our schemes, equipped with different modules, achieve standard security notions in both the random oracle model and the quantum random oracle model, including IND-CPA, IND-1CCA, and IND-CCA. Compared to existing KEM-based constructions, CUKEM and CUKEM+ are more concise, as they simplify or even eliminate certain hash operations without compromising security. Our evaluation shows that the CCA-secure CUKEM+ achieves encapsulation and decapsulation speedups of up to 22.28% and 16.22%, respectively, over X-Wing, while the 1CCA-secure CUKEM attains gains of up to 13.97% and 104.31%.
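For context, the KEM-combiner genre that this line of work builds on derives the hybrid key by hashing both components' shared secrets together with both ciphertexts, so the result stays secure if either component KEM does. A minimal sketch of such a dual-PRF-style combiner (hypothetical label and encoding; not the paper's scheme):

```python
import hashlib

def combine(k1: bytes, ct1: bytes, k2: bytes, ct2: bytes,
            label: bytes = b"toy-hybrid-kem-v1") -> bytes:
    """Dual-PRF-style hybrid combiner sketch: hash both shared secrets and
    both ciphertexts under a domain-separation label. Length-prefixing each
    field keeps the encoding unambiguous (no concatenation collisions)."""
    h = hashlib.sha3_256()
    for part in (label, k1, ct1, k2, ct2):
        h.update(len(part).to_bytes(4, "big") + part)
    return h.digest()
```

Binding the ciphertexts into the KDF (as X-Wing-style combiners do) is what rules out mix-and-match attacks across sessions; the hash operations this costs are exactly what a leaner design tries to trim.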
eprint.ing.bot
FrodoKEM: A CCA-Secure Learning With Errors Key Encapsulation Mechanism (Lewis Glabush, Patrick Longa, Michael Naehrig, Chris Peikert, Douglas Stebila, Fernando Virdia) ia.cr/2025/1861
Abstract. Large-scale quantum computers capable of implementing Shor’s algorithm pose a significant threat to the security of the most widely used public-key cryptographic schemes. This risk has motivated substantial efforts by standards bodies and government agencies to identify and standardize quantum-safe cryptographic systems. Among the proposed solutions, lattice-based cryptography has emerged as the foundation for some of the most promising protocols. This paper describes FrodoKEM, a family of conservative key-encapsulation mechanisms (KEMs) whose security is based on generic, “unstructured” lattices. FrodoKEM is proposed as an alternative to the more efficient lattice schemes that utilize algebraically structured lattices, such as the recently standardized ML-KEM scheme. By relying on generic lattices, FrodoKEM minimizes the potential for future attacks that exploit algebraic structures while enabling simple and compact implementations. Our plain C implementations demonstrate that, despite its conservative design and parameterization, FrodoKEM remains practical. For instance, the full protocol at NIST security level 1 runs in approximately 0.97 ms on a server-class processor, and 4.98 ms on a smartphone-class processor. FrodoKEM obtains (single-target) IND-CCA security using a variant of the Fujisaki-Okamoto transform, applied to an underlying public-key encryption scheme called FrodoPKE. In addition, using a new tool called the Salted Fujisaki-Okamoto (SFO) transform, FrodoKEM is also shown to tightly achieve multi-target security, without increasing the FrodoPKE message length and with a negligible performance impact, based on the multi-target IND-CPA security of FrodoPKE.
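The Fujisaki-Okamoto idea mentioned above (derive the encryption coins from a hash of the message, then re-encrypt during decapsulation to detect malformed ciphertexts) can be sketched as follows; the "PKE" here is a deliberately insecure toy stand-in (pk equals sk), used only so the re-encryption check has something concrete to run against:

```python
import hashlib
import secrets

def H(*parts):
    """Length-prefixed SHA3-256, standing in for the FO random oracles."""
    h = hashlib.sha3_256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy symmetric "PKE" (pk == sk; NOT secure, illustration only).
def pke_keygen():
    sk = secrets.token_bytes(32)
    return sk, sk  # (pk, sk)

def pke_enc(pk, m, coins):
    c1 = xor(coins, H(b"k1", pk))
    c2 = xor(m, H(b"k2", pk, coins))
    return c1 + c2

def pke_dec(sk, ct):
    c1, c2 = ct[:32], ct[32:]
    coins = xor(c1, H(b"k1", sk))
    return xor(c2, H(b"k2", sk, coins))

# FO-style KEM: derandomize Enc with H(m); re-encrypt on decapsulation.
def encaps(pk):
    m = secrets.token_bytes(32)
    ct = pke_enc(pk, m, H(b"coins", m))
    return ct, H(b"key", m, ct)

def decaps(sk, pk, ct):
    m = pke_dec(sk, ct)
    if pke_enc(pk, m, H(b"coins", m)) != ct:
        return H(b"reject", sk, ct)  # implicit rejection
    return H(b"key", m, ct)
```

The re-encryption check is what lifts CPA-style security to CCA: a tampered ciphertext decrypts to some message whose deterministic re-encryption no longer matches, so the decapsulator outputs an unrelated (implicit-rejection) key. FrodoKEM's salted variant additionally feeds a salt into the coin derivation, which is not modeled here.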
eprint.ing.bot
On the generalized Schönhage-type bound (Theophilus Agama) ia.cr/2025/1860
Abstract. We prove an extension of the lower bound due to Schönhage on addition chains.
eprint.ing.bot
qt-Pegasis: Simpler and Faster Effective Class Group Actions (Pierrick Dartois, Jonathan Komada Eriksen, Riccardo Invernizzi, Frederik Vercauteren) ia.cr/2025/1859
Abstract. In this paper, we revisit the recent PEGASIS algorithm that computes an effective group action of the class group of any imaginary quadratic order R on a set of supersingular elliptic curves primitively oriented by R. Although PEGASIS was the first algorithm showing the practicality of computing unrestricted class group actions at higher security levels, it is complicated and prone to failures, which leads to many rerandomizations.

In this work, we present a new algorithm, qt-Pegasis, which is much simpler and at the same time faster, and removes the need to rerandomize the ideal we want to act with, since it never fails. It leverages the main technique of the recent qlapoti approach. However, qlapoti solves a norm equation in a quaternion algebra, which corresponds to the full endomorphism ring of a supersingular elliptic curve. We show that the algorithm still applies in the quadratic setting by embedding the quadratic ideal into a quaternion ideal, using a technique similar to the one applied in KLaPoTi. This way, we can reinterpret the output of qlapoti as four equivalent quadratic ideals, instead of two equivalent quaternion ideals. We then show how to construct a Clapoti-like diagram in dimension 2, which embeds the action of the ideal in a 4-dimensional isogeny.

We implemented our qt-Pegasis algorithm in SageMath for the CSURF group action, and we achieve a speedup over PEGASIS of 1.8× for the 500-bit parameters and 2.6× for the 4000-bit parameters.
eprint.ing.bot
Testing Security Equivalence in the Random Probing Model (Anna Guinet, Carina Graw, Lukas Koletzko, Jan Richter-Brockmann, Holger Dette, Tim Güneysu) ia.cr/2025/1858
Abstract. The random probing model is a theoretical model that abstracts the physical leakage of an embedded device running a cryptographic scheme, with more realistic assumptions than the threshold probing model. It assumes that the wires of the target device leak their assigned values with probability p, and that these values may reveal information about secret data, which could lead to a security violation. From that, we can compute the probability ϵ that a side-channel adversary learns secret data from any random combination of wires, as a function of the number of wire combinations that breach security at leakage rate p. This model is used to evaluate the security of masked cryptographic implementations, simply called circuits; so far, the research community has focused on approximating or estimating the probability ϵ for one circuit. Yet, no proposition has been made to quickly compare the probabilities ϵ of different circuits, e.g., a circuit and its optimized version. In this context, we present two statistical tests for making decisions about the level of security in the random probing model: the equivalence test compares the security of two circuits in terms of their ϵ’s, and the superiority test decides whether the undetermined ϵ of one circuit falls below a security threshold ϵ₀, both with quantified uncertainty about the computations of the probabilities ϵ. The validity of these tests is proven mathematically and verified via empirical studies on small masked S-boxes.
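An equivalence test of this flavor can be illustrated with the standard two-one-sided-tests (TOST) procedure for two binomial proportions: declare the two leakage probabilities equivalent if the confidence interval for their difference lies inside a chosen margin. This is a generic statistical sketch under a normal approximation, not the paper's exact test:

```python
from math import sqrt
from statistics import NormalDist

def tost_equivalent(k1, n1, k2, n2, margin, alpha=0.05):
    """TOST for two binomial proportions: given k1/n1 and k2/n2 successes
    (e.g., secret-revealing wire combinations over sampled combinations),
    reject both one-sided nulls |eps1 - eps2| >= margin at level alpha,
    using the normal approximation for the difference of proportions."""
    p1, p2 = k1 / n1, k2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = NormalDist().inv_cdf(1 - alpha)
    d = p1 - p2
    # Equivalent iff the (1 - 2*alpha) CI for d sits inside (-margin, margin).
    return (d - z * se > -margin) and (d + z * se < margin)
```

Note the asymmetry the abstract exploits: failing this test does not prove the circuits differ, it only withholds an equivalence claim at the stated confidence.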
eprint.ing.bot
On the Quantum Equivalence between S|LWE⟩ and ISIS (André Chailloux, Paul Hermouet) ia.cr/2025/1857
Abstract. Chen, Liu, and Zhandry [CLZ22] introduced the problems S|LWE⟩ and C|LWE⟩ as quantum analogues of the Learning with Errors problem, designed to construct quantum algorithms for the Inhomogeneous Short Integer Solution (ISIS) problem. Several later works have used this framework to construct new quantum algorithms in specific cases. However, the general relation between all these problems is still unknown. In this paper, we investigate the equivalence between S|LWE⟩ and ISIS. We present the first fully generic reduction from ISIS to S|LWE⟩, valid even in the presence of errors in the underlying algorithms. We then explore the reverse direction, introducing an inhomogeneous variant of C|LWE⟩, denoted IC|LWE⟩, and show that IC|LWE⟩ reduces to S|LWE⟩. Finally, we prove that, under certain recoverability conditions, an algorithm for ISIS can be transformed into one for S|LWE⟩. We instantiate this reverse reduction by tweaking a known algorithm for (I)SIS∞ in order to construct a quantum algorithm for S|LWE⟩ when the alphabet size q is a small power of 2, recovering some results of Bai et al. [BJK+25]. Our results thus clarify the landscape of reductions between S|LWE⟩ and ISIS, showing both their strong connection and the remaining barriers to full equivalence.
eprint.ing.bot
Optimal Good-Case Latency for Sleepy Consensus (Yuval Efron, Joachim Neu, Ling Ren, Ertem Nusret Tas) ia.cr/2025/1856
Abstract. In the context of Byzantine consensus problems such as Byzantine broadcast (BB) and Byzantine agreement (BA), the good-case setting aims to study the minimal possible latency of a BB or BA protocol under certain favorable conditions, namely the designated leader being correct (for BB), or all parties having the same input value (for BA). We provide a full characterization of the feasibility and impossibility of good-case latency, for both BA and BB, in the synchronous sleepy model. Surprisingly to us, we find irrational resilience thresholds emerging: 2-round good-case BB is possible if and only if, at all times, at least a 1/φ ≈ 0.618 fraction of the active parties are correct, where φ = (1+√5)/2 ≈ 1.618 is the golden ratio; 1-round good-case BA is possible if and only if at least a 1/√2 ≈ 0.707 fraction of the active parties are correct.
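The two thresholds can be checked numerically; a small sketch treating the bounds as "at least this fraction of active parties correct", per the abstract (the function names are illustrative):

```python
from math import sqrt

PHI = (1 + sqrt(5)) / 2  # golden ratio, ~1.618; note 1/PHI == PHI - 1

def bb_two_round_feasible(correct: int, active: int) -> bool:
    """2-round good-case BB: feasible iff at least a 1/phi (~0.618)
    fraction of the currently active parties are correct."""
    return correct / active >= 1 / PHI

def ba_one_round_feasible(correct: int, active: int) -> bool:
    """1-round good-case BA: feasible iff at least a 1/sqrt(2) (~0.707)
    fraction of the currently active parties are correct."""
    return correct / active >= 1 / sqrt(2)
```

The identity 1/φ = φ − 1 follows from φ² = φ + 1, which is why the BB threshold is often written either way.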
eprint.ing.bot
Less is More: On Copy Complexity in Quantum Cryptography (Prabhanjan Ananth, Eli Goldin) ia.cr/2025/1855
Abstract. Quantum cryptographic definitions are often sensitive to the number of copies of the cryptographic states revealed to an adversary. Making definitional changes to the number of copies accessible to an adversary can drastically affect various aspects including the computational hardness, feasibility, and applicability of the resulting cryptographic scheme. This phenomenon appears in many places in quantum cryptography, including quantum pseudorandomness and unclonable cryptography.

To address this, we present a generic approach to boost single-copy security to multi-copy security and apply this approach in many settings. As a consequence, we obtain the following new results:

• One-copy stretch pseudorandom state generators (under mild assumptions) imply the existence of t-copy stretch pseudorandom state generators, for any fixed polynomial t.
• One-query pseudorandom unitaries with short keys (under mild assumptions) imply the existence of t-query pseudorandom unitaries with short keys, for any fixed polynomial t.
• Assuming indistinguishability obfuscation and other standard cryptographic assumptions, there exist identical-copy secure unclonable primitives such as public-key quantum money and quantum copy-protection.
eprint.ing.bot
Credential Revocation Assisted by a Covertly Corrupted Server (Alisa Pankova, Jelizaveta Vakarjuk) ia.cr/2025/1854
Abstract. The European Digital Identity (EUDI) Wallet aims to provide end users with a way to obtain attested credentials from issuers and present them to different relying parties. An important property mentioned in the regulatory frameworks is the possibility to revoke a previously issued credential. While it is possible to issue a short-lived credential, in some cases this may be inconvenient, and a separate revocation service that allows revoking a credential at any time may be necessary.

In this work, we propose a full end-to-end description of a generic credential revocation system, which technically relies on a single server and secure transmission channels between parties. We prove security of the proposed revocation functionality in the universal composability model, and estimate its efficiency based on a proof-of-concept implementation.
eprint.ing.bot
Compact, Efficient and CCA-Secure Updatable Encryption from Isogenies (Antonin Leroux, Maxime Roméas) ia.cr/2025/1853
Abstract. Updatable Encryption (UE) allows ciphertexts to be updated under new keys without decryption, enabling efficient key rotation. Constructing post-quantum UE with strong security guarantees is challenging: the only known CCA-secure scheme, COM-UE, uses bitwise encryption, resulting in large ciphertexts and high computational costs.

We introduce DINE, a CCA-secure, isogeny-based post-quantum UE scheme that is both compact and efficient. Each encryption, decryption, or update requires only a few power-of-2 isogeny computations in dimension 2 to encrypt 28B messages, yielding 320B ciphertexts and 224B update tokens at NIST security level 1—significantly smaller than prior constructions. Our full C implementation demonstrates practical performance: updates in 7ms, encryptions in 48ms, and decryptions in 86ms.

Our design builds on recent advances in isogeny-based cryptography, combining high-dimensional isogeny representations with the Deuring correspondence. We also introduce new algorithms for the Deuring correspondence which may be of independent interest. Moreover, the security of our scheme relies on new problems that might open interesting perspectives in isogeny-based cryptography.
eprint.ing.bot
A Gaussian Leftover Hash Lemma for Modules over Number Fields (Martin R. Albrecht, Joël Felderhoff, Russell W. F. Lai, Oleksandra Lapiha, Ivy K. Y. Woo) ia.cr/2025/1852
Abstract. The Leftover Hash Lemma (LHL) states that X ⋅ v for a Gaussian v is an essentially independent Gaussian sample. It has seen numerous applications in cryptography for hiding sensitive distributions of v. We generalise the Gaussian LHL, initially stated over ℤ by Agrawal, Gentry, Halevi, and Sahai (2013), to modules over number fields. Our results have a sub-linear dependency on the degree of the number field and require only polynomial norm growth ∥v∥/∥X∥. To this end, we also prove when X is surjective (assuming the Generalised Riemann Hypothesis) and give bounds on the smoothing parameter of the kernel of X. We further establish when the resulting distribution is independent of the geometry of X, and prove the hardness of the k-SIS and k-LWE problems over modules (k-MSIS/k-MLWE) based on the hardness of SIS and LWE over modules (MSIS/MLWE), respectively, which was assumed without proof in prior works.
eprint.ing.bot
Locally Recoverable Data Availability Sampling (Seunghyun Cho, Eunyoung Seo, Young-Sik Kim) ia.cr/2025/1851
Abstract. We propose Locally Recoverable Data Availability Sampling (LR-DAS), which upgrades binary, threshold-based availability to graded verification by leveraging optimal locally recoverable codes (e.g., Tamo-Barg). Local groups of size r + α serve as atomic certification units: once r verified openings fix a local polynomial of degree < r, the entire group is certified and accumulates monotonically toward global availability. We formalize a locality-aware commitment with a single algebraic local-global link that binds every accepted local proof to a unique global codeword, preventing cross-group splicing. Our verifier admits a two-tier IOP view (local RS-membership, global TB-proximity, one DEEP-style linking query). We instantiate this with (i) a two-layer KZG design and (ii) a transparent FRI/IOPP stack. Both support batched multi-point openings and cross-block random-weight aggregation, yielding 𝒪(1) verifier work per certified batch with 𝒪(r + α) field payload per block. Security is captured by graded soundness against missing-fraction and missing-group adversaries with explicit overshoot bounds. A lightweight proof-of-custody layer—one unpredictable global opening at publish time plus periodic batched local checks—composes seamlessly to enforce possession without altering the core pipeline. Empirically and analytically, LR-DAS certifies availability with fewer samples than required for global recovery under the same encoding, providing a practical univariate alternative to multivariate repair-based DAS while retaining succinct proofs and a simple prover/verifier pipeline. Design levers (r, α) allow tuning responsiveness versus distance, and the transparent instantiation offers a post-quantum-ready option.
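The local certification step rests on the defining property of locally recoverable codes: within a group of r + α positions, any r surviving points determine the degree < r local polynomial, so an erased symbol can be rebuilt by Lagrange interpolation. A toy sketch over a small prime field (illustrative parameters, not the paper's instantiation):

```python
P = 257  # toy prime field

def lagrange_eval(pts, x):
    """Evaluate the unique polynomial of degree < len(pts) through pts
    at point x, working mod P (pts are (x_i, y_i) with distinct x_i)."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def recover(group, erased_x, r):
    """Rebuild the symbol at erased_x from any r surviving points of a
    local group encoding a polynomial of degree < r."""
    survivors = [(x, y) for x, y in group if x != erased_x][:r]
    return lagrange_eval(survivors, erased_x)
```

With α > 1 the group tolerates several erasures at once, which is what lets r verified openings certify the whole group.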
eprint.ing.bot
Linear*-Time Permutation Check (Benedikt Bünz, Jessica Chen, Zachary DeStefano) ia.cr/2025/1850
Abstract. Permutation and lookup arguments are at the core of most deployed SNARK protocols today. Most modern techniques for performing them require a grand product check. This requires either committing to large field elements (e.g., in Plonk) or using GKR (e.g., in Spartan), which has worse verifier cost and proof size. Sadly, both have a soundness error that grows linearly with the input size.

We present two permutation arguments that have polylog(n)/|𝔽| soundness error – for a reasonable input size n = 2³² and field size |𝔽| = 2¹²⁸, the soundness error improves significantly from 2⁻⁹⁶ to 2⁻¹²⁰. Moreover, the arguments achieve log(n) verification cost and proof size without ever needing to commit to anything beyond the witness. BiPerm only requires the prover to perform O(n) field operations on top of committing to the witness, but at the cost of limiting the choice of PCS. We show a stronger construction, MulPerm, which places no restriction on the PCS choice and whose prover performs an essentially linear number of field operations, n·Õ(√log n).

Our permutation arguments generalize to lookups. We demonstrate how our arguments can be used to improve SNARK systems such as HyperPlonk and Spartan, and build a GKR-based protocol for proving non-uniform circuits.
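For reference, the grand product check that these arguments improve upon reduces multiset (permutation) equality to a product identity at a random challenge point, with soundness error growing linearly in n, since the two degree-n polynomials ∏(aᵢ + γ) and ∏(bᵢ + γ) in γ can agree on at most n points when the multisets differ. A minimal sketch over a toy field:

```python
import random

P = 2**61 - 1  # toy prime field

def grand_product_check(a, b, gamma=None):
    """Check whether a and b are equal as multisets by testing
    prod(a_i + gamma) == prod(b_i + gamma) at a random gamma.
    False positives occur with probability <= len(a)/P: the linear-in-n
    soundness loss that motivates better permutation arguments."""
    if gamma is None:
        gamma = random.randrange(P)
    pa = pb = 1
    for x in a:
        pa = pa * (x + gamma) % P
    for y in b:
        pb = pb * (y + gamma) % P
    return pa == pb
```

In a SNARK the products are not computed in the clear but proven via a running-product column or a GKR layer; the algebraic identity being enforced is the same.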
eprint.ing.bot
CoBBl: Dynamic constraint generation for SNARKs (Kunming Jiang, Fraser Brown, Riad S. Wahby) ia.cr/2025/1849
Abstract. General-purpose probabilistic proof systems operate on programs expressed as systems of arithmetic constraints—an unfriendly representation. There are two broad approaches in the literature to turning friendlier, high-level programs into constraints suitable for proof systems: direct translation and CPU emulation. Direct translators compile a program into highly optimized constraints; unfortunately, this process requires expressing all possible paths through the program, which results in compile times that scale with the program’s runtime rather than its size. In addition, the prover must pay the cost of every possible program path, even those untaken by a given input. In contrast, CPU emulators don’t compile programs to constraints; instead, they “execute” those programs, expressed as CPU instructions, on a CPU emulator that is itself expressed as constraints. As a result, this approach can’t perform powerful, program-specific optimizations, and may require thousands of constraints where direct translation could use a clever handful. Worse, CPU emulators inherit an impractically expensive program state representation from the CPUs they emulate.

This paper presents a compiler and proof system, CoBBl, that combines the benefits of CPU emulation and direct translation: it takes advantage of program-specific optimizations, but doesn’t pay for an unnecessary state representation or unexecuted computation. CoBBl outperforms CirC, a state-of-the-art direct translator, by 1–30× on compile time and 26–350× on prover time, and outperforms Jolt, a state-of-the-art CPU emulator, on prover time by 1.1–1.8× on Jolt-friendly benchmarks, and up to 100× on other benchmarks.
eprint.ing.bot
Revisiting Lattice-based Non-interactive Blind Signature (Anindya Ganguly, Angshuman Karmakar, Suparna Kundu, Debranjan Pal, Sumanta Sarkar) ia.cr/2025/1848
Abstract. Blind signatures (BS) allow a signer to produce a valid signature on a message without learning the message itself. They have niche applications in privacy-preserving protocols such as digital cash and electronic voting. Non-interactive blind signatures (NIBS) remove the need for interaction between the signer and the user. In the post-quantum era, lattice-based NIBS schemes are studied as candidates for long-term security.

In Asiacrypt 2024, Baldimtsi et al. proposed the first lattice-based NIBS construction, whose security relies on the random one-more inhomogeneous short integer solution (rOM-ISIS) assumption, which is considered non-standard. Later, Zhang et al. introduced another lattice-based construction in ProvSec 2024 and proved its security under the standard module short integer solution (MSIS) assumption. We analyse the security of the latter scheme. In the random oracle model, we show that it fails to achieve both nonce blindness and receiver blindness. We present explicit attacks in which an adversary breaks both properties with probability 1. Our attack is based on a crucial observation that uncovers a flaw in the design: this flaw allows an attacker to link a message-signature pair with its presignature-nonce pair. In addition, we also identify a flaw in the unforgeability proof. Finally, we suggest a modification to address the issue, which is similar to the Baldimtsi et al. construction, and whose security again relies on the non-standard rOM-ISIS assumption. This work again raises the question of the feasibility of achieving NIBS from standard assumptions.
eprint.ing.bot
Security Analysis of Privately Verifiable Privacy Pass (Konrad Hanff, Anja Lehmann, Cavit Özbay) ia.cr/2025/1847
Abstract. Privacy Pass is an anonymous authentication protocol initially designed by Davidson et al. (PETS’18) to reduce the number of CAPTCHAs that Tor users must solve. It issues single-use authentication tokens with anonymous and unlinkable redemption guarantees. The issuer and verifier of the protocol share a symmetric key, and tokens are privately verifiable. The protocol has sparked interest from both academia and industry, which led to an Internet Engineering Task Force (IETF) standard. While Davidson et al. formally analyzed the original protocol, the IETF standard introduces several changes to their protocol; thus, the standardized version’s formal security remains unexamined. We fill this gap by analyzing the IETF standard’s privately verifiable Privacy Pass protocol.

In particular, there are two main discrepancies between the analyzed and standardized versions. First, the IETF version introduces a redemption context that can be used for blindly embedding a validity period into Privacy Pass tokens. We show that this variant differs significantly from the public metadata extension that has been proposed for the same purpose in the literature: the redemption context offers better privacy and security than public metadata. We capture both stronger guarantees through game-based security definitions and show that the currently considered one-more unforgeability notion for Privacy Pass is insufficient when a redemption context is used. Thus, we propose a new property, targeted context unforgeability, and prove its incomparability to one-more unforgeability. Second, Davidson et al. focused on a concrete Diffie-Hellman-based construction, whereas the IETF version is built generically from a verifiable oblivious pseudorandom function (VOPRF). Further, the analyzed protocol omitted the full redemption phase needed to prevent double-spending. We prove that the generic IETF construction satisfies the desired security and privacy guarantees covering the full life-cycle of tokens. Our analysis relies on natural security properties of VOPRFs, providing compatibility with any secure VOPRF instantiation. This enables crypto agility, e.g., allowing a switch to efficient quantum-safe VOPRFs when they become available.
eprint.ing.bot
The Order of Hashing in Fiat-Shamir Schemes (Barbara Jiabao Benedikt, Marc Fischlin) ia.cr/2025/1846
Abstract. Fiat-Shamir signatures replace the challenge in interactive identification schemes by a hash value over the commitment, the message, and possibly the signer’s public key. This construction paradigm is well known and widely used in cryptography, for example, for Schnorr signatures and CRYSTALS-Dilithium. There is, however, no “general recipe” for which inputs to hash, and in which order, since the hash function is usually modeled as a monolithic random oracle. In practice, the hash function is based on the Merkle-Damgård or the sponge design.

Our work investigates whether there are advisable or imprudent input orders for hashing in Fiat-Shamir signatures. We examine Fiat-Shamir signatures with plain and nested hashing using Merkle-Damgård or sponge-based hash functions. We analyze these constructions in both classical and quantum settings. As part of our investigations, we introduce new security properties following the idea of quantum-annoyance of Eaton and Stebila (PQCrypto 2021), called annoyance for user exposure and signature forgeries. These properties ensure that an adversary against the hash function cannot gain a significant advantage when attempting to extend a successful attack on a single signature forgery to multiple users or to multiple forgeries of a single user. Instead, the adversary must create extra forgeries from scratch. Based on our analysis, we derive a simple rule: When using Fiat-Shamir signatures, one should hash the commitment before the message; all other inputs may be ordered arbitrarily.
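The rule is easy to picture in a toy Schnorr-style scheme. The sketch below uses hypothetical toy parameters (a Mersenne prime standing in for a proper prime-order group) and is not taken from the paper; its only point is that the challenge hashes the commitment first, then the message:

```python
import hashlib
import secrets

# Toy Schnorr-style Fiat-Shamir signature illustrating the paper's rule:
# hash the COMMITMENT first, then the message. All parameters are
# hypothetical toy choices; this is a sketch, not a secure implementation.
P = (1 << 127) - 1   # Mersenne prime 2^127 - 1
N = P - 1            # exponent modulus (order of the multiplicative group)
G = 3                # toy generator

def _challenge(commitment: int, message: bytes) -> int:
    # Commitment bytes are fed into the hash before the message bytes.
    h = hashlib.sha256(commitment.to_bytes(16, "big") + message)
    return int.from_bytes(h.digest(), "big") % N

def keygen():
    x = secrets.randbelow(N - 1) + 1        # secret key
    return x, pow(G, x, P)                  # (sk, pk)

def sign(x: int, message: bytes):
    k = secrets.randbelow(N - 1) + 1        # ephemeral nonce
    commitment = pow(G, k, P)
    c = _challenge(commitment, message)
    s = (k + c * x) % N
    return c, s

def verify(y: int, message: bytes, sig) -> bool:
    c, s = sig
    # g^s * y^(-c) = g^(k + c*x) * g^(-c*x) = g^k, the original commitment.
    commitment = (pow(G, s, P) * pow(y, N - c, P)) % P
    return c == _challenge(commitment, message)
```

Note that swapping the two hash inputs in `_challenge` would still yield a correct scheme, which is exactly why the ordering question must be settled by security analysis rather than by correctness alone.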
eprint.ing.bot
HE-based On-the-Fly MPC, Revisited: Universal Composability, Approximate and Imperfect Computation, Circuit Privacy (Ganyuan Cao, Sylvain Chatel, Christian Knabenhans) ia.cr/2025/1845
Abstract. On-the-fly multi-party computation (MPC), introduced by López-Alt, Tromer, and Vaikuntanathan (STOC 2012), enables clients to dynamically join a computation without remaining continuously online. Yet the original proposal suffers from substantial efficiency and expressivity limitations that hinder practical deployment. Even though various techniques have been proposed to mitigate these shortcomings, treating on-the-fly MPC as a combination of independent building blocks jeopardizes the security of the original model.

Thus, we revisit on-the-fly MPC in light of recent advances and extend its formal framework to incorporate efficiency and expressivity improvements. Our approach is built around multi-group homomorphic encryption (MGHE), which generalizes threshold and multi-key HE and serves as the core primitive for on-the-fly MPC. Our contributions are fourfold: i) We propose new security notions for MGHE (e.g., IND-CPA with partial decryption, circuit privacy) and justify their suitability for on-the-fly MPC. ii) We present the first ideal functionality for MGHE in the Universal Composability (UC) framework and characterize the conditions under which it can be realized, via reductions to our proposed security notions. iii) We present a generic protocol that securely realizes our on-the-fly MPC functionality against a semi-malicious adversary from our MGHE functionality. iv) Finally, we provide two generic compilers that lift these protocols to withstand a fully malicious adversary by leveraging zero-knowledge arguments.

Our analysis in the UC framework enables modular protocol analysis, where more efficient schemes can be seamlessly substituted as long as they meet the required security defined by the functionalities, retaining the security guarantees offered by the original construction.
eprint.ing.bot
Bird of Prey: Practical Signature Combiners Preserving Strong Unforgeability (Jonas Janneck) ia.cr/2025/1844
Abstract. Following the announcement of the first winners of the NIST post-quantum cryptography standardization process in 2022, cryptographic protocols are now migrating to the newly standardized schemes. In most cases, this transition is realized through a hybrid approach, in which algorithms based on classical hardness assumptions, such as the discrete logarithm problem, are combined with post-quantum algorithms that rely on quantum-resistant assumptions, such as the Short Integer Solution (SIS) problem. A combiner for signature schemes can be obtained by simply concatenating the signatures of both schemes. This construction preserves unforgeability of the underlying schemes; however, it does not extend to stronger notions, such as strong unforgeability. Several applications, including authenticated key exchange and secure messaging, inherently require strong unforgeability, yet no existing combiner is known to achieve this property.

This work introduces three practical combiners that preserve strong unforgeability and all BUFF (beyond unforgeability features) properties. Each combiner is tailored to a specific class of classical signature schemes, capturing all broadly used schemes that are strongly unforgeable. Remarkably, all combiners can be instantiated with any post-quantum signature scheme in a black-box way, making deployment practical and significantly less error-prone. The proposed solutions are also highly efficient, with signatures at most the size of those of the (insecure) concatenation combiner. For instance, our most efficient combiner enables the combination of EdDSA with ML-DSA, yielding a signature size smaller than the sum of an individual EdDSA signature and an individual ML-DSA signature. Additionally, we identify a novel signature property that we call random-message validity and show that it can be used to replace the BUFF transform with the more efficient Pornin-Stern transform. The notion may be of independent interest.
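The concatenation combiner the abstract refers to is straightforward to sketch generically. In the sketch below, the `SigScheme` interface and the HMAC-based toy scheme are hypothetical stand-ins for illustration, not the paper's formalization:

```python
import hashlib
import hmac
import secrets

# Generic concatenation combiner: sign with both schemes, verify both.
# SigScheme and toy_scheme are hypothetical stand-ins for illustration.

class SigScheme:
    def __init__(self, keygen, sign, verify):
        self.keygen, self.sign, self.verify = keygen, sign, verify

def combine(classical, post_quantum):
    """Concatenation combiner: keys and signatures are simply paired."""
    def keygen():
        sk1, pk1 = classical.keygen()
        sk2, pk2 = post_quantum.keygen()
        return (sk1, sk2), (pk1, pk2)
    def sign(sk, msg):
        return classical.sign(sk[0], msg), post_quantum.sign(sk[1], msg)
    def verify(pk, msg, sig):
        return (classical.verify(pk[0], msg, sig[0])
                and post_quantum.verify(pk[1], msg, sig[1]))
    return SigScheme(keygen, sign, verify)

def toy_scheme():
    # MAC-based stand-in: the "public" key equals the secret key.
    def keygen():
        k = secrets.token_bytes(16)
        return k, k
    def sign(k, msg):
        return hmac.new(k, msg, hashlib.sha256).digest()
    def verify(k, msg, sig):
        return hmac.compare_digest(hmac.new(k, msg, hashlib.sha256).digest(), sig)
    return SigScheme(keygen, sign, verify)
```

Plain concatenation preserves existential unforgeability, but if either component scheme admits a second valid signature on the same message, an adversary obtains a fresh valid pair and strong unforgeability fails; that is the gap the paper's combiners close.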
eprint.ing.bot
Efficiency Improvements for Signal’s Handshake Protocol (Barbara Jiabao Benedikt, Sebastian Clermon, Marc Fischlin, Tobias Schmalz) ia.cr/2025/1843
Abstract. Signal’s handshake protocol non-interactively generates a shared key between two parties for secure communication. The underlying protocol X3DH, on which the post-quantum hybrid successor, PQXDH, builds, computes three to four individual Diffie-Hellman (DH) keys by combining the long-term identity keys and the ephemeral secrets of the two parties. Each of these DH operations serves a different purpose, either to authenticate the derived key or to provide forward secrecy.

We present here an improved protocol for X3DH, which we call MuDH, and an improved protocol for PQXDH, pq-MuDH. Instead of computing the individual DH keys, we run a single multi-valued DH operation for integrating all contributions simultaneously into a single DH key. Our approach is based on techniques from batch verification (Bellare et al., Eurocrypt 1998), where one randomizes each contribution of the individual keys to get a secure scheme.

The solution results in execution times of roughly 60% of the original protocol, both in theory and in our benchmarks on mobile and desktop platforms. Our modifications are confined to the key derivation step; both Signal’s server infrastructure for public key retrieval and the message flow remain unchanged.
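The batching idea can be pictured with a toy multi-exponentiation. The sketch below uses a hypothetical toy group and randomizer derivation (it is not the actual MuDH key schedule): each side raises the other's public shares to its own secrets scaled by public, transcript-derived randomizers, and folds everything into one value:

```python
import hashlib
import secrets

# Toy illustration of folding several Diffie-Hellman contributions, each
# scaled by a public transcript-derived randomizer, into one multi-valued
# DH computation. Hypothetical toy parameters; not the MuDH protocol.
P = 2039   # safe prime, P = 2*Q + 1
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup

def randomizers(transcript: bytes, n: int):
    # Both parties derive the same public scalars r_1..r_n from the transcript.
    rs = []
    for i in range(n):
        h = hashlib.sha256(transcript + i.to_bytes(2, "big")).digest()
        rs.append(1 + int.from_bytes(h, "big") % (Q - 1))
    return rs

n = 3
a = [secrets.randbelow(Q - 1) + 1 for _ in range(n)]   # Alice's secret exponents
b = [secrets.randbelow(Q - 1) + 1 for _ in range(n)]   # Bob's secret exponents
A = [pow(G, x, P) for x in a]                          # Alice's public shares
B = [pow(G, x, P) for x in b]                          # Bob's public shares
r = randomizers(b"session-transcript", n)

# One combined value instead of n separate DH keys:
k_alice = 1
for Bi, ai, ri in zip(B, a, r):
    k_alice = (k_alice * pow(Bi, ai * ri % Q, P)) % P

k_bob = 1
for Ai, bi, ri in zip(A, b, r):
    k_bob = (k_bob * pow(Ai, bi * ri % Q, P)) % P

# Both sides obtain g^(sum_i a_i * b_i * r_i), i.e. the same key material.
```

One multi-exponentiation replaces the separate per-key DH operations, which is where the running-time savings reported above come from.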
eprint.ing.bot
Collusion-Resistant Quantum Secure Key Leasing Beyond Decryption (Fuyuki Kitagawa, Ryo Nishimaki, Nikhil Pappu) ia.cr/2025/1842
Abstract. Secure key leasing (SKL) enables the holder of a secret key for a cryptographic function to temporarily lease the key using quantum information. Later, the recipient can produce a deletion certificate: a proof that they no longer have access to the secret key. The security guarantee ensures that even a malicious recipient cannot continue to evaluate the function after producing a valid deletion certificate.

Most prior work considers an adversarial recipient that obtains a single leased key, which is insufficient for many applications. In the more realistic collusion-resistant setting, security must hold even when polynomially many keys are leased (and subsequently deleted). However, achieving collusion-resistant SKL from standard assumptions remains poorly understood, especially for functionalities beyond decryption.

We improve upon this situation by introducing new pathways for constructing collusion-resistant SKL. Our main contributions are as follows:

-   A generalization of quantum-secure collusion-resistant traitor tracing called multi-level traitor tracing (MLTT), and a compiler that transforms an MLTT scheme for a primitive X into a collusion-resistant SKL scheme for primitive X.

-   The first bounded collusion-resistant SKL scheme for PRFs, assuming LWE.

-   A compiler that upgrades any single-key secure SKL scheme for digital signatures into one with unbounded collusion-resistance, assuming OWFs.

-   A compiler that upgrades collusion-resistant SKL schemes with classical certificates to ones having verification-query resilience, assuming OWFs.