
Verifiable Oblivious Pseudorandom Functions from Lattices: Practical-ish and Thresholdisable


Our paper (with Kamil Doruk Gur) “Verifiable Oblivious Pseudorandom Functions from Lattices: Practical-ish and Thresholdisable” is now available on ePrint and will appear at Asiacrypt 2024. Doruk and I started working on this together when he did his residency at SandboxAQ.

Here’s the abstract:

We revisit the lattice-based verifiable oblivious PRF construction from PKC’21 and remove or mitigate its three central sources of inefficiency. First, applying Rényi divergence arguments, we eliminate one superpolynomial factor from the ciphertext modulus q, allowing us to reduce the overall bandwidth consumed by RLWE samples by about a factor of four. This necessitates us introducing intermediate unpredictability notions to argue PRF security of the final output in the Random Oracle model. Second, we remove the reliance on the 1D-SIS assumption, which reduces another superpolynomial factor, albeit to a factor that is still superpolynomial. Third, by applying the state-of-the-art in zero-knowledge proofs for lattice statements, we achieve a reduction in bandwidth of several orders of magnitude for this material. Finally, we give a t-out-of-n threshold variant of the VOPRF for constant t and with trusted setup, based on an n-out-of-n distributed variant of the VOPRF (and without trusted setup).

The TL;DR is that we get a (plausibly) post-quantum (V)OPRF following the blueprint of [PKC:ADDS21] with about 100KB bandwidth cost for offline communication (once) and 200KB bandwidth cost for online communication (per query), if you’re happy with \lambda \approx 100. Note that we could amortise the online cost to roughly 100KB per query if several queries are sent in parallel, but we do not cost this in our work.

To explain what we did, let us begin with a high-level overview of the construction from [PKC:ADDS21] and highlight its main bottlenecks. The VOPRF construction is based on the ring instantiation of the PRF by Banerjee and Peikert [C:BanPei14]

F_k(x) = \lfloor{\frac{p}{q}\cdot \mathbf{a}^F(x) \cdot k}\rceil

where k \in \mathcal{R}_q is the key, with small coefficients represented in \{-q/2,\dots,q/2\}, and \mathbf{a}^F(x) is essentially a hash function processing the client input x. Security of the construction can be reduced to the hardness of RLWE. The construction in [PKC:ADDS21] instantiates this framework with uniformly random public vectors \mathbf{a}_0,\mathbf{a}_1 \in \mathcal{R}_q^{1 \times \ell} and a bit decomposition function G^{-1}.
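To make the rounding concrete, here is a minimal Python toy of this PRF over \mathcal{R}_q = \mathbb{Z}_q[X]/(X^n+1), with tiny, insecure parameters. The function hash_to_ring is a hypothetical stand-in for \mathbf{a}^F (the real construction chains \mathbf{a}_0, \mathbf{a}_1 and G^{-1}); the sketch only illustrates the shape of the computation.

    # Toy model of F_k(x) = round(p/q * a^F(x) * k) over Z_q[X]/(X^n + 1).
    # Tiny, insecure parameters; hash_to_ring is a placeholder for a^F.
    import hashlib

    n, q, p = 8, 2**16, 2**4  # ring degree and moduli (toy values)

    def poly_mul(a, b):
        """Multiply in Z_q[X]/(X^n + 1): X^n wraps around to -1."""
        c = [0] * n
        for i in range(n):
            for j in range(n):
                sgn = -1 if i + j >= n else 1
                c[(i + j) % n] = (c[(i + j) % n] + sgn * a[i] * b[j]) % q
        return c

    def hash_to_ring(x):
        """Hypothetical stand-in for a^F(x): hash x to a ring element."""
        h = hashlib.shake_128(x.encode()).digest(2 * n)
        return [int.from_bytes(h[2 * i:2 * i + 2], "little") % q for i in range(n)]

    def prf(k, x):
        """F_k(x): multiply by the key, then round from mod q to mod p."""
        return [round(p * c / q) % p for c in poly_mul(hash_to_ring(x), k)]

    k = [1, 0, -1, 1, 0, 0, -1, 1]  # small secret key in R
    print(prf(k, "hello"))

Given a public \mathbf{a} \in \mathcal{R}_q^{1 \times \ell}, the high-level protocol is then: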

  1. The server publishes a commitment \mathbf{c} := \mathbf{a} \cdot k + \mathbf{e} \bmod q to a small key k \in \mathcal{R}, for a small \mathbf{e} \in \mathcal{R}^{1 \times \ell}.
  2. For input x, the client chooses a small s \in \mathcal{R} and \mathbf{e}_{\mathsf{C}} \in \mathcal{R}^{1 \times \ell}, and computes \mathbf{c}_x := \mathbf{a} \cdot s + \mathbf{e}_{\mathsf{C}} + \mathbf{a}^F(x) \bmod q.
  3. Using k, the server sends \mathbf{d}_x := \mathbf{c}_x \cdot k + \mathbf{e}_{\mathsf{S}} \bmod q for small \mathbf{e}_{\mathsf{S}} \in \mathcal{R}^{1 \times \ell}.
  4. The client finally outputs \mathbf{y} = \lfloor{\frac{p}{q} \cdot (\mathbf{d}_x - \mathbf{c}\cdot s)}\rceil.
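The four steps translate almost line by line into code. The following continues the toy sketch above (a single ring element instead of length-\ell vectors, and uniform ternary noise as a crude stand-in for the actual secret and error distributions) and checks that the client’s unblinded, rounded output matches the PRF value:

    import random

    def small_poly():
        """Toy stand-in for the small secret/error distributions."""
        return [random.randint(-1, 1) for _ in range(n)]

    def poly_add(a, b):
        return [(x + y) % q for x, y in zip(a, b)]

    def poly_sub(a, b):
        return [(x - y) % q for x, y in zip(a, b)]

    a = [random.randrange(q) for _ in range(n)]  # public; here l = 1

    # 1. Server commits to its small key k.
    k, e = small_poly(), small_poly()
    c = poly_add(poly_mul(a, k), e)

    # 2. Client blinds a^F(x) with a fresh RLWE sample a*s + e_C.
    s, e_C = small_poly(), small_poly()
    c_x = poly_add(poly_add(poly_mul(a, s), e_C), hash_to_ring("hello"))

    # 3. Server multiplies the blinded input by its key and adds noise.
    e_S = small_poly()
    d_x = poly_add(poly_mul(c_x, k), e_S)

    # 4. Client unblinds with c*s and rounds the remaining noise away:
    #    d_x - c*s = a^F(x)*k + e_C*k + e_S - e*s.
    y = [round(p * v / q) % p for v in poly_sub(d_x, poly_mul(c, s))]
    print(y == prf(k, "hello"))  # True, except near rounding boundaries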

Since \mathbf{d}_x = \mathbf{a} \cdot s \cdot k + \mathbf{a}^F(x) \cdot k + \mathbf{e}_{\mathsf{C}} \cdot k + \mathbf{e}_{\mathsf{S}}, if \mathbf{e}_{\mathsf{S}} is chosen from a distribution that hides the presence of the additive terms \mathbf{e}_{\mathsf{C}} \cdot k and \mathbf{e} \cdot s and the absence of the additive term \mathbf{e}_x (which follows some narrow distribution \varepsilon_{\mathbf{a}_0,\mathbf{a}_1,x,\sigma}), then it is indistinguishable from \mathbf{d}_x' = (\mathbf{a} \cdot k + \mathbf{e}) \cdot s + \mathbf{e}_{\mathsf{S}} + (\mathbf{a}^F(x)\cdot k + \mathbf{e}_x) = \mathbf{c}\cdot s + (\mathbf{a}^F(x)\cdot k + \mathbf{e}_x) + \mathbf{e}_{\mathsf{S}}. Then, if \mathbf{e}_x is chosen from a proper distribution [C:BanPei14], \mathbf{a}^F(x)\cdot k + \mathbf{e}_x, and consequently \mathbf{d}_x, leak nothing about k by the RLWE assumption. Similarly, if s is chosen from a proper RLWE secret distribution and \mathbf{e}_{\mathsf{C}} is from a discrete Gaussian, the client message \mathbf{c}_x = \mathbf{a} \cdot s + \mathbf{e}_{\mathsf{C}} + \mathbf{a}^F(x) is also indistinguishable from uniform by RLWE.

Correctness holds with high probability, regardless of the choice of k, by the one-dimensional short integer solution (1D-SIS) assumption [TCC:BraVai15]. Verifiability is then achieved with the help of non-interactive zero-knowledge arguments of knowledge (NIZKAoKs) showing that \mathbf{c}, \mathbf{c}_x, and \mathbf{d}_x are computed correctly.

The above construction is intuitive in that it follows well-established pre-quantum Diffie-Hellman blueprints. Moreover, its simple algebraic nature (and instantiation in the standard model, except potentially for the zero-knowledge proofs) allows for extensions such as threshold variants.

However, the concrete instantiation is highly inefficient, for three reasons.

First, the correctness of the PRF adds a superpolynomial factor to the modulus q to ensure correct rounding, which in the end results in large parameters. Indeed, to thwart adversaries that maliciously sample k such that \mathbf{a}^{F}(x) \cdot k produces a rounding error for a target value x, [PKC:ADDS21] relies on the 1D-SIS assumption just mentioned. This assumption requires q \gg 2^{2 \lambda}, i.e. more than what we would naively expect to need for correct rounding with overwhelming probability.
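A toy numeric example of the problem (mine, not from the paper): a value placed just below a rounding boundary flips depending on the protocol noise, which is exactly what a malicious k steering \mathbf{a}^{F}(x) \cdot k towards a boundary would exploit.

    # If a^F(x)*k lands near a boundary of round(p/q * .), the small
    # protocol noise decides the output. Toy moduli.
    q, p = 2**16, 2**4
    v = q // (2 * p) - 1     # just below the first rounding boundary
    for e in (-2, 0, 2):     # small noise on either side
        print(e, round(p * (v + e) / q) % p)
    # -2 and 0 round to 0, +2 rounds to 1: the output depends on the noise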

Second, to hide the additive terms \mathbf{e}_{\mathsf{C}} \cdot k, \mathbf{e} \cdot s and \mathbf{e}_x, the noise \mathbf{e}_{\mathsf{S}} has to be superpolynomially larger in norm than these terms. This is what allows an argument based on statistical distance to go through.

Third, the NIZKAoKs required for verifiability and for protection against malicious clients add further overhead, as these relations involve non-trivial statements. In particular, the proof that \mathbf{c}_x is correctly computed has to show that \mathbf{c}_x indeed contains \mathbf{a}^F(x) without revealing the secrets x, s, or \mathbf{e}_{\mathsf{C}}. Since \mathbf{a}^F(x) is highly irregular, with calls to bit decompositions and two different public vectors, [PKC:ADDS21] used the NIZKAoK construction from [C:YAZXYW19], which proves rank-1 constraint systems (R1CS) over \mathbb{Z}_q, breaking the native structure of the protocol. Combined with the large parameters, this results in bandwidth costs in the gigabytes.

So, on to what we actually did.

First, we avoid relying on the 1D-SIS assumption by borrowing a trick from the non-interactive key exchange in [USENIX:GKQMS24]. Instead of defining the PRF output as \lfloor\frac{p}{q} \cdot ( \mathbf{a}^{F}(x) \cdot k)\rceil, we define it as \lfloor\frac{p}{q} \cdot (\mathbf{a}^{F}(x) \cdot k + \mathbf{r})\rceil, where \mathbf{r} is the output of a Random Oracle called on x and \mathbf{c}: \mathbf{r} := \mathsf{H}_{\mathbf{r}}(x, \mathbf{c}). In the Random Oracle model, \mathbf{r} is independent of k and thus \lfloor\frac{p}{q} \cdot (\mathbf{a}^{F}(x) \cdot k + \mathbf{r} + \mathbf{e}_{\mathsf{C}} \cdot k + \mathbf{e}_{\mathsf{S}})\rceil rounds to the correct value \lfloor\frac{p}{q} \cdot ( \mathbf{a}^{F}(x) \cdot k + \mathbf{r}) \rceil with probability \approx 1- \|\mathbf{e}_{\mathsf{C}} \cdot k + \mathbf{e}_{\mathsf{S}}\|_{\infty}/(q/p). This still requires a superpolynomial gap between q and \|\mathbf{e}_{\mathsf{C}} \cdot k + \mathbf{e}_{\mathsf{S}}\|_{\infty}, but this gap is comparable to that in the semi-honest setting of [PKC:ADDS21].
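A quick Monte Carlo check of that correctness claim, with toy parameters: \mathbf{r} is modelled as a uniform value and B as a worst-case bound on \|\mathbf{e}_{\mathsf{C}} \cdot k + \mathbf{e}_{\mathsf{S}}\|_{\infty}; the adversarial v from the example above no longer helps.

    # With r uniform, v + r lands within B of a rounding boundary with
    # probability ~ B/(q/p), independently of how v = a^F(x)*k was chosen.
    import random

    q, p, B = 2**16, 2**4, 17      # toy moduli and noise bound
    v = q // (2 * p) - 1           # the adversarial value from above
    trials, fails = 100_000, 0
    for _ in range(trials):
        r = random.randrange(q)    # models r := H_r(x, c)
        u, w = (v + r) % q, (v + r + B) % q  # without/with worst-case noise
        if round(p * u / q) % p != round(p * w / q) % p:
            fails += 1
    print(fails / trials, B / (q / p))  # both ~ 0.004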

Second, we change how we analyse \mathbf{e}_{\mathsf{S}} and remove the superpolynomial dependency on the norm of the additive terms. To achieve this, we use an approach based on Rényi divergence instead of statistical distance. However, for this, we have to replace the simulation-based security notion in the standard model of [PKC:ADDS21] with a game-based notion in the Random Oracle model. In more detail, except in rather particular circumstances, we cannot apply Rényi divergence arguments to decision problems [JC:BLRSSS18]. To work around this, we first show that our construction based on [PKC:ADDS21] achieves a notion of unpredictability, which we then upgrade to PRF security (which is trivial to achieve in the ROM). Overall, this leads to a bandwidth improvement of roughly an order of magnitude when compared with the semi-honest parameters of [PKC:ADDS21] (and without the NIZKAoK).
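To see what the Rényi divergence buys here, consider a small numeric illustration (mine, not from the paper): compare a discrete Gaussian of width \sigma to a copy shifted by t. The statistical distance decays only like t/\sigma, so statistical flooding needs \sigma \approx t \cdot 2^{\lambda}; the Rényi divergence of order a behaves like \exp(a \pi t^2/\sigma^2) and is already close to 1 when \sigma is only polynomially larger than t.

    # Statistical distance vs. Renyi divergence between a discrete
    # Gaussian (density ~ exp(-pi x^2 / sigma^2)) and its shift by t.
    import math

    def dgauss(sigma, center, support):
        ps = [math.exp(-math.pi * (x - center) ** 2 / sigma**2) for x in support]
        total = sum(ps)
        return [v / total for v in ps]

    t, sigma = 100, 10_000
    support = range(-10 * sigma, 10 * sigma + 1)
    P, Q = dgauss(sigma, t, support), dgauss(sigma, 0, support)

    sd = sum(abs(x - y) for x, y in zip(P, Q)) / 2
    order = 2  # Renyi order
    rd = sum(x**order / y ** (order - 1) for x, y in zip(P, Q)) ** (1 / (order - 1))
    print(sd, rd)  # SD ~ 1e-2 (far from negligible), RD_2 ~ 1.0006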

Third, we replace the NIZKAoK of [C:YAZXYW19] with that from [C:LyuNguPla22], compressed using LaBRADOR [C:BeuSei23], and work natively with lattice statements over larger rings \mathcal{R}_{q}. This improves bandwidth by several orders of magnitude.

Compared with [PKC:ADDS21], our work allows for practical-ish parameters. Compared with [EC:ADDG24], our bandwidth requirements are smaller if few evaluations are required. In terms of computational burden, note that [EC:ADDG24] has an expensive computation on the server side (TFHE bootstrapping) whereas we have an expensive computation on the client side (proving well-formedness with a complex statement). I’d say neither of these two constructions deserves to be called practical, for their computational burden alone.

Finally, we extend the functionality of the VOPRF and build multiparty protocols. We consider n-out-of-n and t-out-of-n threshold VOPRFs, in which n servers jointly hold the key and n (respectively t) servers are required to evaluate an input x and generate the output. The n-out-of-n construction is immediate from the key-homomorphic properties of the VOPRF. To achieve the more interesting t-out-of-n setting, we exploit that in the VOPRF setting we expect t to be quite small, i.e. constant. Moreover, we assume a trusted setup. While this is a significant limitation of this work, we think the assumption is justified in the VOPRF setting, where one entity may aim to avoid single points of failure, rather than multiple parties coming together to, say, validate some statement, as in the threshold signature setting. In our approach, we essentially output \binom{n}{t} copies of the n-out-of-n setting. We use rejection sampling to enforce that these are all well-distributed. To achieve verifiability in the t-out-of-n case, we rely on an additional cut-and-choose type argument so that we can use weaker NIZKAoKs.
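For the n-out-of-n case, the key homomorphism makes the construction almost mechanical. Continuing the toy protocol sketch from above (same helper functions and toy parameters; the noise distributions are again crude stand-ins), each server holds a share k_i of k = \sum k_i, answers the same blinded query, and the client sums the replies before unblinding:

    # n-out-of-n sketch: the joint key k = k_1 + ... + k_n is never
    # materialised; commitments and partial evaluations simply add up.
    from functools import reduce

    n_srv = 3
    shares = [small_poly() for _ in range(n_srv)]
    k_joint = reduce(poly_add, shares)

    # Each server commits to its own share; the joint commitment is the sum.
    c_joint = reduce(poly_add, [poly_add(poly_mul(a, k_i), small_poly())
                                for k_i in shares])

    # Partial evaluations of the client's blinded query c_x from above.
    d_parts = [poly_add(poly_mul(c_x, k_i), small_poly()) for k_i in shares]
    d_joint = reduce(poly_add, d_parts)  # = c_x * k_joint + summed noise

    y = [round(p * v / q) % p for v in poly_sub(d_joint, poly_mul(c_joint, s))]
    print(y == prf(k_joint, "hello"))  # True, up to the larger rounding noise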

