https://infossm.github.io/blog/2023/07/14/Monolith/

Hash Functions Monolith for ZK Applications: May the Speed of SHA-3 be With You

Introduction

I participated in the first cohort of the Axiom Open Source Program. After studying and being fascinated by the theory of ZK-friendly hashes, I decided to implement some of them for this program. My target for implementation was the newly (at the time) developed Poseidon2 hash function. After a while, I kept thinking about what to do next, whether to keep implementing more hashes or go in a different direction. At this point, my friend asked me about an interesting puzzle, and it went like this.

Suppose a password-based key management system stores the user's key as $E(pw, K)$. Suppose the user now wants to change the password into $pw'$, so the storage should change to $E(pw', K)$. How should the system verify that this new value is still an encryption of $K$, without knowing $pw, pw', K$ at all?

 

This was a very interesting and real-world puzzle - and a bit of searching led to the theory of verifiable encryption, where a certain property is proved over an encrypted plaintext. It's also clear that ZKP can give us a solution here.

 

By letting the system additionally store $Hash(K)$, we can change this problem to

 Prove that the user knows $pw, K$ such that $Hash(K) = A$ and $E(pw, K) = B$ where $A, B$ are stored on the system.

 

Having selected Poseidon2 as the hash function, I was left with choosing $E$ - and I decided on AES. For simplicity, I chose AES-ECB.

I also decided to use pure halo2-lib as much as possible - I had already implemented Poseidon2 in halo2-lib at the time, and mixing vanilla halo2 with Axiom's halo2-lib is definitely not an easy task.

 

Implementing Poseidon2 

To discuss the implementation aspects of Poseidon2, we first need to look at how Poseidon and Poseidon2 work.

Roughly speaking, both hash functions use a sponge construction, which means that the hash is built on top of a permutation. Poseidon has a width parameter $t$, meaning the permutation is a map $\mathbb{F}_p^t \rightarrow \mathbb{F}_p^t$. To build this permutation, Poseidon uses three types of layers - round constant addition, the MDS matrix linear layer, and the S-box layer.

 

The round constant addition layer is straightforward - it simply adds a round constant to each element. 

The MDS matrix linear layer is also straightforward - it's a matrix multiplication. The "MDS" part is a property of the matrix needed for the security analysis, but for implementation/understanding purposes it's not very important.

The S-box layer is $S(x) = x^\alpha$, where $\alpha$ is the smallest integer greater than $1$ such that $\gcd(\alpha, p - 1) = 1$. For BN254, this gives $\alpha = 5$.

 

The most interesting part of Poseidon is the distinction between full rounds and partial rounds. The idea is that not every round needs to apply the S-box to every element of the state. Instead, we can use partial rounds, which apply the S-box to only a single element of the state. By running $R_f = R_F / 2$ full rounds, then $R_P$ partial rounds, then $R_f = R_F / 2$ full rounds, we maintain security while saving many S-boxes, leading to a more efficient hash function. A rough sketch of this round structure follows.
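
This is a minimal native sketch in Python, not the circuit implementation; the round constants RC, the linear-layer matrix M, and the prime p are placeholders to be filled in.

# Minimal sketch of the Poseidon round structure (native computation, not the circuit).
# RC[r] is the list of t round constants for round r, M is the t x t linear layer,
# and alpha = 5 for BN254. All of these are placeholders here.
def sbox(x, p, alpha=5):
    return pow(x, alpha, p)

def matmul(M, state, p):
    return [sum(M[i][j] * state[j] for j in range(len(state))) % p for i in range(len(state))]

def permutation(state, RC, M, p, R_F, R_P):
    R_f = R_F // 2
    for r in range(R_F + R_P):
        state = [(s + c) % p for s, c in zip(state, RC[r])]
        if R_f <= r < R_f + R_P:
            state[0] = sbox(state[0], p)            # partial round: S-box on one element
        else:
            state = [sbox(s, p) for s in state]     # full round: S-box on every element
        state = matmul(M, state, p)
    return state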

So what's the difference between Poseidon and Poseidon2? There are some subtle differences, but the main one lies in the linear layer. The matrices are generated differently, for better native runtime and lower cost in terms of the ZKP. Poseidon2 also uses different matrices for the external (full) rounds and the internal (partial) rounds.

As Poseidon is already implemented in Axiom's halo2-lib, all I needed to do was implement these differences. 

 

Grain LFSR & The Parameter Generation

The first part is the parameter generation algorithm. For Poseidon, this is implemented in halo2/primitives/poseidon.

The parameters for the round constants and the matrices are generated with a Grain LFSR, whose initial state is derived from basic parameters such as $R_F, R_P$. Due to the different matrix format between Poseidon and Poseidon2, the generation algorithm itself is also quite different. I implemented the same algorithm as the Horizen Labs implementation, following their repository.
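
For reference, here is a rough Python sketch of the Grain LFSR keystream used for parameter generation, following my reading of the Poseidon reference; the exact initialization bit layout and bit ordering should be double-checked against the Horizen Labs code.

# Rough sketch of the Grain LFSR behind Poseidon/Poseidon2 parameter generation.
# The 80-bit initial state packs (field type, S-box type, field size n, t, R_F, R_P)
# followed by 30 ones; the widths below are my reading of the reference and should be verified.
def grain_init(field_bits, sbox_bits, n, t, R_F, R_P):
    bits = []
    for value, width in [(field_bits, 2), (sbox_bits, 4), (n, 12), (t, 12), (R_F, 10), (R_P, 10)]:
        bits += [(value >> (width - 1 - i)) & 1 for i in range(width)]
    return bits + [1] * 30     # 80 bits total

def grain_stream(state):
    def step():
        # feedback: b_{i+80} = b_{i+62} ^ b_{i+51} ^ b_{i+38} ^ b_{i+23} ^ b_{i+13} ^ b_i
        new = state[62] ^ state[51] ^ state[38] ^ state[23] ^ state[13] ^ state[0]
        state.pop(0)
        state.append(new)
        return new
    for _ in range(160):       # discard the first 160 output bits
        step()
    while True:                # then take bits in pairs: keep the second bit iff the first is 1
        b1, b2 = step(), step()
        if b1 == 1:
            yield b2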


There is one interesting part of the matrix generation algorithm that is common to both Poseidon and Poseidon2: the test for so-called invariant subspace trails. The details of why this matters and how to test for it are beyond the scope of this post, but interested readers should dive into the cryptanalysis literature on Poseidon. What it means in practice is that we sometimes need to re-generate the matrices when a generated matrix fails this check.

However, implementing this check in Rust is quite time consuming, as it involves computing minimal polynomials of matrices. Therefore, I hardcoded the number of tries it takes to reach a matrix that passes the necessary checks. The unfortunate consequence is that the implementation is not fully generic, as it assumes we are working over the BN254 scalar field. Given a Rust library for minimal polynomials of matrices, this could be written to be generic over any prime field.

 

Implementation of Matrix Layers

While there are many optimization tricks in Poseidon, many of them are not relevant in Poseidon2. The main trick in Poseidon2 is that the matrices are designed to be easy to multiply, both in native computation and in the ZKP world. The overall implementation strategy was taken from the Horizen Labs implementation. These strategies are also described in the appendix of the Poseidon2 paper.

The main operation used to implement these matrix layers is `mul_add` from the `GateInstructions` trait.
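
As an illustration, here is a native Python sketch of the efficient multiplication by the $4 \times 4$ matrix $M_4$ from the appendix of the Poseidon2 paper, using only additions and doublings; in the circuit, each of these steps becomes a small number of gate operations such as mul_add.

# Native sketch: multiply a 4-element chunk of the state by the Poseidon2 matrix M_4
# with additions and doublings only (following the appendix of the Poseidon2 paper).
def mul_m4(x, p):
    x0, x1, x2, x3 = x
    t0 = (x0 + x1) % p
    t1 = (x2 + x3) % p
    t2 = (2 * x1 + t1) % p
    t3 = (2 * x3 + t0) % p
    t4 = (4 * t1 + t3) % p
    t5 = (4 * t0 + t2) % p
    t6 = (t3 + t5) % p
    t7 = (t2 + t4) % p
    return [t6, t5, t7, t4]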

 

 

Interesting Issues in the Horizen Labs Implementation

During the implementation process, I found some very interesting issues/points in the Horizen Labs implementation. This is a bit awkward, since the Horizen Labs implementation is the reference implementation after all, and it is the one mentioned in the Poseidon2 paper itself. Therefore, the questions below may serve little to no purpose. With that in mind, here they are.

 

The first one concerns the Grain LFSR parameters. In the Poseidon parameter generation, the S-box parameter is set to 0 if the S-box is $x^\alpha$ for small positive $\alpha$, and to 1 if the S-box is $x^{-1}$. In the Poseidon2 parameter generation, it's the opposite - the S-box parameter is set to 1 even though the S-box is $x^\alpha$.

 

The second one is in the plain implementation itself. In the Poseidon2 parameter generation, it's clear that the external matrix in the case $t = 4$ should simply be $M_4$. However, the plain implementation uses $2M_4$ as the external matrix. This happens because the matrix for $t = 4t'$ with $t' \ge 2$ is the block-circulant matrix $circ(2M_4, M_4, \cdots, M_4)$, and the implementation forgot to special-case $t = 4$.
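
In block form, the intended construction looks roughly like the following sketch, with the $t = 4$ case handled explicitly (the $M_4$ entries are the ones given in the Poseidon2 paper):

# Sketch of building the Poseidon2 external matrix from 4x4 blocks:
# diagonal blocks are 2*M_4 and off-diagonal blocks are M_4, except that
# for t = 4 the external matrix is just M_4 itself.
M4 = [[5, 7, 1, 3],
      [4, 6, 1, 1],
      [1, 3, 5, 7],
      [1, 1, 4, 6]]

def external_matrix(t):
    assert t == 4 or (t % 4 == 0 and t >= 8)
    if t == 4:
        return M4
    M = [[0] * t for _ in range(t)]
    for bi in range(t // 4):
        for bj in range(t // 4):
            scale = 2 if bi == bj else 1
            for i in range(4):
                for j in range(4):
                    M[4 * bi + i][4 * bj + j] = scale * M4[i][j]
    return M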

 

This issue has now been fixed in the Poseidon2 repository.

 

Implementing AES

AES-ECB is, well, AES-ECB. Looking at a pure Python implementation like this one, we see that we need to implement the SBOX, byte XOR operations, the "xtime" operation, and a byte range check. The rest is straightforward implementation.
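
For reference, the byte-level building blocks look like this natively (standard AES definitions, plain Python rather than circuit code):

# Native byte-level AES helpers (standard definitions).
def xtime(x):
    # multiplication by 2 in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1
    x <<= 1
    return (x ^ 0x1B) & 0xFF if x & 0x100 else x

def byte_xor(a, b):
    return a ^ b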

 

Implementing the SBOX

There are three ways to proceed here. 

- Use a lookup table of size $2^8$

- Create an SBOX table as a witness, then use Axiom's "select_from_idx"

- Implement the $GF(2^8)$ arithmetic and the affine transformation on $\mathbb{F}_2^8$

 

The third option seemed way too complex, so the initial implementation used the second option. However, as you might expect, this is very inefficient, so a lookup table had to be used. The issue is that using pure Axiom halo2-lib together with lookup tables is quite non-trivial, especially if multiple tables are needed. To use a lookup table, I followed the methodology of the RangeChip and the RangeCircuitBuilder - practically copy-pasting everything except the actual lookup table contents. I added $0$ and the values $256 \cdot (x + 1) + S(x)$ for all bytes $x$ to the lookup table $T$. Then, $y = S(x)$ is forced once $x, y$ are both known to be within $[0, 256)$ and $256 \cdot (x + 1) + y \in T$.
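
A native sketch of this encoding, with AES_SBOX standing in for the standard 256-entry AES S-box table:

# The table T contains 0 and the values 256*(x+1) + S(x) for every byte x.
# If x and y are separately known to be bytes, then 256*(x+1) + y being in T forces y = S(x).
def build_sbox_table(AES_SBOX):
    return {0} | {256 * (x + 1) + AES_SBOX[x] for x in range(256)}

def sbox_claim_holds(T, x, y):
    assert 0 <= x < 256 and 0 <= y < 256   # the byte range checks are enforced separately in-circuit
    return 256 * (x + 1) + y in T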

 

 

Implementing Byte XORs and "xtime"

There are two ways to continue here. 

- Again, use a lookup table

- Decompose everything as bits, then use bit XORs to implement byte operations

 

At first, I implemented it the second way. A bit XOR can be implemented with a NOT gate and a select gate: $a \oplus b$ equals $\lnot b$ when $a = 1$ and $b$ when $a = 0$, which is exactly a select.

 

However, I turned to using a lookup table in hopes of optimizing the circuit. I added the values $2^{24} + 2^{16} \cdot a + 2^8 \cdot b + (a \oplus b)$ for all byte pairs $(a, b)$ to the lookup table - and under the assumption that $a, b, c \in [0, 256)$, the check $2^{24} + 2^{16} \cdot a + 2^8 \cdot b + c \in T$ is enough to force $c = a \oplus b$.

 

The same goes for the xtime operation. I added the values $2^{25} + 2^8 \cdot x + \text{xtime}(x)$ for all bytes $x$ to the lookup table, and under the assumption that $a, b \in [0, 256)$, the check $2^{25} + 2^8 \cdot a + b \in T$ is enough to force $b = \text{xtime}(a)$.
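
Putting the three encodings together, the single table looks roughly like the following native sketch (using the xtime helper from earlier); the S-box, XOR and xtime entries live in disjoint key ranges, which is why one table suffices.

# Combined lookup table sketch: S-box keys lie below 2^24, XOR keys in [2^24, 2^25),
# and xtime keys in [2^25, 2^25 + 2^16). AES_SBOX is again the standard table.
def build_table(AES_SBOX):
    T = {0}
    T |= {256 * (x + 1) + AES_SBOX[x] for x in range(256)}
    T |= {2**24 + 2**16 * a + 2**8 * b + (a ^ b) for a in range(256) for b in range(256)}
    T |= {2**25 + 2**8 * x + xtime(x) for x in range(256)}
    return T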

 

Implementing the Byte Range Check

There are two ways to proceed here.

- Use a lookup table

- Decompose the byte to 8 bits

 

The issue with the first approach is that we are currently using a single lookup table, and the checks built on it so far all assume that the values involved are already within $[0, 256)$. Therefore, performing the byte range checks with the same lookup table (unless we somehow manage to use multiple tables) risks circular reasoning. I simply used the num_to_bits function of Axiom's halo2-lib to check that the values fit in 8 bits. This is indeed quite costly, and is the main remaining optimization.
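
Conceptually, this is what the bit-decomposition range check enforces, sketched natively below: each bit is constrained to be boolean and the weighted sum must recompose to the original value.

# Native sketch of an 8-bit range check via bit decomposition:
# x is a byte iff there exist bits b_0..b_7 with b_i in {0, 1} and x = sum b_i * 2^i.
def byte_check(x):
    bits = [(x >> i) & 1 for i in range(8)]                # witness bits
    assert all(b * (b - 1) == 0 for b in bits)             # booleanity constraints
    assert sum(b << i for i, b in enumerate(bits)) == x    # recomposition constraint
    return bits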

 

 

Final Benchmarks

Taking the numbers directly from the final presentation, we see that Poseidon2 is better in ZKP terms when the width $t$ is large. This is natural - Poseidon2's dominant advantage usually comes in native computation, and its ZKP cost improves as $t$ grows, where the special structure of its matrices becomes more and more helpful in decreasing the cost. In a way, this benchmark agrees with the paper.

 

For AES, we see that a single AES-128 block costs around 66k cells, so roughly 6k cells per AES round.

If we could use multiple lookup tables, we could remove the 8-bit decomposition check and get better performance.


https://github.com/rkm0959/rkm0959_presents/blob/main/ZKApplications.pdf

https://github.com/rkm0959/rkm0959_presents/blob/main/Polynomials_EllipticCurve.pdf

 

A very long story - it started when Brian Gu told me about DARK back in 2021. More recently, @theoremoon wrote the challenge "Hell" for SECCON CTF Finals 2022. It involved some hyperelliptic curves and was quite interesting. Let's look at that challenge first.

 

import os
 
flag = os.environ.get("FLAG", "neko{the_neko_must_fit_to_the_hyperelliptic}")
p = random_prime(2 ** 512)
 
xv = randint(0, p-1)
yv = int(flag.encode().hex(), 16)
 
assert yv < p
 
g = 2
PR.<x> = PolynomialRing(GF(p))
f = sum(randint(0, p-1)*x**i for i in range(2*g + 1 + 1))
F = f + (yv**2 - f.subs({x: xv}))
 
HC = HyperellipticCurve(F, 0)
J = HC.jacobian()(GF(p))
 
D = J(HC((xv, yv)))
print(f"p = {p}")
for i in range(25):
    k = i*(i+1)
    print(k*D)

 

Ultimately, we are given $p$ and the Jacobian elements $6D, 12D, 20D$ (among other multiples of $D$). From these, we need to find the coordinates of $D$.

To solve this, first note that a Mumford representation $(u, v)$ must satisfy $v^2 \equiv f \pmod{u}$.

As this is genus 2, each $\deg u$ is $2$ while $\deg f$ is $5$, so the three congruences from $6D, 12D, 20D$ determine $f$ modulo a degree-6 polynomial. This means that we can recover $f$ via CRT.

 

After recovering $f$, we can compute the Mumford representation of $2D = 20D - 12D - 6D$. Here, the $u(x)$ part of the Mumford representation of $2D$ will simply be $(x - xv)^2$ by the usual doubling formula. This recovers $xv$, and evaluating the $v$ part at $xv$ gives $yv$.
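
A Sage-style sketch of this solution, assuming the printed Mumford pairs have already been parsed into polynomials $(u_6, v_6), (u_{12}, v_{12}), (u_{20}, v_{20})$ over $GF(p)$ (the variable names are hypothetical, and the way the Mumford polynomials of a Jacobian element are accessed may need minor adjustment):

# Recover the curve polynomial via CRT, then read (xv, yv) off 2D = 20D - 12D - 6D.
PR.<x> = PolynomialRing(GF(p))
F = CRT_list([v6**2 % u6, v12**2 % u12, v20**2 % u20], [u6, u12, u20])

HC = HyperellipticCurve(F, 0)
J = HC.jacobian()(GF(p))
D2 = J([u20, v20]) - J([u12, v12]) - J([u6, v6])   # this is 2D
u2, v2 = D2[0], D2[1]                              # Mumford polynomials of 2D
xv = -u2[1] / 2                                    # u2 = (x - xv)^2, so its x-coefficient is -2*xv
yv = int(v2(xv))
print(yv.to_bytes((yv.bit_length() + 7) // 8, "big"))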


So while the challenge was fun, I thought that it didn't venture into the whole "recover $D$ from $2D$" part. Another thing was that at first, I didn't realize that $(x - xv)^2$ would be the $u$-polynomial of $2D$. This led me to search for methods to actually compute the order of the Jacobian. After some quick tries, I realized that for hyperelliptic curves of genus 2 and above, computing the order is quite a difficult task. You can read more about this in papers like https://eprint.iacr.org/2020/289.pdf

 

So dividing by 2 shouldn't be entirely trivial. Let's think about genus 2. A reduced divisor is of the form $[P] - [O]$ or $[P] + [Q] - 2[O]$, and we would be given the divisor multiplied by 2. The $[P] - [O]$ case is the easy one from the SECCON challenge. How about the latter? In that case we are handed $2[P] + 2[Q] - 4[O]$, which is clearly not reduced, so we need to reduce it.

 

A good explainer is presented in "Pairings For Beginners", Section 3.2. You set up a polynomial $g$ such that $g$ meets $f$ at $P$ with multiplicity $2$ and at $Q$ with multiplicity $2$. This amounts to 4 constraints, so $g$ should be degree 3. Now we get something like $$g^2 - f = C (x - p)^2 (x - q)^2 (x^2 + ax + b)$$ For the sake of explanation, let's just say that $x^2 + ax + b$ splits and we have $$g^2 - f = (x - p)^2 (x - q)^2 (x - r) (x - s)$$ This means that $$2[P] + 2[Q] + [R] + [S] - 6[O]$$ is a principal divisor, so in the Jacobian we have $$2[P] + 2[Q] - 4[O] = 2[O] - [R] - [S]$$ which is now practically reduced. This means that $x^2 + ax + b$ will be (up to sign) the $u(x)$ of the Mumford representation of $2[P] + 2[Q] - 4[O]$, i.e. the polynomial we are already given.

So basically we would be solving something like $$g^2 - f = C (x - p)^2 (x - q)^2 u(x)$$ which looks relatively doable with the whole resultants machinery. It was, and I'll explain the details later.

 

 


The next question for me was dividing-by-2 in genus 3, rather than dividing-by-3 in genus 2.

You actually need to reduce two times here, so following the same approach we get something like $$g^2 - f = C_1(x-p)^2(x-q)^2(x-r)^2 T(x)$$ $$h^2 - f = C_2T(x) u(x)$$ and therefore something like $$(g^2 - f) u(x) = C(h^2 - f) (x-p)^2(x-q)^2(x-r)^2$$ where $g$ is degree 5 and $h$ is degree $3$. This is quite a lot of variables, so even after optimizing as hard as I could, I couldn't get the algorithm to run in SageMath at all. In the end I gave up, and decided to instead ask solvers to recover $P$ when given $5[P] - 5[O]$.

 

This requires a single reduction - take a $g$ of degree 4 that meets $f$ at $P$ with multiplicity 5. Then $$g^2 - f = C(x-p)^5 u(x)$$ holds, so this is a relatively manageable system that SageMath can solve in reasonable time. Once again, I'll explain the details later.


Before we dive into the PBCTF challenge, let's look into the whole dividing-by-2 situation in genus 3 hyperelliptic curves.

At first I thought it would just be a cool challenge, but it turned out that it had some interesting background. 

 

It turns out that the previously noted fact - that the order of a hyperelliptic curve's Jacobian is quite hard to compute - has made it a candidate for a hidden order group. Hidden order groups are used in various parts of cryptography; the most common one we all know is the RSA group $\mathbb{Z}_N^\star$. There are various assumptions (see Alin Tomescu's blog post) and various cryptographic primitives based on those assumptions - some examples are VDFs (see [BBF18]) and integer-based zero knowledge proofs (see DARK [BFS19]). One popular choice for such a group is obvious - the RSA group itself, sometimes reduced to something like $QR_N / \{\pm 1\}$. However, selecting $N$ requires either a trusted third party or a ridiculously large $N$ (see Sander's paper), which adds concerns. The goal now is trustless hidden order groups - and this is where class groups of imaginary quadratic fields come in. Apparently simply choosing a prime $p$ is enough - and nobody will be able to compute the order. Many papers based on hidden order groups mention class groups.

 

Hyperelliptic curves of genus 3 and above are mentioned as candidates in Brent's paper. The idea was then considered further by Dobson, Galbraith, and Smith - their paper does a lot of things, such as rethinking security parameters and lowering the sizes of ideal class group elements. It also speculates that genus 3 hyperelliptic curves might actually be a good choice - for example, it suggests that they may offer shorter key lengths in practice. This was followed by a paper by Jonathan Lee discussing point counting algorithms on hyperelliptic curves, and further work by Thakur discussing more details, such as which types of curves to avoid.

 

At this point, dividing-by-2 in genus 3 curves sounded like it should be impossible. After all, the RSA equivalent of this is computing square roots modulo $N$, which is straight up equivalent to factoring. Also, if dividing-by-2 turned out to be possible, then by definition the Strong RSA assumption (in this group) would be broken. I believed that this would immediately hinder the usage of hyperelliptic curves as hidden order groups. I didn't know whether dividing-by-2 was possible in class groups as well. It is possible, and it's even mentioned in the DARK paper, oops...

 

Anyways, that was why I was so focused on doing the dividing-by-2 in genus 3 curves - I thought it would have a serious implication. 

 


Back to the PBCTF challenge. The challenge I wanted was recovering $P, Q$ from $2[P] + 2[Q] - 4[O]$ in a genus 2 curve, and $R$ from $5[R] - 5[O]$ in a genus 3 curve. Since "Hell" also had a part where you recover the hyperelliptic curve equation, I decided to add that as well. I also wanted to make recovering $p$ part of the challenge. To do so, I added a point thanking @theoremoon for the nice SECCON challenge. To make the curve recovery a bit more challenging, I bounded the coefficients heavily and gave fewer equations - so that lattice reduction is required. In the end, the challenge I submitted to PBCTF looked like this. It had 4 solves.

 

import os 
 
flag = open("flag", "rb").read()
flag = flag.lstrip(b"pbctf{").rstrip(b"}")
assert len(flag) == 192
 
while True:
    p = random_prime(1 << 513, lbound = 1 << 512)
    coefs = [int.from_bytes(os.urandom(42), "big") for _ in range(8)]
    PR.<x> = PolynomialRing(GF(p))
 
    g1, g2 = 2, 3
    f1 = sum(coefs[i] * (x ** i) for i in range(2 * g1 + 2))
    f2 = sum(coefs[i] * (x ** i) for i in range(2 * g2 + 2))
 
    flag1 = GF(p)(int.from_bytes(flag[:64], "big"))
    flag2 = GF(p)(int.from_bytes(flag[64:128], "big"))
    flag3 = GF(p)(int.from_bytes(flag[128:], "big"))
    hint = GF(p)(int.from_bytes(b"Inspired by theoremoon's SECCON 2022 Finals Challenge - Hell. Thank you!", "big"))
 
    pol1 = x * x - f1(flag1)
    pol2 = x * x - f1(flag2)
    pol3 = x * x - f2(flag3)
    pol4 = x * x - f2(hint)
 
    if len(pol1.roots()) * len(pol2.roots()) * len(pol3.roots()) * len(pol4.roots()) == 0:
        continue 
 
    HC1 = HyperellipticCurve(f1, 0)
    J1 = HC1.jacobian()(GF(p))
 
    HC2 = HyperellipticCurve(f2, 0)
    J2 = HC2.jacobian()(GF(p))
 
    P1 = HC1((flag1, pol1.roots()[0][0]))
    P2 = HC1((flag2, pol2.roots()[0][0]))
    P3 = HC2((flag3, pol3.roots()[0][0]))
    P4 = HC2((hint, pol4.roots()[0][0]))
 
    print(2 * J1(P1) + 2 * J1(P2))
    print(5 * J2(P3))
    print(J2(P4))
    break

 

First, the hint and the Jacobian element of $P_4$ immediately give a small multiple of $p$: the hint string converted to an integer is larger than $p$, so the $x$-coordinate appearing in $J_2(P_4)$ is the hint reduced modulo $p$, and subtracting it from the original integer leaves $k \cdot p$ for a relatively small $k$. By factoring out that small cofactor, you can recover $p$.

 

Now we move on to recovering the 8 coefficients. As in the solution of "Hell", we know that $v^2 \equiv f \pmod{u}$. Since the degrees of $u$ in the three given Jacobian elements are 2, 3, 1 respectively, this gives 6 linear equations in the coefficients. Therefore, the solutions are of the form $s + c_1 l_1 + c_2 l_2$ over $\mathbb{F}_p$, where $c_1, c_2$ are unknown scalars and $l_1, l_2$ span the kernel of the matrix. As the coefficients are less than $2^{336}$, a lattice reduction will find them. Notice that $336 \times 8$ is significantly less than $512 \times 6$.

 

Exploit up to here: https://github.com/rkm0959/Cryptography_Writeups/blob/main/2023/PBCTF/remake-solution/solve.sage

 

We now move on to the real challenge - the first part, as mentioned, is recovering $P, Q$ from $2[P] + 2[Q] - 4[O]$.

Also as mentioned before, this can be reduced to solving $$g^2 - f = C_1 (x - p)^2 (x - q)^2 u(x)$$ Let's solve this. Set $g = A + Bx + Cx^2 + Dx^3$ to get $$(A + Bx + Cx^2 + Dx^3)^2 - f = C_1(x-p)^2(x-q)^2u(x)$$ and by comparing the leading coefficients, we see that $C_1 = D^2$, so $$(A + Bx + Cx^2 + Dx^3)^2 - f = D^2(x-p)^2(x-q)^2u(x)$$ Now, for the sake of lowering degrees (in terms of $A, B, C, D$), we change this to $$(AD^{-1} + BD^{-1} x + CD^{-1} x^2 + x^3)^2 - D^{-2} f = (x-p)^2(x-q)^2u(x)$$ and re-define the variables to get $$(A + Bx + Cx^2 + x^3)^2 - D f = (x-p)^2(x-q)^2 u(x)$$

Now we perform long division of the LHS by $u(x)$ and add the constraint that the remainder must be zero. This gives two polynomial constraints on $A, B, C, D$. We then use the fact that the quotient must be of the form $$(x^2 + ax + b)^2 = x^4 + 2ax^3 + (a^2 + 2b)x^2 + 2abx + b^2$$ Since the quotient's coefficients are polynomials in $A, B, C, D$, we constrain them to match this shape. For example, if the quotient is $x^4 + c_3x^3 + c_2x^2 + c_1x + c_0$, where the $c_i$ are polynomials in $A, B, C, D$, then we can set $a = c_3 / 2$ and $b = (c_2 - a^2) / 2$ and constrain $c_1 = 2ab$ and $c_0 = b^2$. This adds two more polynomial constraints on $A, B, C, D$. Since we have four constraints and four variables, resultants can recover $A, B, C, D$ given some time.
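
A Sage-style sketch of setting up these four constraints (a hypothetical helper, not the exact exploit code: prime is the recovered $p$, f_coeffs holds the six recovered coefficients of $f$, and u_coeffs the coefficients of the monic degree-2 $u$ from the challenge output):

# Build the four polynomial constraints in A, B, C, D for
# (A + Bx + Cx^2 + x^3)^2 - D*f = (x - p)^2 (x - q)^2 u(x).
def genus2_halving_constraints(prime, f_coeffs, u_coeffs):
    R = PolynomialRing(GF(prime), names=('A', 'B', 'C', 'D'))
    A, B, C, D = R.gens()
    S = PolynomialRing(R, names=('x',))
    x = S.gen()
    F = sum(R(c) * x**i for i, c in enumerate(f_coeffs))
    U = sum(R(c) * x**i for i, c in enumerate(u_coeffs))   # monic, degree 2
    g = A + B*x + C*x**2 + x**3
    quo, rem = (g**2 - D*F).quo_rem(U)
    eq1, eq2 = rem[0], rem[1]            # the remainder must vanish
    half = GF(prime)(1) / 2
    a = half * quo[3]                    # quo = x^4 + c3*x^3 + c2*x^2 + c1*x + c0
    b = half * (quo[2] - a**2)
    eq3 = quo[1] - 2*a*b                 # c1 = 2ab
    eq4 = quo[0] - b**2                  # c0 = b^2
    return eq1, eq2, eq3, eq4

From here, pairwise resultants (eliminating one of $A, B, C, D$ at a time) eventually leave a univariate polynomial whose roots give the candidate values.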

 

Exploit: https://github.com/rkm0959/Cryptography_Writeups/blob/main/2023/PBCTF/remake-solution/solve_12.sage

 

The second challenge is recovering $R$ from $5[R] - 5[O]$. This means solving $$g^2 - f = C_1(x-r)^5 u(x)$$ Here, I actually generated the parameters so that $u$ splits into three linear factors. This made it easier to compute everything, or at least to think about everything (maybe the trick works over extension fields too). For example, let $x_0$ be a root of $u$. Then $g$ would actually have to pass through $(x_0, -v(x_0))$, since the reduction sends $5[R] - 5[O]$ to a reduced divisor containing $(x_0, v(x_0))$. Therefore, we know 3 points that $g$ passes through, which means we can interpolate them. In other words, we know $g \bmod u$.

 

So let's write $g = b + C_2 u(x) (x + v)$. This gives $$(b + C_2u(x)(x+v))^2 - f = C_1(x - r)^5 u(x)$$ and from the leading coefficients, we know $C_2^2 = C_1$. Now we can write $$(C_2^{-1} b + u(x)(x + v))^2 - C_2^{-2} f = (x-r)^5 u(x)$$ and, rewriting variables, we can just solve $$(t b + u(x)(x+v))^2 - t^2 f = (x-r)^5 u(x)$$ so there are only two variables - $t$ and $v$. We proceed similarly - divide the LHS by $u(x)$. Here, the remainder is already $0$, since $b \equiv g \pmod{u}$ is known and satisfies $b^2 \equiv f \pmod{u}$. The main part is to constrain the quotient to be of the form $(x - r)^5$. To do so, let the quotient be $x^5 + c_4x^4 + \cdots + c_0$, where the $c_i$ are again polynomials in $t, v$. Then we can set $r = -c_4/5$ and constrain $c_3 = 10r^2$, $c_2 = -10r^3$, $c_1 = 5r^4$, $c_0 = -r^5$. This makes it possible to solve for $t, v$.
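
The analogous constraint setup for this system, again as a hypothetical Sage-style sketch (b_coeffs is $g \bmod u$ obtained from the interpolation above; the unknown $v$ from the text is renamed w in the code to avoid clashing with the Mumford $v$):

# Build the constraints in (t, w) for (t*b + u(x)*(x + w))^2 - t^2*f = (x - r)^5 * u(x).
def genus3_fifth_root_constraints(prime, f_coeffs, u_coeffs, b_coeffs):
    R = PolynomialRing(GF(prime), names=('t', 'w'))
    t, w = R.gens()
    S = PolynomialRing(R, names=('x',))
    x = S.gen()
    F = sum(R(c) * x**i for i, c in enumerate(f_coeffs))
    U = sum(R(c) * x**i for i, c in enumerate(u_coeffs))   # monic, degree 3
    b = sum(R(c) * x**i for i, c in enumerate(b_coeffs))
    quo, rem = ((t*b + U*(x + w))**2 - t**2 * F).quo_rem(U)   # remainder is identically zero
    c0, c1, c2, c3, c4 = [quo[i] for i in range(5)]           # quo = x^5 + c4*x^4 + ... + c0
    r = -GF(prime)(1) / 5 * c4                                # force quo = (x - r)^5
    return [c3 - 10*r**2, c2 + 10*r**3, c1 - 5*r**4, c0 + r**5]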

 

Exploit: https://github.com/rkm0959/Cryptography_Writeups/blob/main/2023/PBCTF/remake-solution/solve3.sage

 


After the CTF, @isenbaev mentioned on Twitter that divide-by-2 is possible in genus 3 hyperelliptic curves. It turns out that Magma is very strong and can compute this kind of thing very fast. At first, I was very surprised that divide-by-2 is possible. So what now? What does this mean??

 

I went over the VDF papers and the DARK paper to look at exactly what assumptions they are working with.

 

VDFs require the low order assumption or the adaptive root assumption. The latter is clearly hard, but the low order assumption seemed interesting. Can we go further and consider multiplication/division by 3? I'm not sure, but it might be interesting to try. The DARK paper mentions using Strong RSA in Theorem 5. Another interesting thing is that there are papers that try to remove the Strong RSA assumption from arguments working over the integers. I still need to read that paper - it sounds very cool.

 

After reading more, I saw that the [DGS20] paper already had a defense in mind. 

 

 

Basically, it considers the set $S$ of elements where the low-order assumption is broken, and just works with $\text{lcm}(S) \cdot G$ instead. This is exactly the method used to reduce the RSA group to $\mathbb{Z}_N^\star / \{\pm 1\}$ or something like that. This seems to be enough of a defense against low order assumption attacks.

 

What about the Strong RSA assumption side of things? I still thought that the whole dividing-by-2 thing being possible was very bad, but as I mentioned before, I learned that even class groups have that property as well. Apparently it's fine.

 

I guess that not reading the detailed proof for DARK really hurt in the end. The proof is really hard, though...

 

Anyways, this was a really fun adventure, and I'm more motivated to study now. Thanks to everyone for the discussion. 
