Merge branch 'main' into main
kashepavadan authored Oct 7, 2024
2 parents 0f60c19 + 89f8b01 commit 6514ab2
Showing 5 changed files with 86 additions and 80 deletions.
2 changes: 1 addition & 1 deletion chapters/algebra-moonmath.tex
@@ -1185,7 +1185,7 @@ \subsection{Prime Field Extensions}\label{field-extension}
\begin{remark}
Similarly to the way prime fields $\F_p$ are generated by starting with the ring of integers and then dividing by a prime number $p$ and keeping the remainder, prime field extensions $\F_{p^m}$ are generated by starting with the ring $\F_p[x]$ of polynomials and then dividing them by an irreducible polynomial of degree $m$ and keeping the remainder.

In fact, it can be shown that $\F_{p^m}$ is the set of all remainders when dividing any polynomial $Q\in \F_p[x]$ by an irreducible polynomial $P$ of degree $m$. This is analogous to how $\F_p$ is the set of all remainders when dividing integers by $p$.
In fact, it can be shown that $\F_{p^m}$ is the set of all remainders when dividing all of the polynomials $Q\in \F_p[x]$ by an irreducible polynomial $P$ of degree $m$. This is analogous to how $\F_p$ is the set of all remainders when dividing integers by $p$.
\end{remark}
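
As a minimal illustration of the remark above (not part of the repository; the prime $p=3$ and the irreducible polynomial $x^2+1$ are chosen purely for illustration), the following Python sketch multiplies two elements of $\F_{3^2}$ by multiplying polynomials in $\F_3[x]$ and keeping the remainder modulo $x^2+1$:
\begin{lstlisting}
# Illustrative sketch: arithmetic in F_{3^2} as remainders of F_3[x]
# modulo the irreducible polynomial x^2 + 1 (parameters chosen for illustration).
p = 3
P = [1, 0, 1]                     # x^2 + 1, coefficients lowest degree first

def poly_mul(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return res

def poly_rem(a, m):
    a = a[:]                      # remainder of a modulo m (long division over F_p)
    while len(a) >= len(m):
        if a[-1] != 0:
            inv = pow(m[-1], p - 2, p)
            factor = a[-1] * inv % p
            shift = len(a) - len(m)
            for k, mk in enumerate(m):
                a[shift + k] = (a[shift + k] - factor * mk) % p
        a.pop()
    return a

# (1 + 2x) * (2 + x) mod (x^2 + 1) = 2x, an element of F_{3^2}
print(poly_rem(poly_mul([1, 2], [2, 1]), P))    # [0, 2]
\end{lstlisting}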

Any field $\F_{p^m}$ constructed in the above manner is a field extension of $\F_p$. To be more general, a field $\F_{p^{m_2}}$ is a field extension of a field $\F_{p^{m_1}}$ if and only if $m_1$ divides $m_2$. From this, we can deduce that, for any given fixed prime number, there are nested sequences of subfields whenever the power $m_j$ divides the power $m_{j+1}$:
38 changes: 23 additions & 15 deletions chapters/circuit-compilers-moonmath.tex
@@ -179,8 +179,8 @@ \subsection{Circom}
\begin{lstlisting}
template trivial_circuit() {

signal private input in1 ;
signal private input in2 ;
signal input in1 ;
signal input in2 ;

var outc1 = 0 ;
var inc1 = 7 ;
@@ -370,7 +370,7 @@ \subsubsection{The base-field type}
}
// subgraph connectors
nin1 -> {nmul1, nadd1} [xlabel="W_1", style=dashed, color=grey] ;
nin2 -> {nmul2, nadd2} [xlabel="I_2 ", style=dashed, color=grey] ;
nin2 -> {nmul2, nadd2} [xlabel="I_1 ", style=dashed, color=grey] ;
nmul4 -> nout1 [headlabel="W_3 ", style=dashed, color=grey] ;
nadd4 -> nout2 [headlabel="W_4 ", style=dashed, color=grey] ;
}
@@ -393,9 +393,9 @@ \subsubsection{The base-field type}
n6 [label="+"] ;

n1 -> {n5, n6} [xlabel="W_1"] ;
n2 -> {n5, n6} [xlabel="I_2 "] ;
n5 -> n3 [xlabel="W_3 "] ;
n6 -> n4 [label=" W_4"] ;
n2 -> {n5, n6} [xlabel="I_1 "] ;
n5 -> n3 [xlabel="W_2 "] ;
n6 -> n4 [label=" W_3"] ;
}
\end{center}
\end{example}
@@ -796,8 +796,8 @@ \subsubsection{The boolean Type}
\end{equation}
Common circuit languages typically provide a gadget or a function to abstract over this circuit such that programmers can use the $\wedge$ operator without caring about the associated circuit. In \lgname{PAPER}, we define the following function that compiles to the $\wedge$-operator's circuit:
\begin{lstlisting}
fn AND(b_1 : BOOL, b_2 : BOOL) -> BOOL{
let AND : BOOL ;
fn AND(b_1 : BOOL, b_2 : BOOL) -> BOOL {
let AND : BOOL ;
AND <== MUL( b_1 , b_2) ;
return AND ;
}
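
As a quick sanity check on this gadget (an illustrative sketch, not part of \lgname{PAPER}; the small modulus $p=13$ is hypothetical), the following Python snippet confirms that the multiplication constraint behind \texttt{AND}, and the affine constraint $1 + (-1)\cdot b$ behind the \texttt{NOT} gadget defined further below, reproduce the boolean truth tables once the inputs are constrained to $\{0,1\}$:
\begin{lstlisting}
# Illustrative check (hypothetical small modulus p = 13): the MUL constraint
# behind AND and the affine constraint 1 + (-1)*b behind NOT agree with the
# boolean truth tables as long as the inputs are constrained to {0, 1}.
p = 13
for b1 in (0, 1):
    for b2 in (0, 1):
        AND = b1 * b2 % p                 # AND <== MUL(b1, b2)
        assert AND == (b1 and b2)
NOT = lambda b: (1 - b) % p               # c1 + c2*b with c1 = 1, c2 = -1
assert NOT(0) == 1 and NOT(1) == 0
print("AND and NOT gadgets match the boolean truth tables")
\end{lstlisting}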
@@ -939,7 +939,7 @@ \subsubsection{The boolean Type}
\end{align*}
Common circuit languages typically provide a gadget or a function to abstract over this circuit such that programmers can use the $\lnot$ operator without caring about the associated circuit. In \lgname{PAPER}, we define the following function that compiles to the $\lnot$-operator's circuit:
\begin{lstlisting}
fn NOT(b : BOOL -> BOOL{
fn NOT(b : BOOL) -> BOOL{
let NOT : BOOL ;
let const c1 = 1 ;
let const c2 = -1 ;
@@ -1290,9 +1290,9 @@ \subsubsection{The boolean Type}
\end{align*}
The reason why this R1CS only contains a single constraint for the multiplication gate in the OR-circuit, while the general definition \ref{def:boolean-or} requires two constraints, is that the second constraint in \ref{def:boolean-or_constraints} only appears because the final addition gate is connected to an output node. In this case, however, the final addition gate from the OR-circuit is enforced in the left factor of the $I_{1}$ constraint. Something similar holds true for the negation circuit.

During a prover-phase, some public instance $I_5$ must be given. To compute a constructive proof for the statement of the associated languages with respect to instance $I_5$, a prover has to find four boolean values $W_1$, $W_2$, $W_3$ and $W_4$ such that
During a prover-phase, some public instance $I_1$ must be given. To compute a constructive proof for the statement of the associated languages with respect to instance $I_1$, a prover has to find four boolean values $W_1$, $W_2$, $W_3$ and $W_4$ such that
$$
\left( W_1 \vee W_2 \right) \wedge (W_3 \wedge \lnot W_4) = I_5
\left( W_1 \vee W_2 \right) \wedge (W_3 \wedge \lnot W_4) = I_1
$$
holds true. In our case, neither the circuit nor the \lgname{PAPER} statement specifies how to find those values; it is a problem that any prover has to solve outside of the circuit. The same might or might not be true for other problems. In any case, once the prover has found those values, they can execute the circuit to find a valid assignment.
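
For a circuit this small, a brute-force search is the simplest way to carry out that step outside the circuit. The following Python sketch (illustrative only, with booleans encoded as $0/1$) finds a witness $(W_1,W_2,W_3,W_4)$ for a given instance $I_1$:
\begin{lstlisting}
# Illustrative sketch of the step a prover solves outside the circuit:
# brute-force a witness for (W1 or W2) and (W3 and not W4) = I1.
from itertools import product

def find_witness(I1):
    for W1, W2, W3, W4 in product((0, 1), repeat=4):
        if bool((W1 or W2) and (W3 and not W4)) == bool(I1):
            return W1, W2, W3, W4
    return None

print(find_witness(1))    # e.g. (0, 1, 1, 0); any satisfying assignment will do
print(find_witness(0))    # e.g. (0, 0, 0, 0)
\end{lstlisting}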

@@ -2025,11 +2025,19 @@ \subsubsection{Loops} In many programming languages, various loop control struct
\subsection{Binary Field Representations} In applications, it is often necessary to enforce a binary representation of elements from the \texttt{field} type. To derive an appropriate circuit over a prime field $\F_p$, let $m=|p|_2$ be the smallest number of bits necessary to represent the prime modulus $p$. Then a bitstring $<b_0,\ldots,b_{m-1}>\in \{0,1\}^m$ is a binary representation of a field element $x\in\F_p$, if and only if
\begin{equation}
\label{def:binary_field_rep}
x = b_0\cdot 2^0 + b_1\cdot 2^1 + \ldots + b_m\cdot 2^{m-1}
x = b_0\cdot 2^0 + b_1\cdot 2^1 + \ldots + b_{m-1}\cdot 2^{m-1}
\end{equation}
In this expression, addition and exponentiation is considered to be executed in $\F_p$, which is well defined since all terms $2^j$ for $0\leq j < m$ are elements of $\F_p$. Note, however, that in contrast to the binary representation of unsigned integers $n\in\N$, this representation is not unique in general, since the modular $p$ equivalence class might contain more than one binary representative.

Considering that the underlying prime field is fixed and the most significant bit of the prime modulus is $m$, the following circuit flattens equation \ref{def:binary_field_rep}, assuming all inputs $b_1$, $\ldots$, $b_m$ are of boolean type.
In this expression, addition and exponentiation is considered to be
executed in $\F_p$, which is well defined since all terms $2^j$ for
$0 \leq j \leq m$ are elements of $\F_p$. Note, however, that in
contrast to the binary representation of unsigned integers $n\in\N$,
this representation is not unique in general, since the modular $p$
equivalence class might contain more than one binary representative.

Considering that the underlying prime field is fixed and the most
significant bit of the prime modulus is $m-1$, the following circuit
flattens equation \ref{def:binary_field_rep}, assuming all inputs
$b_0$, $\ldots$, $b_{m-1}$ are of boolean type.
\begin{center}
\digraph[scale=0.3]{BINARYREP}{
forcelabels=true;
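
To see concretely why the representation in equation \ref{def:binary_field_rep} need not be unique, the following Python sketch (illustrative only; the modulus $p=13$, and hence $m=4$, is hypothetical) lists all $m$-bit strings that represent a given field element:
\begin{lstlisting}
# Illustrative sketch (hypothetical modulus p = 13, so m = |p|_2 = 4): list all
# m-bit strings (b_0, ..., b_{m-1}) whose value mod p equals a given x in F_p.
from itertools import product

p, m = 13, 4

def binary_reps(x):
    reps = []
    for bits in product((0, 1), repeat=m):
        value = sum(b * 2**j for j, b in enumerate(bits)) % p
        if value == x % p:
            reps.append(bits)
    return reps

print(binary_reps(1))   # two representatives: 1 and 14 = 1 (mod 13)
print(binary_reps(5))   # a single representative, since 5 + 13 needs five bits
\end{lstlisting}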
10 changes: 5 additions & 5 deletions chapters/elliptic-curves-moonmath.tex
@@ -537,11 +537,11 @@ \subsection{Projective \concept{short Weierstrass} form}

Recalling the definition of projective planes \ref{sec:planes},\sme{S: move that section here?} we know that points at infinity are handled as ordinary points in projective geometry. Therefore, it makes sense to look at the definition of a \concept{short Weierstrass} curve in projective geometry.

To see what a \concept{short Weierstrass} curve in projective coordinates is, let $\F$ be a finite field of order $q$ and characteristic $>3$, let $a,b\in \F$ be two field elements such that $\Zmod{4a^3+ 27b^2}{q}\neq 0$ and let $\F\mathrm{P}^2$ be the projective plane over $\F$ as introduced in \secname{} \ref{sec:planes}. Then a \term{projective \concept{short Weierstrass} elliptic curve} over $\F$ is the set of all points $[X:Y:Z]\in \F\mathrm{P}^2$ from the projective plane that satisfy the cubic equation $Y^2\cdot Z = X^3+a\cdot X\cdot Z^2 + b\cdot Z^3$:
To see what a \concept{short Weierstrass} curve in projective coordinates is, let $\F$ be a finite field of order $q$ and characteristic $>3$, let $a,b\in \F$ be two field elements such that $\Zmod{4a^3+ 27b^2}{q}\neq 0$ and let $\F\mathbb{P}^2$ be the projective plane over $\F$ as introduced in \secname{} \ref{sec:planes}. Then a \term{projective \concept{short Weierstrass} elliptic curve} over $\F$ is the set of all points $[X:Y:Z]\in \F\mathbb{P}^2$ from the projective plane that satisfy the cubic equation $Y^2\cdot Z = X^3+a\cdot X\cdot Z^2 + b\cdot Z^3$:

\begin{equation}
\label{def:projective_cubic_equation}
E(\F\mathrm{P}^2) = \{[X:Y:Z]\in \F\mathrm{P}^2\;|\; Y^2\cdot Z = X^3+a\cdot X\cdot Z^2 + b\cdot Z^3 \}
E(\F\mathbb{P}^2) = \{[X:Y:Z]\in \F\mathbb{P}^2\;|\; Y^2\cdot Z = X^3+a\cdot X\cdot Z^2 + b\cdot Z^3 \}
\end{equation}
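
As a concrete check of this definition (an illustrative sketch; the parameters $a=b=8$ over $\F_{13}$ are chosen only for illustration and satisfy $4a^3+27b^2\neq 0$), the following Python snippet verifies that the point at infinity $[0:1:0]$ and every representative of the class $[x:y:1]$ of an affine curve point satisfy the projective equation:
\begin{lstlisting}
# Illustrative check over an example curve y^2 = x^3 + 8x + 8 over F_13
# (parameters chosen for illustration only).
p, a, b = 13, 8, 8

def on_curve(X, Y, Z):
    return (Y * Y * Z - (X**3 + a * X * Z * Z + b * Z**3)) % p == 0

assert on_curve(0, 1, 0)                 # [0:1:0] lies on the curve for any a, b

x, y = 1, 2                              # affine point: 2^2 = 1 + 8 + 8 (mod 13)
assert (y * y - (x**3 + a * x + b)) % p == 0
for t in range(1, p):                    # all representatives (t*x, t*y, t) of [x:y:1]
    assert on_curve(t * x % p, t * y % p, t)
print("projective short Weierstrass checks passed")
\end{lstlisting}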

To understand how the point at infinity is unified in this definition, recall from \secname{} \ref{sec:planes} that, in projective geometry, points at infinity are given by projective coordinates $[X:Y:0]$. Inserting representatives $(x_1,y_1,0)\in [X:Y:0]$ from those coordinates into the defining cubic equation \ref{def:projective_cubic_equation} results in the following identity:
@@ -688,10 +688,10 @@ \subsubsection{Projective Group law}
\subsubsection{Coordinate Transformations} As we can see by comparing the examples
\ref{ex:E1F5-projective} and \ref{ex:E1F5-projective},\sme{same example twice} there is a close relation between the affine and the projective representation of a \concept{short Weierstrass} curve. This is not a coincidence. In fact, from a mathematical point of view, projective and affine \concept{short Weierstrass} curves describe the same thing, as there is a one-to-one correspondence (an isomorphism) between both representations for any parameters $a$ and $b$.

To specify the correspondence, let $E(\F)$ and $E(\F\mathrm{P}^2)$ be an affine and a projective \concept{short Weierstrass} curve defined for the same parameters $a$ and $b$. Then, the function in \eqref{eq:weierstrass-isomorphism-map} maps points from the affine representation to points from the projective representation of a \concept{short Weierstrass} curve. In other words, if the pair of field elements $(x,y)$ satisfies the affine \concept{short Weierstrass} equation $y^2= x^3 + ax + b$, then all homogeneous coordinates $(x_1,y_1,z_1)\in [x:y:1]$ satisfy the projective \concept{short Weierstrass} equation $y_1^2\cdot z_1= x_1^3 + ay_1\cdot z_1^2 + b\cdot z_1^3$.
To specify the correspondence, let $E(\F)$ and $E(\F\mathbb{P}^2)$ be an affine and a projective \concept{short Weierstrass} curve defined for the same parameters $a$ and $b$. Then, the function in \eqref{eq:weierstrass-isomorphism-map} maps points from the affine representation to points from the projective representation of a \concept{short Weierstrass} curve. In other words, if the pair of field elements $(x,y)$ satisfies the affine \concept{short Weierstrass} equation $y^2= x^3 + ax + b$, then all homogeneous coordinates $(x_1,y_1,z_1)\in [x:y:1]$ satisfy the projective \concept{short Weierstrass} equation $y_1^2\cdot z_1= x_1^3 + ay_1\cdot z_1^2 + b\cdot z_1^3$.

\begin{equation}\label{eq:weierstrass-isomorphism-map}
I : E(\F) \to E(\F\mathrm{P}^2)\;:\;
I : E(\F) \to E(\F\mathbb{P}^2)\;:\;
\begin{array}{lcl}
(x,y) &\mapsto & [x:y:1]\\
\mathcal{O} &\mapsto & [0:1:0]
@@ -1002,7 +1002,7 @@ \subsection{Twisted Edwards group law}
(x_1, y_1) \oplus (x_2, y_2) =\left(\frac{x_1y_2+y_1x_2}{1 +dx_1x_2y_1y_2},\frac{y_1y_2-ax_1x_2}{1-dx_1x_2y_1y_2}\right)
\end{equation}

In order to see what the neutral element of the group law is, first observe that the point $(0,1)$ is a solution to the \concept{twisted Edwards} equation $a\cdot x^{2} + y^2 =1+ d\cdot x^{2}\cdot y^2$ for any parameters $a$ and $d$, and hence $(0,1)$ is a point on any \concept{twisted Edwards} curve. It can be shown that $(0,1)$ serves as the neutral element, and that the inverse of a point $(x_1, y_1)$ is given by the point $(-x_1, y1)$.
In order to see what the neutral element of the group law is, first observe that the point $(0,1)$ is a solution to the \concept{twisted Edwards} equation $a\cdot x^{2} + y^2 =1+ d\cdot x^{2}\cdot y^2$ for any parameters $a$ and $d$, and hence $(0,1)$ is a point on any \concept{twisted Edwards} curve. It can be shown that $(0,1)$ serves as the neutral element, and that the inverse of a point $(x_1, y_1)$ is given by the point $(-x_1, y_1)$.
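
A minimal Python sketch of this group law (illustrative only; the parameters $a=3$, $d=8$ over $\F_{13}$ and the point $(1,2)$ are chosen purely for illustration) confirms that $(0,1)$ acts as the neutral element and that $(-x_1, y_1)$ is the inverse of $(x_1, y_1)$:
\begin{lstlisting}
# Illustrative sketch (parameters a = 3, d = 8 over F_13 chosen for illustration):
# twisted Edwards addition with neutral element (0, 1) and inverse (-x, y).
p, a, d = 13, 3, 8
inv = lambda t: pow(t % p, p - 2, p)     # field inverse via Fermat's little theorem

def add(P, Q):
    (x1, y1), (x2, y2) = P, Q
    x3 = (x1 * y2 + y1 * x2) * inv(1 + d * x1 * x2 * y1 * y2) % p
    y3 = (y1 * y2 - a * x1 * x2) * inv(1 - d * x1 * x2 * y1 * y2) % p
    return (x3, y3)

P = (1, 2)                               # on the curve: 3*1 + 4 = 7 = 1 + 8*1*4 (mod 13)
assert add(P, (0, 1)) == P               # (0, 1) is neutral
assert add(P, (-1 % p, 2)) == (0, 1)     # (-x1, y1) is the inverse of (x1, y1)
print("twisted Edwards group-law checks passed")
\end{lstlisting}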
\begin{example}
\label{example:TETJJ13}
Let's look at the \curvename{Tiny-jubjub} curve in Edwards form from \eqref{TJJ13-twisted-edwards} again. As we have seen, this curve is given as follows:
16 changes: 8 additions & 8 deletions chapters/statements-moonmath.tex
@@ -79,7 +79,7 @@ \subsection{Decision Functions}

To give an unusual example strange enough to highlight the point, consider the programming language \href{https://en.wikipedia.org/wiki/Malbolge}{Malbolge}. This language was specifically designed to be almost impossible to use, and writing programs in this language is a difficult task. An interesting claim is therefore the statement: ``There exists a computer program in Malbolge". As it turned out, proving this statement constructively, that is, providing an example instance of such a program, is not an easy task: it took two years after the introduction of Malbolge to write a program that its compiler accepts. So, for two years, no one was able to prove the statement constructively.

To look at the high-level description of Malbolge more formally, we write $L_{Malbolge}$ for the language that uses the ASCII table as its alphabet, and its words are strings of ASCII letters that the Malbolge compiler accepts. Proving the statement ``There exists a computer program in Malbolge'' is equivalent to the task of finding some word $x\in L_{Malbolge}$. The string in \eqref{malbolge-string} below is an example of such a proof, as it is accepted by the Malbolge compiler, which compiles it to an executable binary that displays ``Hello, World.'' \sme{add reference}. In this example, the Malbolge compiler therefore serves as the verification process.
To look at the high-level description of Malbolge more formally, we write $L_{\text{Malbolge}}$ for the language that uses the ASCII table as its alphabet, and its words are strings of ASCII letters that the Malbolge compiler accepts. Proving the statement ``There exists a computer program in Malbolge'' is equivalent to the task of finding some word $x\in L_{\text{Malbolge}}$. The string in \eqref{malbolge-string} below is an example of such a proof, as it is accepted by the Malbolge compiler, which compiles it to an executable binary that displays ``Hello, World.'' \sme{add reference}. In this example, the Malbolge compiler therefore serves as the verification process.

\begin{multline}\label{malbolge-string}
\scriptstyle (=<':9876Z4321UT.-Q+*)M'\&\%\$H"!~\}|Bzy?=|\{z]KwZY44Eq0/
@@ -1323,17 +1323,17 @@ \subsubsection{QAP representation} To understand what Quadratic Arithmetic Progr
To compute $A_2$, we note that the set $S_{A_2}$ in our example is given by $S_{A_2}=\{(m_1,a^1_2), (m_2,a_2^2)\} = \{(5,1), (7,0)\}$. Using this set, we get:
\begin{align*}
A_2(x) & = a^1_2\cdot(\frac{x-m_2}{m_1-m_2}) + a^2_2\cdot(\frac{x-m_1}{m_2-m_1})
= 1\cdot(\frac{x-7}{5-7}) + 0\cdot(\frac{x-5}{7-5}) \\
A_2(x) & = a^1_2\cdot \left(\frac{x-m_2}{m_1-m_2}\right) + a^2_2\cdot\left(\frac{x-m_1}{m_2-m_1}\right)
= 1\cdot\left(\frac{x-7}{5-7}\right) + 0\cdot\left(\frac{x-5}{7-5}\right) \\
& = \frac{x-7}{-2}
= \frac{x-7}{11} & \text{\# } 11^{-1}=6 \\
& = 6(x-7)
= 6x + 10 & \text{\# } -7 = 6 \text{ and } 6\cdot 6 = 10
\end{align*}
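
The same interpolation can be reproduced mechanically. The following Python sketch (illustrative only; the modulus $13$ is the one used in the computations above) recomputes $A_2$, and also $A_5$ as derived next, by Lagrange interpolation over $\F_{13}$:
\begin{lstlisting}
# Illustrative re-check of the computations above: Lagrange interpolation over
# F_13 through S_{A_2} = {(5, 1), (7, 0)} and S_{A_5} = {(5, 0), (7, 1)}.
p = 13
inv = lambda t: pow(t % p, p - 2, p)

def poly_mul(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return res

def lagrange(points):
    # coefficients (lowest degree first) of the polynomial through `points`, mod p
    n = len(points)
    total = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul(basis, [(-xj) % p, 1])   # multiply by (x - x_j)
                denom = denom * (xi - xj) % p
        scale = yi * inv(denom) % p
        total = [(t + scale * c) % p for t, c in zip(total, basis)]
    return total

print(lagrange([(5, 1), (7, 0)]))   # [10, 6]  i.e.  A_2(x) = 6x + 10
print(lagrange([(5, 0), (7, 1)]))   # [4, 7]   i.e.  A_5(x) = 7x + 4
\end{lstlisting}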
To compute $A_5$, we note that the set $S_{A_5}$ in our example is given by $S_{A_5}=\{(m_1,a^1_5), (m_2,a^2_5)\} = \{(5,0), (7,1)\}$. Using this set, we get:
\begin{align*}
A_5(x) & = a^1_5\cdot(\frac{x-m_2}{m_1-m_2}) + a^2_5\cdot(\frac{x-m_1}{m_2-m_1})
= 0\cdot(\frac{x-7}{5-7}) + 1\cdot(\frac{x-5}{7-5}) \\
A_5(x) & = a^1_5\cdot\left(\frac{x-m_2}{m_1-m_2}\right) + a^2_5\cdot\left(\frac{x-m_1}{m_2-m_1}\right)
= 0\cdot\left(\frac{x-7}{5-7}\right) + 1\cdot\left(\frac{x-5}{7-5}\right) \\
& = \frac{x-5}{2} & \text{\# } 2^{-1}=7 \\
& = 7(x-5)
= 7x + 4 & \text{\# } -5 = 8 \text{ and } 7\cdot 8 = 4
@@ -1397,12 +1397,12 @@ \subsubsection{QAP Satisfiability} One of the major advantages of Quadratic Arit
Verifying a constructive proof in the case of a circuit is achieved by executing the circuit and then by comparing the result against the given proof. Verifying the same proof in the R1CS picture means checking if the elements of the proof satisfy the R1CS equations. By contrast, verifying a proof in the QAP picture is done by polynomial division of the proof $P$ by the target polynomial $T$. The proof is verified if and only if $P$ is divisible by $T$.
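
Assuming, consistent with the constraint points $m_1=5$ and $m_2=7$ used above, a target polynomial $T(x)=(x-5)\cdot(x-7)$ over $\F_{13}$, divisibility of a proof polynomial $P$ by $T$ can be checked either by long division or, equivalently, by evaluating $P$ at the roots of $T$, since $T$ has distinct roots. The following Python sketch (illustrative only; the test polynomials are made up) uses the latter check:
\begin{lstlisting}
# Illustrative divisibility check, assuming a target polynomial
# T(x) = (x - 5)(x - 7) over F_13 with roots at the constraint points.
p = 13
roots_of_T = (5, 7)

def eval_poly(coeffs, x):
    # evaluate a polynomial (coefficients lowest degree first) at x, mod p
    return sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p

def divisible_by_T(P):
    # since T has distinct roots, T divides P iff P vanishes at every root of T
    return all(eval_poly(P, r) == 0 for r in roots_of_T)

# 2*(x - 5)*(x - 7) = 2x^2 + 2x + 5 (mod 13) is divisible by T;
# adding a nonzero constant breaks divisibility.
print(divisible_by_T([5, 2, 2]))    # True
print(divisible_by_T([6, 2, 2]))    # False
\end{lstlisting}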
\begin{example} Consider the Quadratic Arithmetic Program $QAP(R_{3.fac\_zk})$ from \examplename{} \ref{ex:3-fac-QAP} and its associated R1CS from equation \ref{ex:3-factorization-r1cs}. To give an intuition of how proofs in the language $L_{QAP(R_{3.fac\_zk})}$ look like, let's consider the instance $I_1=11$. As we know from \examplename{} \ref{ex:3-fac-zk-circuit_2}, $(W_1,W_2,W_3,W_5)=(2,3,4,6)$ is a proper witness, since
$(<I_1>;<W_1,W_2,W_3,W_5>)=(<11>;<2,3,4,6>)$ is a valid circuit assignment and hence, a solution to $R_{3.fac\_zk}$ and a constructive proof for language $L_{R_{3.fac\_zk}}$.
\begin{example} Consider the Quadratic Arithmetic Program $QAP(R_{3.fac\_zk})$ from \examplename{} \ref{ex:3-fac-QAP} and its associated R1CS from equation \ref{ex:3-factorization-r1cs}. To give an intuition of how proofs in the language $L_{QAP(R_{3.fac\_zk})}$ look like, let's consider the instance $I_1=11$. As we know from \examplename{} \ref{ex:3-fac-zk-circuit_2}, $(W_1,W_2,W_3,W_4)=(2,3,4,6)$ is a proper witness, since
$(<I_1>;<W_1,W_2,W_3,W_4>)=(<11>;<2,3,4,6>)$ is a valid circuit assignment and hence, a solution to $R_{3.fac\_zk}$ and a constructive proof for language $L_{R_{3.fac\_zk}}$.
In order to transform this constructive proof into a knowledge proof in language $L_{QAP(R_{3.fac\_zk})}$, a prover has to use the elements of the constructive proof to compute the polynomial $P_{(I;W)}$.
In the case of $(<I_1>;<W_1,W_2,W_3,W_5>)=(<11>;<2,3,4,6>)$, the associated proof is computed as follows:
In the case of $(<I_1>;<W_1,W_2,W_3,W_4>)=(<11>;<2,3,4,6>)$, the associated proof is computed as follows:
\begin{align*}
P_{(I;W)} = & \scriptstyle \left(A_0 + \sum_{j}^n I_j\cdot A_j + \sum_{j}^m W_j\cdot A_{n+j} \right) \cdot \left(B_0 + \sum_{j}^n I_j\cdot B_j + \sum_{j}^m W_j\cdot B_{n+j} \right)
-\left(C_0 + \sum_{j}^n I_j\cdot C_j + \sum_{j}^m W_j\cdot C_{n+j} \right)\\
