# Noob questions: Plonk Paper

Hi, I really hope this is the right place to post my noob question; otherwise, please let me know where it would be better. I am implementing Plonk as a hobby project in Python.

My question is about the linearisation polynomial in the 4th proving round (p. 28):

In the second and last summands I see a plain z. I just wonder if that's a typo? It feels like it should be the evaluation challenge?


Hi,
Indeed, this is a typo. Thanks for noticing; it will be updated on eprint.


Ah cool! Thanks for the answer! I managed to implement the protocol now. I am really proud now :D

There was one thing I stumbled over: in round 3, when t is split up, I needed to factor out x^(n+2), not x^n as the paper says. My understanding is that the third summand of t determines the degree as (n+1) + (n+1) + (n+1) + (n+2) - n = 3n+5…
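For anyone double-checking, that degree count can be sketched in a few lines (n here is an arbitrary example size; the blinding degrees are the ones from rounds 1 and 2 of the paper):

```python
n = 8  # example circuit size (hypothetical; any n gives the same count)

# After blinding, the wire polynomials a, b, c have degree n+1:
# a degree-(n-1) witness polynomial plus (b1*X + b2) * Z_H, deg(Z_H) = n.
deg_wire = n + 1

# z gets three blinding factors, (b7*X^2 + b8*X + b9) * Z_H, so degree n+2.
deg_z = n + 2

# The permutation summand of t looks like (a + ...)(b + ...)(c + ...) * z / Z_H:
deg_t = 3 * deg_wire + deg_z - n
assert deg_t == 3 * n + 5  # so t splits with factors of X^(n+2), not X^n
```

With factors of X^(n+2), each of the three parts of t then has degree at most n+1.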


The splitting of t in the last section indeed seems to ignore the additional blinding factors for zero-knowledge, which increase the degree a bit.


Could you expand on this answer, please? Why can we skip those high-degree factors?

We can't, at least not if you want zero knowledge. It's a mistake in the paper that needs to be fixed.


Ah okay I thought I was missing something, thanks a lot!

Finally fixed on eprint

Ugh, I think I got it wrong, it's n+5, will upload again.

I have some questions about the procedure used to add zero-knowledge to the polynomials the Prover creates.

In Round 1 of the Prover protocol, the wire polynomials a, b, and c each use two random scalars for blinding. In Round 2 of the Prover protocol, the prover uses three random scalars to blind the z polynomial. Why three random scalars here instead of two? Why not some other number of random scalars?

Is there a rule of thumb for knowing how many blinding factors are needed to get zero-knowledge?

Does the number of blinding factors needed change if the polynomials are in Lagrange basis? For instance, can you get the same blinding effect by appending two random values to a vector before interpolating?


The rule is that if the polynomial is opened at d points, you need d+1 blinding factors: one to hide each of the d evaluations, and one more to hide the commitment itself (which is an evaluation at a secret point in the exponent; to prove that ZK holds, you'll need this to be totally random too. That proof is quite similar to existing proofs, e.g. in Marlin, but it's currently a missing hole in the paper).
So z is opened at 2 points and hence needs 3 blinding factors.
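A toy sketch of why this blinding is harmless over a tiny prime field (all parameters here are made up; real Plonk uses a large pairing-friendly field): adding a random degree-d multiple of Z_H = X^n - 1 leaves every evaluation on the subgroup H unchanged, so the constraint checks still pass.

```python
import random

P = 97   # small prime for illustration only
n = 8    # size of the multiplicative subgroup H (8 divides P - 1 = 96)

def poly_eval(coeffs, x):
    """Evaluate a coefficient-list polynomial at x over F_P (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def blind(coeffs, d):
    """Add (b_0 + b_1*X + ... + b_d*X^d) * Z_H, where Z_H = X^n - 1:
    d+1 blinding factors for a polynomial opened at d points."""
    r = [random.randrange(P) for _ in range(d + 1)]
    out = list(coeffs) + [0] * (n + d + 1 - len(coeffs))
    for i, b in enumerate(r):
        out[i] = (out[i] - b) % P          # b_i times the -1 term of Z_H
        out[i + n] = (out[i + n] + b) % P  # b_i times the X^n term
    return out

w = pow(5, (P - 1) // n, P)                    # a primitive 8th root of unity mod 97
p = [random.randrange(P) for _ in range(n)]    # degree < n "witness" polynomial
pb = blind(p, 2)                               # opened at 2 points -> 3 blinding factors

# Z_H vanishes on H, so evaluations on H are unchanged:
for i in range(n):
    h = pow(w, i, P)
    assert poly_eval(p, h) == poly_eval(pb, h)
```

Off H, the d+1 random coefficients make the d openings plus the commitment look uniformly random.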

I think it's fine to add random multiples of any polynomial that is zero on the set you're actually checking constraints on.
So in the Lagrange basis you can indeed add random coefficients to polynomials that vanish on the points where the constraints are checked.
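In evaluation (Lagrange) form that amounts to randomizing unused rows, since the Lagrange polynomial L_i of an unused row i is zero on every constraint row. A minimal sketch with made-up sizes:

```python
import random

P = 97       # toy prime field
n = 8        # evaluation domain size
m = n - 2    # rows carrying constraints; the last 2 rows are unused

def lagrange_blind(evals):
    """Blind a polynomial given by its evaluations on the domain by
    randomizing the unused rows. This equals adding random multiples of
    L_m, ..., L_{n-1}, each of which vanishes on constraint rows 0..m-1."""
    out = list(evals)
    for i in range(m, n):
        out[i] = random.randrange(P)
    return out

witness = [random.randrange(P) for _ in range(n)]
blinded = lagrange_blind(witness)
assert blinded[:m] == witness[:m]   # constraint rows untouched
```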

That’s the approach we take as well; I’ve been working on some analysis here. It’s unreviewed so maybe take it with a grain of salt, and let me know if you spot any issues.

To summarize, if we’re using Fiat-Shamir and we’re content with statistical ZK in the ROM, this should suffice:

• To blind each witness polynomial, append k random values, where k is the number of opening locations.
• To blind Z, append another k+1 random values to one of the witness polynomials, and add a copy constraint involving those k+1 wires.

If we want perfect ZK, we need to carefully avoid certain edge cases, like challenge points being roots of Z.

Again, I'm pretty sure you need k+1 random values instead of k. You'll need to show that a simulator can simulate the proof distribution without knowing the witness; the proof contains the commitment on top of the openings, so you have to simulate all these k+1 values, which is easy to do when all k+1 are random.

In our “Plonk-Halo” context, the commitments are Pedersen commitments. Since they’re perfectly hiding, I don’t think we need the extra blinding factor, right? But yes, that makes sense if using unblinded commitments.

We could use unblinded commitments with an extra random value, but for now we’re just thinking of this as a layer on top of Halo, with its Pedersen commitments.
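To illustrate the "perfectly hiding" point with toy numbers (a tiny group, not remotely secure, and all parameters invented): as the blinding scalar r varies, a Pedersen commitment to any message is uniform over the group, so the commitment itself leaks nothing about the committed value.

```python
q, p = 47, 23   # subgroup of prime order p = 23 inside Z_47^* (toy parameters)
g, h = 4, 9     # squares mod 47, so both generate the order-23 subgroup

def commit(m, r):
    """Pedersen commitment g^m * h^r mod q; r is the random blinding scalar."""
    return (pow(g, m % p, q) * pow(h, r % p, q)) % q

def dist(m):
    """The multiset of commitments to m over all blinding scalars."""
    return sorted(commit(m, r) for r in range(p))

# Perfectly hiding: commitments to different messages have the same
# (uniform) distribution over the subgroup as r varies.
assert dist(3) == dist(17)
```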


I see, that makes sense.