Road to ZK Implementation: Nethermind Client's Path to Proofs
43 points by benaadams 5 days ago | 8 comments
https://www.nethermind.io/blog/road-to-zk-implementation-nethermind-clients-path-to-proofs
benaadams 5 days ago
Zero-knowledge proofs are basically a way to trust code execution without re-running it yourself.

The idea: compile C# to a minimal RISC-V runtime, run the program once, and instead of shipping all the outputs and logs, generate a zk proof: a tiny math receipt that says "this execution was correct." Anyone can verify that receipt in milliseconds.
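
To make the shape concrete, here's a toy C# sketch. `Prove` and `Verify` are hypothetical stand-ins, and the "proof" is just a hash placeholder rather than a real zk proof; it only illustrates the run-once / verify-cheaply split:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

class ZkShape
{
    // Expensive step: execute the program once and emit a small receipt.
    static (byte[] Output, byte[] Proof) Prove(Func<byte[], byte[]> program, byte[] input)
    {
        byte[] output = program(input);
        byte[] proof = SHA256.HashData(input.Concat(output).ToArray()); // placeholder, not a real SNARK
        return (output, proof);
    }

    // Cheap step: check the receipt. (A real verifier checks a succinct
    // cryptographic argument instead of re-hashing input and output.)
    static bool Verify(byte[] input, byte[] output, byte[] proof) =>
        proof.SequenceEqual(SHA256.HashData(input.Concat(output).ToArray()));

    static void Main()
    {
        byte[] input = { 1, 2, 3 };
        var (output, proof) = Prove(bytes => bytes.Reverse().ToArray(), input);
        Console.WriteLine(Verify(input, output, proof)); // True
    }
}
```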

It's a bit like TEEs (Intel SGX, AMD SEV), where you outsource compute to someone else and rely on hardware to prove they ran it faithfully. The difference is that zk proofs don't depend on trusting special chips or vendors; it's just math.

Implications:

* Offload heavy workloads to untrusted machines but still verify correctness

* Lightweight sync and validation in distributed systems

* New trust models for cloud and datacenter compute

supermatt benaadams 2 days ago
> a tiny math receipt

I'm not familiar with how these zk proofs work, but for a PoW scheme I was working with, the binary proofs were over 60 KB, and they were sample-based to decrease the probability of cheating, not an absolute proof without full replay.

Do you have any info/resources describing how these proofs work and how they can be so small?

cassonmars supermatt a day ago
There are different proof constructions, but many depend on recursive SNARKs. You basically have an execution-harness prover (which proves that the block of VM instructions and inputs correctly produced the output), and then a folding-circuit prover (which proves that the execution harness behaved correctly), recursively folding over the outer circuit down to a smaller size.

In the Ethereum world, a lot of the SNARKs use a trusted setup. The assumption is that as long as one contributor to the ceremony was honest (and there wasn't a flaw in the ceremony itself), the trusted setup can be trusted. The outsized benefit of the trusted-setup approach is that it lets you shift the computational hardness assumption over to the statistical improbability of forging proof outputs for desired inputs. This, of course, assumes that the trusted setup was safe and that quantum computers won't be able to break dlog any time soon.
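
A toy way to see why the final artifact can stay tiny: each step proves "my chunk executed correctly, and the previous proof verified," so the size never grows with the trace. In this C# sketch a hash chain stands in for the real folding circuit; this is the shape of recursion, not actual cryptography:

```csharp
using System;
using System.Security.Cryptography;
using System.Linq;

class FoldingSketch
{
    // Real system: a circuit proves "this chunk executed correctly AND the
    // previous proof verifies". Here a hash merely chains them together.
    static byte[] ProveStep(byte[] previousProof, byte[] chunk) =>
        SHA256.HashData(previousProof.Concat(chunk).ToArray());

    static void Main()
    {
        byte[] proof = new byte[32]; // base case: an "empty" proof
        for (int i = 0; i < 1_000_000; i++)
            proof = ProveStep(proof, BitConverter.GetBytes(i)); // fold a million chunks

        Console.WriteLine(proof.Length); // still 32 bytes, regardless of trace length
    }
}
```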
supermatt cassonmars a day ago
Thanks - it seems I am way out of touch on this stuff, so that should give me a good starting point for reading about it.
oldfuture2 days ago
One thing worth stressing is that the witness + executor layer is the critical trust boundary here.

In classic Ethereum, bugs are noisy: if one client diverges, other clients complain, and consensus fails until fixed.

In zk Ethereum, bugs can be silent: the proof validates the wrong execution and everyone downstream accepts it as truth.

To unpack that: the witness is like a transcript of everything the EVM touched while running a block (contract code, storage slots, gas usage, and so on), so you can replay the block later using only this transcript, without needing the full Ethereum state.

For security, that witness ideally needs to be cryptographically bound to the block (e.g., via Merkle commitments) so that no one can tamper with it.
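
As a rough sketch of what "bound to the block" could look like: the `WitnessEntry` shape below is made up, and real clients commit through the state trie rather than a flat Merkle tree like this, but the idea is the same:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Hypothetical witness entry: something the EVM touched (code, a storage slot, ...).
record WitnessEntry(string Kind, string Key, byte[] Value);

static class WitnessCommitment
{
    // Hash each entry, then fold the leaves pairwise up to a single root.
    public static byte[] Root(IReadOnlyList<WitnessEntry> entries)
    {
        var layer = entries
            .Select(e => SHA256.HashData(
                Encoding.UTF8.GetBytes($"{e.Kind}:{e.Key}:").Concat(e.Value).ToArray()))
            .ToList();
        while (layer.Count > 1)
        {
            var next = new List<byte[]>();
            for (int i = 0; i < layer.Count; i += 2)
            {
                byte[] right = i + 1 < layer.Count ? layer[i + 1] : layer[i]; // duplicate odd leaf
                next.Add(SHA256.HashData(layer[i].Concat(right).ToArray()));
            }
            layer = next;
        }
        return layer[0];
    }
}

class Demo
{
    static void Main()
    {
        var witness = new List<WitnessEntry>
        {
            new("code", "contract-A", new byte[] { 0x60, 0x00 }),
            new("storage", "slot-0", new byte[] { 0x01 }),
        };
        // A verifier recomputes this root and rejects the proof unless it
        // matches the commitment carried in (or derived from) the block.
        Console.WriteLine(Convert.ToHexString(WitnessCommitment.Root(witness)));
    }
}
```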

The executor is the piece that replays that transcript deterministically. If it does so correctly, you can generate a zk proof saying "this block really executed as Ethereum says it should." But correctness here isn't binary: it means bit-for-bit agreement with the Yellow Paper and every EIP, including tricky cases like precompile gas rules. So the danger is in the details. If the witness omits even one corner case, or the executor diverges subtly, the zk system can still generate a perfectly valid proof, just of the wrong thing. zk proofs don't check what you proved, only that you proved it consistently. In today's consensus model, client bugs show up quickly when nodes disagree.
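
A contrived example of that failure mode, with a made-up per-word gas rule: both executors are internally consistent, and a proof over the buggy one would verify just as cleanly:

```csharp
using System;

class ExecutorDivergence
{
    // Hypothetical gas rule: the correct executor charges per 32-byte word...
    static long GasCorrect(int inputLen) => 15 + 3 * ((inputLen + 31) / 32);

    // ...while the buggy one rounds down, undercharging on partial words.
    static long GasBuggy(int inputLen) => 15 + 3 * (inputLen / 32);

    static void Main()
    {
        int callDataLength = 33; // one byte past a word boundary
        Console.WriteLine($"correct: {GasCorrect(callDataLength)} gas"); // 21
        Console.WriteLine($"buggy:   {GasBuggy(callDataLength)} gas");   // 18
        // A zk proof over the buggy executor is still a valid proof --
        // just a valid proof of the wrong computation.
    }
}
```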

So while the compilation and toolchain work here is impressive, the real challenge is making sure the witness and executor are absolutely faithful to Ethereum semantics, with strong integrity guarantees. Otherwise you risk building cryptographic certainty about the wrong computation. In my view, that makes the witness/executor correctness layer the single point of failure where human fallibility can undermine mathematical guarantees. Looking forward to seeing how this problem will be tackled.

michaelsbradley oldfuture 2 days ago
Thank you for highlighting this important tradeoff!

> In zk Ethereum, bugs can be silent: the proof validates the wrong execution and everyone downstream accepts it as truth.

Are there any write-ups by folks who have run into this scenario? Maybe Linea while developing their zkEVM?

DennisP oldfuture a day ago
I guess one approach would be to have multiple independently developed provers and use them all for each proof. You'd spend more computation doing proofs, but you wouldn't slow the network down, since you could run them in parallel.
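
Something like this sketch (the prover interface is hypothetical): fan the work out with Task.WhenAll and accept only if every prover's receipt checks out, so wall-clock latency is the slowest prover rather than the sum:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical interface; each implementation is an independently developed prover.
interface IProver
{
    Task<byte[]> ProveAsync(byte[] block, byte[] witness);
}

static class MultiProver
{
    // Run all provers in parallel and accept only if every proof verifies.
    public static async Task<bool> AllAgreeAsync(
        IProver[] provers, byte[] block, byte[] witness, Func<byte[], bool> verify)
    {
        byte[][] proofs = await Task.WhenAll(
            provers.Select(p => p.ProveAsync(block, witness)));
        return proofs.All(verify);
    }
}

class Demo
{
    class StubProver : IProver
    {
        public Task<byte[]> ProveAsync(byte[] block, byte[] witness) =>
            Task.FromResult(new byte[] { 0x01 }); // pretend proof
    }

    static async Task Main()
    {
        bool ok = await MultiProver.AllAgreeAsync(
            new IProver[] { new StubProver(), new StubProver() },
            Array.Empty<byte>(), Array.Empty<byte>(),
            verify: proof => proof.Length > 0);
        Console.WriteLine(ok); // True
    }
}
```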
Ar-Curunir DennisP a day ago
The comment you're replying to is worried about the opposite case: where the proof is good, but the computation being proved is faulty. The analog would be to have the same prover prove execution of multiple node implementations.