Hello Noir! [Part 2]
In Part 1 we wrote a circuit, compiled it with nargo, and got two artifacts: a circuit definition and a witness. Now we will use them to generate a real cryptographic proof and verify it, locally and then on-chain.
1. Where we left off
At the end of Part 1, we had two files in target/:
- `hello_world.json` - the compiled circuit (ACIR bytecode)
- `hello_world.gz` - the witness (our specific input values that satisfy the constraints)
All nargo told us was "yes, these inputs work." That's not a proof anyone else can verify - it's just a local check. To produce an actual cryptographic proof, we need Barretenberg (bb).
2. Generating a proof with Barretenberg
With our Prover.toml set to x = "2" and y = "1" (recall: x is private, y is public), we run:
bb prove -b ./target/hello_world.json -w ./target/hello_world.gz \
--write_vk --verifier_target evm -o ./target
Scheme is: ultra_honk, num threads: 8 (mem: 8.10 MiB)
CircuitProve: Proving key computed in 29 ms (mem: 24.21 MiB)
Public inputs saved to "./target/public_inputs" (mem: 28.56 MiB)
Proof saved to "./target/proof" (mem: 28.56 MiB)
VK saved to "./target/vk" (mem: 28.56 MiB)
VK Hash saved to "./target/vk_hash" (mem: 28.56 MiB)
Let's break down the flags:
- `-b` - path to the compiled circuit (ACIR bytecode)
- `-w` - path to the witness
- `--write_vk` - also generate the verification key alongside the proof
- `--verifier_target evm` - target the EVM. This does two things: it uses Keccak256 (the EVM has a dedicated opcode for it, making verification gas-efficient) and it enables the zero-knowledge property (the proof reveals nothing about private inputs)
- `-o` - output directory
This produced four files in target/:
| File | Size | What it is |
|---|---|---|
| `proof` | 7,488 bytes | The cryptographic proof |
| `vk` | 1,888 bytes | Verification key |
| `vk_hash` | 32 bytes | Hash of the vk |
| `public_inputs` | 32 bytes | The public inputs (y = 1) |
What is the verification key? It's derived from the circuit structure alone - not from your inputs. It encodes the circuit's "shape": how many gates, what type, how they're wired. Anyone can use it to verify proofs for this circuit. Think of it as the circuit's public fingerprint. The same vk works for any valid proof of this circuit, regardless of what specific values of x and y were used.
One detail worth noting: bb writes the public inputs to their own file rather than embedding them in the proof. The verifier takes them as a separate argument, so the two artifacts travel together.
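The `public_inputs` file is small enough to inspect by hand. A minimal sketch (Python; the layout of concatenated 32-byte big-endian field elements is an assumption for illustration) that decodes it:

```python
# Decode a bb public_inputs file, assuming it is a concatenation of
# 32-byte big-endian field elements, one word per public input.

def decode_public_inputs(blob: bytes) -> list[int]:
    assert len(blob) % 32 == 0, "expected whole 32-byte words"
    return [int.from_bytes(blob[i:i + 32], "big")
            for i in range(0, len(blob), 32)]

# For our circuit the file is 32 bytes: the single public input y = 1.
blob = (1).to_bytes(32, "big")
print(decode_public_inputs(blob))  # [1]
```

Running `decode_public_inputs` over the real file (e.g. via `open("target/public_inputs", "rb").read()`) should print the same `y` value you set in Prover.toml.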
3. Verifying the proof
bb verify -p ./target/proof -k ./target/vk --verifier_target evm
Scheme is: ultra_honk, num threads: 8 (mem: 8.11 MiB)
Proof verified successfully (mem: 18.36 MiB)
Notice what we did NOT pass: the witness. That would defeat the purpose, wouldn't it? The whole point is that the verifier never sees your private inputs. All it needs is the proof and the verification key.
The proof says: "someone knows a value x such that x != y, where y = 1." It doesn't say what x is.
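One way to see why x stays hidden: the statement has many witnesses. With y = 1 fixed, every value except 1 satisfies the relation, so a verified proof narrows x down to almost nothing. A toy sketch (plain Python over small integers, not real field arithmetic):

```python
# Toy illustration: with y = 1 public, the relation x != y is satisfied
# by every candidate except x = 1. The proof alone cannot pin down x.
y = 1
candidates = range(10)
witnesses = [x for x in candidates if x != y]
print(witnesses)  # [0, 2, 3, 4, 5, 6, 7, 8, 9]
```

In the actual circuit the candidates range over an entire prime field, so the set of possible witnesses is astronomically large.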
4. Generating a Solidity verifier
Now we want this verification to happen on-chain. bb can generate a Solidity contract that does exactly the same check:
bb write_solidity_verifier -k ./target/vk --verifier_target evm -o ./target/Verifier.sol
Scheme is: ultra_honk, num threads: 8 (mem: 8.75 MiB)
ZK Honk solidity verifier saved to "./target/Verifier.sol" (mem: 9.87 MiB)
The result is a 2,449-line Solidity file. What's inside:
- A `HonkVerifier` contract that inherits from `BaseZKHonkVerifier`
- The verification key hardcoded as constants (circuit size, elliptic curve points, etc.)
- Pairing check logic using EVM precompiles (`ecAdd`, `ecMul`, `ecPairing`, `modexp`)
- A single entry point: `function verify(bytes calldata proof, bytes32[] calldata publicInputs) external returns (bool)`
This works on any EVM chain that supports the required precompiles - Ethereum mainnet, most L2s, and testnets.
5. Deploying with Foundry
Let's deploy the verifier and verify a proof on-chain. First, set up a Foundry project:
forge init verifier-deploy
cd verifier-deploy
Copy the generated verifier:
cp ../hello_world/target/Verifier.sol src/Verifier.sol
We also need to allow Foundry to read our proof file. Add this to foundry.toml:
fs_permissions = [{ access = "read", path = "../hello_world/target" }]
Deploy script
// script/Deploy.s.sol
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.21;
import "forge-std/Script.sol";
import "../src/Verifier.sol";
contract DeployScript is Script {
function run() external {
vm.startBroadcast();
HonkVerifier verifier = new HonkVerifier();
console.log("HonkVerifier deployed to:", address(verifier));
vm.stopBroadcast();
}
}
Verify script
// script/Verify.s.sol
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.21;
import "forge-std/Script.sol";
import "../src/Verifier.sol";
contract VerifyScript is Script {
function run() external {
bytes memory proofBytes = vm.readFileBinary("../hello_world/target/proof");
bytes32[] memory publicInputs = new bytes32[](1);
publicInputs[0] = bytes32(uint256(1)); // y = 1
vm.startBroadcast();
HonkVerifier verifier = new HonkVerifier();
console.log("HonkVerifier deployed to:", address(verifier));
bool result = verifier.verify(proofBytes, publicInputs);
console.log("Proof verified:", result);
vm.stopBroadcast();
}
}
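The only subtle line in the Verify script is `bytes32(uint256(1))`: Solidity left-pads the integer to a full 32-byte word, which is exactly the encoding the verifier expects for a field element. The equivalent in Python, for sanity-checking:

```python
# What bytes32(uint256(1)) produces: the integer 1 left-padded
# to 32 bytes, i.e. 31 zero bytes followed by 0x01.
word = (1).to_bytes(32, "big")
print(word.hex())
# 0000000000000000000000000000000000000000000000000000000000000001
```

This word should be byte-for-byte identical to the 32-byte `public_inputs` file bb wrote earlier; if they ever diverge, `verify` will return false.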
Running it
Start a local testnet and deploy:
anvil --code-size-limit 50000
The --code-size-limit flag is needed because HonkVerifier exceeds the default EIP-170 contract size limit of 24,576 bytes (ours is ~33K). This is fine for local testing. For production, you'd use --optimized when generating the Solidity verifier or split the contract into libraries.
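The EIP-170 arithmetic is easy to check yourself. A small sketch (Python; the ~33 KB figure for HonkVerifier is this article's approximate measurement, not a constant of the tooling):

```python
# EIP-170 caps deployed runtime bytecode at 0x6000 = 24,576 bytes.
EIP170_LIMIT = 0x6000

def fits_mainnet(runtime_bytecode: bytes) -> bool:
    """True if the runtime code is deployable under the default EIP-170 limit."""
    return len(runtime_bytecode) <= EIP170_LIMIT

# A ~33 KB verifier (approximate figure) blows past the limit,
# which is why anvil needs a raised --code-size-limit locally.
print(fits_mainnet(b"\x00" * 33_000))  # False
```

The same check against any contract's actual runtime bytecode (e.g. fetched via `eth_getCode`) tells you whether it would deploy on a chain that enforces the default limit.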
In another terminal:
forge script script/Verify.s.sol \
--rpc-url http://127.0.0.1:8545 \
--private-key 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 \
--broadcast --code-size-limit 50000
Script ran successfully.
== Logs ==
HonkVerifier deployed to: 0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512
Proof verified: true
ONCHAIN EXECUTION COMPLETE & SUCCESSFUL.
The proof verified on-chain. The same proof that bb verified locally now passes through a Solidity contract running on an EVM.
6. Trust and threat model
Before you ship anything, consider three attack surfaces.
Prover honesty. The circuit enforces constraints, not truth. Nobody can prove that 2 != 2 - the constraint system rejects it. But nothing stops someone from proving 2000 != 18 and claiming that proves their age. The circuit guarantees mathematical correctness of the relationship between inputs, not that the inputs themselves are meaningful. External anchoring (signed attestations, on-chain data, oracles) is required to bind proof inputs to real-world facts.
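The 2000 != 18 point is easy to state in code. The circuit only enforces the relation between the numbers; it has no idea whether they represent ages, balances, or nothing at all. A toy sketch of the bare relation (plain Python, not the actual constraint system):

```python
# The only thing the constraint enforces: the two values differ.
def relation_holds(x: int, y: int) -> bool:
    return x != y

print(relation_holds(2, 2))      # False: no valid proof can exist
print(relation_holds(2000, 18))  # True: a proof exists, but it says
                                 # nothing about what 2000 or 18 mean
```

Binding those numbers to real-world meaning is exactly the job of the external anchoring described above.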
The bb binary. bb is a local executable. If someone swaps it for a modified version, they could generate proofs that pass a compromised verifier. In any system where the prover and verifier are different entities, the verifier must run its own trusted copy of bb (or verify on-chain where the contract is the trust anchor).
The verifier contract. It's source code - editable before deployment. If the same entity generates the proof and deploys the verifier, there's a circular trust problem. For production, the verifier contract should be deployed by a trusted third party, verified on a block explorer, and ideally immutable (no proxy, no upgradeability). Or at the very least, governed by a multisig with a timelock.
None of these are flaws in the cryptography. These are system design questions that any production deployment needs to answer.
We went from compiled artifacts to a verified on-chain proof. The circuit is trivial, but the pipeline is the same one you'd use for anything more complex - age verification, credential checks, private voting. The hard part isn't the tooling. It's designing the system around it.
Recommended reading
- Noir docs - Proving backend - official getting started guide
- Barretenberg - proving backend documentation
- Foundry Book - Solidity development framework
- EVM precompiles - the precompiled contracts that make on-chain verification possible