(1/25) @ethereum roadmap: Rollups, Danksharding and Settlement Outside the EVM
Rollups are predicated on the idea that the complete, final record is published on-chain. Danksharding is about moving this data outside the EVM, where the EVM can't read it directly.
What does this mean for the Ethereum endgame?
(3/25) @ethereum is the World Computer: a globally shared computing platform that emerges from a network of thousands of computers (nodes).
The Ethereum Virtual Machine (EVM) provides the virtual computer; the blockchain records its history.
inevitableeth.com/home/ethereum/world-computer
(4/25) Each node runs a local version of the EVM, which is then held perfectly in sync with every other copy of the EVM through a process called Proof of Stake (PoS).
Any individual EVM is a window into the shared state of the World Computer.
inevitableeth.com/home/ethereum/network/consensus/pos
(5/25) At the end of the day, real computers need to run the @ethereum software. And so, the World Computer is limited by the minimum requirements it sets for nodes.
Here lies a fundamental trade-off: higher minimum requirements = less decentralization.
(6/25) Enter @ethereum's rollup-centric roadmap:
- computation will migrate from the EVM to rollups
- rollups will settle to Ethereum (post the final record of ownership to mainnet)
- Danksharding will increase Ethereum's capacity for this data
inevitableeth.com/en/home/ethereum/upgrades/scaling
(7/25) From this point, I'll assume you are familiar with rollups:
- Optimistic rollups automatically accept batches, but leave a challenge period for fraud proofs
- ZK-rollups submit validity proofs along with batches
If not, here's your resource.
inevitableeth.com/en/home/ethereum/upgrades/scaling/execution
(9/25) In order to achieve @ethereum-based settlement, each rollup has a smart contract on mainnet.
The external rollup chain will execute thousands of transactions and then periodically send a compressed bundle to this smart contract.
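A minimal sketch of that pattern in Python (names like RollupBridge and post_batch are hypothetical, not any real rollup's contract): thousands of L2 transactions collapse into one compact post to mainnet.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class RollupBridge:
    """Toy stand-in for a rollup's settlement contract on mainnet."""
    state_roots: list = field(default_factory=list)

    def post_batch(self, prev_root: bytes, new_root: bytes, batch_data: bytes):
        # A real contract would also authenticate the poster and enforce
        # fraud or validity proofs; here we only record the claimed transition.
        assert not self.state_roots or self.state_roots[-1] == prev_root
        self.state_roots.append(new_root)

# The rollup executes thousands of transactions off the EVM...
txs = [f"tx-{i}".encode() for i in range(10_000)]
new_root = hashlib.sha256(b"".join(txs)).digest()  # stand-in for a state root

# ...then settles them all with one compact post to mainnet.
bridge = RollupBridge()
bridge.post_batch(prev_root=b"", new_root=new_root, batch_data=b"<compressed txs>")
```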
(10/25) Today, there is one place to "store" data on @ethereum: transaction calldata, passed into a smart contract the same way you would pass any other input.
Herein lies the solution to the data availability bottleneck: create dedicated space for rollups.
(11/25) Long term, we have a solution: Danksharding.
Tl;dr @ethereum will gain "blobs" (large data chunks inaccessible to the EVM). Ethereum will guarantee the availability of all blob data for a long but limited amount of time.
inevitableeth.com/en/home/ethereum/upgrades/scaling/data
(12/25) Post-Danksharding, when a blob is posted to @ethereum it will be immediately archived by services like @etherscan or the Portal Network.
Then, after ~1 month, blobs will expire. Only a cryptographic commitment will remain on the blockchain/within the EVM in perpetuity.
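Sketching that lifecycle, with a plain hash standing in for the real KZG commitment and the ~1 month window as an assumed parameter:

```python
import hashlib
import time

EXPIRY_SECONDS = 30 * 24 * 3600   # hypothetical ~1 month retention window

blobs = {}         # commitment -> (blob body, time posted); prunable
commitments = []   # stays on-chain in perpetuity

def post_blob(blob: bytes) -> None:
    commitment = hashlib.sha256(blob).digest()   # stand-in for a KZG commitment
    blobs[commitment] = (blob, time.time())
    commitments.append(commitment)               # this part never expires

def prune(now: float) -> None:
    """Nodes drop expired blob bodies; the commitments remain."""
    for c, (_, posted) in list(blobs.items()):
        if now - posted > EXPIRY_SECONDS:
            del blobs[c]

post_blob(b"rollup batch #1")
prune(time.time() + 2 * EXPIRY_SECONDS)   # far in the future: the body is gone
assert not blobs and len(commitments) == 1
```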
(13/25) At this point, there may be a nagging feeling in the back of your head; I've mentioned a few times now that blobs won't be accessible by the EVM...
The problem is that rollups, both optimistic and zk, need access to this data in order to function...
...right?
(14/25) Of course not! We've got the magic of KZG Commitments!
Tl;dr KZG commitments allow each blob to be represented by a single value. Using elliptic curve cryptography, you can use this value to prove that a specific piece of data existed in a blob.
inevitableeth.com/home/concepts/kzg-commitment
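To make the idea concrete, here's a toy polynomial commitment in plain Python. Big caveat: the verifier here is simply handed the setup secret S, so this is NOT secure; real KZG hides the secret inside elliptic-curve points and checks the same algebraic identity with a pairing. All parameter values are arbitrary.

```python
# Toy polynomial commitment illustrating the KZG interface.
P = 2**127 - 1      # prime field modulus (illustrative)
S = 123456789       # "trusted setup" secret; real KZG never reveals this

def poly_mul_linear(poly, a):
    """Multiply a coefficient list (lowest degree first) by (x - a) mod P."""
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k] = (out[k] - a * c) % P
        out[k + 1] = (out[k + 1] + c) % P
    return out

def interpolate(values):
    """Coefficients of the unique polynomial with poly(i) = values[i]."""
    coeffs = [0] * len(values)
    for i, y in enumerate(values):
        basis, denom = [1], 1
        for j in range(len(values)):
            if j != i:
                basis = poly_mul_linear(basis, j)
                denom = denom * (i - j) % P
        scale = y * pow(denom, -1, P) % P
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def quotient(coeffs, z, y):
    """Coefficients of (poly(x) - y) / (x - z); exact when poly(z) == y."""
    shifted = list(coeffs)
    shifted[0] = (shifted[0] - y) % P
    q, carry = [0] * (len(shifted) - 1), 0
    for k in range(len(shifted) - 1, 0, -1):
        carry = (shifted[k] + z * carry) % P
        q[k - 1] = carry
    return q

blob = [42, 7, 99, 2024]                 # a tiny "blob" of field elements
coeffs = interpolate(blob)
commitment = evaluate(coeffs, S)         # the whole blob, as one value

# Prove that position 2 of the blob held the value 99:
z, y = 2, blob[2]
proof = evaluate(quotient(coeffs, z, y), S)

# The verifier never sees the blob, only commitment + proof:
assert (commitment - y) % P == proof * (S - z) % P
```

The assert at the end is the heart of KZG: poly(x) - y = Q(x)(x - z) holds as polynomials exactly when poly(z) = y, so checking it at one secret point convinces the verifier.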
(15/25) Every time a new block is created, the block builder will gather all of the blobs, compute a KZG commitment for each, and add the commitments to the block.
Though the EVM won't have access to the blobs directly, it will have access to these commitments.
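A sketch of the builder's job (the Block shape is illustrative, and a hash again stands in for KZG): only the commitments land in the EVM-visible part of the block.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    txs: list               # ordinary transactions: fully EVM-accessible
    blob_commitments: list  # EVM-accessible handles to the blobs
    # The blob bodies travel alongside the block but are never readable
    # from inside the EVM.

def build_block(txs, blobs):
    commitments = [hashlib.sha256(b).digest() for b in blobs]  # KZG stand-in
    return Block(txs=txs, blob_commitments=commitments), blobs

block, sidecar = build_block([b"tx1", b"tx2"], [b"rollup blob A", b"rollup blob B"])
```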
(16/25) This is the true magic of KZG commitments - of elliptic curve cryptography! Using just this lightweight commitment, the EVM effectively has access to the blobs...
...even after they expire!
All with cryptographic, trustless certainty.
(17/25) So let's talk specifically about how this would be implemented. Once again, if you don't have a good understanding of how rollups work, please refer to the link in tweet 7.
Let's imagine an optimistic and zk rollup in a post-Danksharding world.
(18/25) By definition, optimistic rollups don't need access to the blob data when it's posted; they only need it when fraud proofs are being submitted.
Posting an update to an optimistic rollup would remain largely unchanged, simply adding a reference to the KZG commitment.
(19/25) During a fraud proof challenge, the rollup's smart contract would need access to the disputed txns (stored in blobs).
The fraud proof will include KZG proofs, which will be verified against the stored reference; then verification can continue as it does today.
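Here's the whole optimistic flow in the same toy style (hash stand-ins, hypothetical names). One simplification: with real KZG, a challenger can prove individual chunks of the blob instead of resupplying the entire thing.

```python
import hashlib

def commit(blob: bytes) -> bytes:
    return hashlib.sha256(blob).digest()   # stand-in for a KZG commitment

def replay_and_check(blob: bytes) -> bool:
    """Stub for re-executing the disputed txns (as fraud proofs do today)."""
    return True

class OptimisticBridge:
    def __init__(self):
        self.batches = []   # (state_root, blob_commitment)

    def post_batch(self, state_root: bytes, blob_commitment: bytes):
        # Optimistically accepted: nothing is inspected at posting time.
        self.batches.append((state_root, blob_commitment))

    def challenge(self, batch_index: int, claimed_data: bytes) -> bool:
        # Step 1: authenticate the challenger's data against the stored
        # commitment, even though the blob itself may have expired.
        _, stored = self.batches[batch_index]
        if commit(claimed_data) != stored:
            return False
        # Step 2: re-execute and check the disputed transition.
        return replay_and_check(claimed_data)

bridge = OptimisticBridge()
bridge.post_batch(b"root-1", commit(b"batch txs"))
assert bridge.challenge(0, b"batch txs")
```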
(20/25) On the other hand, the smart contract for a zk-rollup never needs access to the transaction data - however, it does need to prove that the transaction data was posted to @ethereum.
Just to over-emphasize: settlement requires the transaction data to be made available.
(21/25) Post-Danksharding, a zk-rollup will post 2 commitments:
- whatever zero-knowledge proof it uses internally
- the KZG commitment for the blob containing the transaction data
The smart contract will then verify that both of these commitments refer to the same data.
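And the zk side, same toy style: verify_validity_proof and check_equivalence are stubs for the real verifiers (actual designs prove the two commitments open to the same data, e.g. by comparing evaluations at a random challenge point).

```python
def verify_validity_proof(proof: bytes, new_root: bytes) -> bool:
    """Stub: a real contract runs a SNARK/STARK verifier here."""
    return True

def check_equivalence(validity_proof: bytes, blob_commitment: bytes,
                      equivalence_proof: bytes) -> bool:
    """Stub: a real design proves both commitments open to the same data."""
    return True

class ZkBridge:
    def __init__(self):
        self.state_roots = []

    def post_batch(self, new_root, validity_proof, blob_commitment, equivalence_proof):
        # 1. The validity proof shows the state transition is correct.
        assert verify_validity_proof(validity_proof, new_root)
        # 2. The equivalence proof ties the data inside that proof to the
        #    KZG commitment of the posted blob: the data was made available.
        assert check_equivalence(validity_proof, blob_commitment, equivalence_proof)
        self.state_roots.append(new_root)

bridge = ZkBridge()
bridge.post_batch(b"root-1", b"<zk proof>", b"<kzg commitment>", b"<equivalence>")
```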
(23/25) And that is how rollups are going to work in a post-Danksharding world.
The big takeaways:
- Danksharding will create blobs: data external to the EVM, but provable from within it
- Rollups will use the same paradigms we have today, but will rely on KZG commitments