(1/25) @ethereum Roadmap: [Potential Danksharding] Consensus Updates
Today, Ethereum is not scalable, but there is a clear path from "a big blockchain" to The World Computer, and let me tell you... it is DANK.
Let's talk about the changes we need to make to consensus.
(2/25) @ethereum is the World Computer, a single, globally shared computing platform that exists in the space between a network of 1,000s of computers (nodes).
These nodes are real computers in the real world, communicating directly from peer to peer.
twitter.com/SalomonCrypto/status/1566078593150492675
(3/25) As of mid-September 2022, @ethereum has switched its consensus mechanism to Proof of Stake (PoS).
Tl;dr node operators stake $ETH in order to gain the role of validator, earn rewards and secure Ethereum. This stake can be slashed in cases of malicious behavior.
twitter.com/SalomonCrypto/status/1579594609855934465
(4/25) Today, the World Computer is SLOW. The EVM is not a high-performance environment; both execution and storage are expensive, and we are already pushing up against the limits of @ethereum.
And so, we must look for (credibly neutral) ways to scale.
(5/25) After years of research and development, the @ethereum community has found the best path forward: rollups.
Rollups are independent, high-performance blockchains that settle to Ethereum. Rollups can be fast (and centralized) and STILL benefit from Ethereum security.
twitter.com/SalomonCrypto/status/1569461980821606403
(6/25) But rollups are only part of the solution; while they provide an incredible performance environment, they do not scale the storage capabilities of @ethereum.
In fact, because they are so fast (and generate so much data), rollups make the problem worse.
(7/25) As of today, we have a plan: Danksharding. But we are still far from implementation, and a lot of details need to be filled in.
So let's start with the big-picture idea: blobs.
(8/25) Imagine the blockchain as a database that contains all the transactions that have ever happened on the World Computer. It is critical that this information is always directly available to any node; this is the internal state of @ethereum.
(9/25) Rollups, on the other hand, are completely outside of @ethereum. Yes, they settle (post a reconstructable copy of all transactions) on the World Computer, but that's just a copy.
It's NOT important that nodes can directly access this data.
(10/25) What IS important is that we can guarantee this data was posted to @ethereum, is completely public and requestable by anyone, and is 100% available for download.
So this is our design space: data blobs that exist outside of the EVM.
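To make that concrete, here is a toy sketch (in Python) of what a blob-carrying payload could look like; the shape and field names are purely illustrative, not the actual spec:

```python
from dataclasses import dataclass

@dataclass
class BlobSidecar:
    """Illustrative only: a blob of rollup data plus a small commitment."""
    blob: bytes        # bulky rollup data; opaque bytes the EVM never executes
    commitment: bytes  # small binding commitment that IS referenced on-chain
```

The key design idea: the chain itself only carries the small commitment, while the bulky blob lives outside the EVM.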
(11/25) Today, tomorrow and forever, it will be 100% necessary for every node to download every block. But our new scheme will not force every node (or even any single node) to download all the blob data; it only needs to ensure that the data is available in aggregate across the entire network.
(12/25) We can achieve this effect with some clever peer-to-peer (P2P) networking design.
Tl;dr in P2P networks, nodes communicate directly with each other (instead of through a central server). We can organize the network to store huge amounts of data without crushing any single node.
twitter.com/SalomonCrypto/status/1585042835459387392
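To give a flavor of what that design could look like, here is a toy sketch of per-node custody: each node deterministically stores a small random slice of the data, and in aggregate the network covers all of it many times over. Purely illustrative, not the actual @ethereum design:

```python
import random

def custody_chunks(node_id: str, total_chunks: int, custody_size: int) -> list[int]:
    """Toy example: derive the random subset of chunk indices a node stores."""
    rng = random.Random(node_id)  # deterministic per node, so peers can find the data
    return sorted(rng.sample(range(total_chunks), custody_size))

# e.g. one node holding just 8 of 512 chunks
print(custody_chunks("node-42", total_chunks=512, custody_size=8))
```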
(13/25) Good news and bad news:
Bad news: this is going to require some big changes to @ethereum... especially in the consensus mechanism.
Good news: a huge amount of the work is coming early in EIP-4844 (Proto-Danksharding).
twitter.com/SalomonCrypto/status/1559402384526258176
(14/25) EIP-4844 will deliver the following changes to @ethereum consensus:
- data blobs with an independent gas market
- changes needed at the intersection between execution and consensus
- separation between block verification and blob data availability verification
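On the first point: "independent gas market" means blob space gets its own EIP-1559-style fee that responds to blob demand, separately from EVM gas. Here is a sketch based on the draft EIP-4844 fee mechanism (the constants below are placeholders; names and parameters were still in flux at the time of writing):

```python
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator), per the draft EIP."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

MIN_DATA_GASPRICE = 1                    # placeholder constant
DATA_GASPRICE_UPDATE_FRACTION = 2225652  # placeholder constant

def get_data_gasprice(excess_data_gas: int) -> int:
    # The blob fee rises/falls exponentially with the running surplus of
    # blob gas consumed above the per-block target
    return fake_exponential(MIN_DATA_GASPRICE, excess_data_gas, DATA_GASPRICE_UPDATE_FRACTION)
```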
(15/25) EIP-4844 is a huge step forward, creating the blob market and making all the changes needed to the execution layer of @ethereum.
But there is still a lot of work that needs to be done, and a lot of designs that need to be finalized.
(16/25) The biggest obstacle we still need to overcome is the actual implementation of the erasure coding and the data availability sampling that is foundational to our P2P network design.
It doesn't matter how complete the architecture is without an actual sampling process.
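Here's the intuition behind the sampling, as a toy sketch. Erasure coding extends the blob so that ANY half of the chunks can reconstruct it; to hide the data, a withholder must therefore withhold more than half, and each random probe then catches them with probability >= 1/2. This code is illustrative, not the real protocol:

```python
import random

def sample_availability(chunks: list, num_samples: int = 30) -> bool:
    """Toy data availability sampling: probe random chunk indices.

    If more than half the chunks are withheld (the only way to hide
    erasure-coded data), all 30 probes succeed with probability <= 2**-30.
    """
    indices = random.sample(range(len(chunks)), num_samples)
    return all(chunks[i] is not None for i in indices)
```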
(17/25) <NOTE>
We also still need to formalize the implementation of the KZG commitment scheme (the theory/math is well understood).
We haven't discussed how KZG commitments will be used in Danksharding (yet); I'm just including this now for completeness.
</NOTE>
twitter.com/SalomonCrypto/status/1583705993300492288
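For the mathematically curious, here's the standard KZG scheme in a nutshell (the general construction, NOT its Danksharding-specific usage): commit to a polynomial p, then prove individual evaluations against that tiny commitment.

```latex
% Standard KZG (sketch); \tau is a secret from a trusted setup, never known to anyone
C = [p(\tau)]_1                               % constant-size commitment to p
q(X) = \frac{p(X) - y}{X - z}                 % quotient, a polynomial iff p(z) = y
\pi = [q(\tau)]_1                             % constant-size proof that p(z) = y
e(C - [y]_1,\ [1]_2) = e(\pi,\ [\tau - z]_2)  % verifier's pairing check
```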
(18/25) Full Danksharding is dependent on another, independent @ethereum upgrade: enshrined PBS (Proposer-Builder Separation).
Although PBS was originally conceived in the context of MEV, it will become incredibly important for Danksharding.
twitter.com/SalomonCrypto/status/1570557757190983680
(19/25) It turns out that a lot of the work that goes into constructing a blob is computationally intensive and will (probably) be unrealistic for a minimal @ethereum node.
PBS will allow blob builders to centralize and specialize without compromising on security.
(20/25) A future with both PBS and Danksharding might look like this:
1) validator selected as a block proposer
2) proposer selects highest value block from block market
3) proposer selects highest value blobs from the blob market
4) proposer proposes the block/blobs combo
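As pseudocode, the proposer's side of this stays trivially cheap; every name below is hypothetical, just to show the shape of the flow:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    builder: str
    value: int      # payment offered to the proposer
    payload: bytes  # block body or blob data (opaque to the proposer)

def propose(block_bids: list[Bid], blob_bids: list[Bid], max_blobs: int):
    """Toy PBS flow: the proposer simply takes the highest-paying bids."""
    best_block = max(block_bids, key=lambda b: b.value)
    best_blobs = sorted(blob_bids, key=lambda b: b.value, reverse=True)[:max_blobs]
    return best_block, best_blobs
```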
(21/25) This workflow assumes a robust block and blob market, with at least 2 honest competitors bidding for the proposer's selection.
But, in the worst case, the validator can just build their own; it's just that the blocks will be suboptimal and the blobs won't be full.
(22/25) Another important aspect of @ethereum PoS that needs to change is the fork-choice rule.
Today, LMD-GHOST only looks at blocks. Under Danksharding, the protocol will also need to consider blobs (although some/all of this logic may be released with EIP-4844).
twitter.com/SalomonCrypto/status/1576016595452731394
(23/25) The new rule introduces the concept of "tight coupling," which states that a block is only eligible if all blobs in that block have passed a data availability check.
With tight coupling, if the chain contains even a single invalid blob, the entire chain is invalid.
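A sketch of what tight coupling could look like in fork-choice code (hypothetical names; is_available stands in for the data availability check discussed above):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Block:
    parent: Optional["Block"]
    blob_commitments: list[bytes] = field(default_factory=list)

def is_available(commitment: bytes) -> bool:
    # Stand-in for data availability sampling (see the sketch earlier)
    return True

def is_eligible(block: Optional[Block]) -> bool:
    """Tight coupling: a block enters fork choice only if every blob it
    (and every ancestor) commits to passes the availability check."""
    while block is not None:
        if not all(is_available(c) for c in block.blob_commitments):
            return False  # one unavailable blob invalidates the whole chain
        block = block.parent
    return True
```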
(24/25) The rest of the changes needed are less interesting and more about implementation. Things like "which fields need to be added to blocks" and "how to distribute validators when validator count is unreasonably low."
But if you've made it this far, you get the big picture.
(25/25) @ethereum is the World Computer, and today the World Computer is SLOW and EXPENSIVE...
...today.
Just keep looking forward anon, you don't want to miss what's coming