polynya

Posted on Nov 16, 2023

The horrific inefficiencies of monolithic blockchains

Nothing here is new, and indeed, I’ve repeated all of this ad nauseam in 2021. Moreover, it’s completely absurd that the industry is mostly obsessing over infrastructure in this day and age, when there are dozens, if not hundreds, of L1s and L2s alike which have barely any non-spam utilization after years of being live. Not to mention the exponential growth of blockspace supply incoming in 2024, 2025 and beyond, with basically an infinite supply of data availability (with different properties). The overwhelming bottleneck has been applications and user onboarding for more than a couple of years now, and with each passing day the gap between demand and supply grows larger. (Addendum: worse still, there’s complete neglect of the valuable applications that have proven product-market fit.) Frankly, I’ve given up on this industry, but I will keep trying in my own little way through the occasional blog post nevertheless.

I have not mentioned a single L1 or L2 in this post - I don’t give a rat’s arse about your petty, pointless bagholder fights, so please don’t drag me into that. I’m just here to tell you why monolithic blockchains are cripplingly bad technology, and why there’s orders of magnitude better technology to upgrade to.

Here’s how I define monolithic chains: blockchains where every user has to naïvely reprocess all transactions to verify integrity. The more transactions the network processes, the higher everyone’s hardware requirements. The more nodes in the network, the more inefficient and slower it becomes; or alternatively, you limit accessibility so that very few people in very few places can run unsubsidized, independent nodes, effectively leading to a dystopia that’s infinitely more centralized than traditional finance. There are myriad other challenges brought to the fore over the span of years and decades, ultimately resulting in social, technical, and economic unsustainability. I have written a book’s worth of content on sustainability, so I’ll leave it at that here.

Let’s say you have 10,000 nodes in a network. IMO, this is not enough, and we should strive to have 100,000 nodes in different types of places across the world. We need nodes at homes, schools, government offices - in large cities, in villages, in Chile, in Papua New Guinea, and eventually in space. The whole point of a public blockchain is lost if you are not resilient to worst-case scenarios. It’s too easy to be complacent in the optimistic scenario and fail at the very moment when blockchains are supposed to be the Phial of Galadriel. But I digress.

So, let’s say you have 100,000 nodes in the endgame - each one has to reprocess all transactions. The network-wide overhead is 100,000x right away, not to mention you are consuming insane amounts of bandwidth to keep all 100,000 nodes synced up. This is horrifically inefficient.
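To make that overhead concrete, here’s a toy back-of-the-envelope calculation (my numbers, purely illustrative) of total work done across a network where every node re-executes every transaction:

```python
# Toy model of monolithic replication overhead (illustrative numbers only).

def network_work(nodes: int, tx_per_sec: int) -> int:
    """Total executions per second across the whole network when every
    node naively re-executes every transaction."""
    return nodes * tx_per_sec

# 100,000 nodes each re-executing a 1,000 TPS chain:
total = network_work(100_000, 1_000)
print(total)  # 100000000 executions/sec for only 1,000 TPS of useful work
```

The useful throughput stays at 1,000 TPS no matter how many nodes you add; only the redundant work grows.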

No, traditional light clients are not the solution. Firstly, traditional light clients are not trustless, but more importantly, you still need a significant cohort of the nodes reprocessing all transactions to verify integrity.

Fortunately, there are solutions to making things thousands of times more efficient. The two key technologies are validity proofs and data availability sampling. Make no mistake, every single monolithic blockchain seeking scale will upgrade to tech like validity proofs and data availability sampling, or risk obsoletion. (Note: of course, we also have fraud proofs, but I’ll focus on validity proofs.)

I have discussed at length why validity proofs are a no-brainer, critical upgrade for all monolithic blockchains - but here’s the gist of it:

  1. You can push system requirements higher, so a validity proven execution layer is necessarily faster than an equivalent monolithic execution layer.

  2. A validity proof that’s 1 MB in size can attest to the integrity of millions of transactions that would otherwise have taken thousands of supercomputers and gigabytes of bandwidth to sync across thousands of nodes. This allows validity proven execution layers to potentially have significantly lower latencies than an equivalent monolithic execution layer, as the verifying nodes need only sync and process a succinct proof.

  3. Finally, and crucially, instead of requiring an unlimited 10 Gbps connection with a supercomputer, the average user can now verify the integrity of the chain on a mobile phone over 4G.

  4. There are many other benefits of validity proven execution layers - the possibility of privacy, for one. But perhaps the most exciting is that you can multiply throughput while retaining atomic composability and without fragmenting liquidity. So, let’s say, a monolithic execution layer tops out at 1,000 TPS. The validity proven equivalent execution layer can push that to 2,000 TPS or more. And then you can have 100 more of these chains aggregating proofs. You have gone from 1,000 TPS to 200,000 TPS while the cost of verification has gotten significantly lower. More importantly, the overall infrastructure cost of the network is now infinitely more efficient.
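The throughput arithmetic in that last point can be sketched out directly (all figures are the hypothetical ones from the text, not measurements):

```python
# Illustrative throughput comparison using the hypothetical figures above.

monolithic_tps = 1_000   # where a monolithic execution layer tops out
proven_tps = 2_000       # the validity-proven equivalent, with higher specs
num_chains = 100         # proof-aggregating chains sharing verification

aggregate_tps = proven_tps * num_chains
print(aggregate_tps)                    # 200000
print(aggregate_tps // monolithic_tps)  # 200x the monolithic baseline
```

The key point is that the 100 chains multiply throughput while verification cost stays succinct, rather than multiplying every node’s workload.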

But of course, while validity proofs can compress a lot of computations and data, we still need some raw data. And this is where data availability sampling comes into play. In this system, the more nodes you have, the more data you can potentially process, effectively minimizing bandwidth as a bottleneck and cheating past the speed of light. So, you can scale far beyond what a monolithic blockchain will offer. However, I’m not going to spend much time on DAS because this is not going to be the bottleneck, maybe ever.
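A rough sketch of why sampling works (simplified; this assumes the standard 2x erasure-coding setup, where an unavailable block must be missing at least half its chunks, so each independent random sample fails to notice with probability at most 0.5):

```python
# Probability that a single light node detects an unavailable block after
# k random samples, under the 2x erasure-coding assumption: an unavailable
# block is missing >= 50% of chunks, so each sample misses with p <= 0.5.

def detection_probability(samples: int, missing_fraction: float = 0.5) -> float:
    return 1 - (1 - missing_fraction) ** samples

print(detection_probability(1))   # 0.5
print(detection_probability(30))  # ~0.999999999, with just 30 tiny samples
```

Each node does a constant, tiny amount of work, and confidence compounds exponentially; that is why adding nodes adds capacity instead of overhead.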

So, what are the drawbacks?

First, let me address the non-drawbacks:

  1. Cost: Validity proven execution layers and DAS-secured data layers do have an upfront cost in generating the proofs, but they are orders of magnitude cheaper overall, because verifying said proofs costs a tiny fraction of re-execution. For a network with 100,000 nodes, for example, network-wide costs will be at least 50,000x cheaper. Also, the cost of validity proofs continues to plummet, to the point that even something as complex as a zkEVM was trivial nearly a year ago. Finally, the biggest cost in public blockchains is actually sybil resistance via economic security, which is another phenomenal benefit of validity proofs: you can now have basically an infinite number of chains sharing security, instead of fragmenting it to the point where each chain has basically no security.

  2. Latency: Proof generation is very parallelizable. Indeed, because verifying nodes only have to wrangle a fraction of the data, and bandwidth (where monolithic blockchains spend so much of their time) is often the bottleneck, latencies can even decrease as the tech matures.

  3. Complexity: Every leap forward in technology requires complexity, always has, always will. If something brings a 1,000,000x increase in efficiency, the right approach always is to master the complexity, battle-test it, not to just give up and cope with old tech. Otherwise, you’ll be obsoleted by those who do.
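To illustrate the cost structure in point 1: proving is paid once, verification is paid per node but is nearly free, so the network-wide total flips in favor of proofs as node count grows. All constants below are made up for illustration; the exact ratio depends entirely on the proving overhead and verification cost of a given system.

```python
# Toy cost model (hypothetical unit costs, purely illustrative):
# monolithic  = every node pays the full execution cost;
# proven      = one expensive prover + near-free verification per node.

def monolithic_cost(nodes: int, exec_cost: float) -> float:
    return nodes * exec_cost

def proven_cost(nodes: int, exec_cost: float,
                prove_overhead: float = 1_000.0,   # assumed: proving ~1000x execution
                verify_fraction: float = 1e-6) -> float:  # assumed: verification ~1e-6x
    proving = exec_cost * prove_overhead
    verifying = nodes * exec_cost * verify_fraction
    return proving + verifying

nodes, exec_cost = 100_000, 1.0
print(monolithic_cost(nodes, exec_cost))  # 100000.0
print(proven_cost(nodes, exec_cost))      # ~1000.1, orders of magnitude less
```

Note that the prover cost is fixed while the monolithic cost grows linearly with nodes, so the gap only widens as the network decentralizes.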

Debunking some more false dichotomies:

  1. Both monolithic and validity proven execution layers benefit from optimization at the VM, parallelization, and client level, and from faster hardware. Indeed, validity proven execution layers benefit more from faster hardware and parallelization - due to a) specialization of builders; and b) proof generation. With validity proofs, you can also have far greater experimentation and rapid innovation, where execution layers can specialize in execution. This is particularly useful for app-specific chains.

  2. It’s not horizontal vs. vertical scaling. Validity proven execution layers give you horizontal and vertical scaling simultaneously. This is what true parallelization looks like. Parallelization for each chain x parallelization across chains.

  3. “Integration” is not a property of monolithic or validity proofs. Both can be integrated at L1 with no compromises, or they can be separated at L2. There is more than one project that already does this; I’m not going to name them, as mentioned above. Indeed, for a healthy ecosystem, you need validity proven execution layers at both L1 and L2 levels, as they have their own benefits and drawbacks. Choice is always great.

  4. Not only can validity proofs retain composability, they’re the best way to do so cross-chain. Indeed, it’s very likely monolithic chains will never cross-compose with each other and will always fragment liquidity; meanwhile we have multiple projects building cross-composing, liquidity-sharing validity proven chains.

The real drawback:

Timing: Next-generation tech like validity proofs and data availability sampling will take time - longer than I hoped. But steady progress is being made every day, and we now have multiple solutions in production, and more entering over the next couple of years. While I don’t know how long it’ll take, the proliferation of validity proofs has already started and is inevitable.

Look, it’s perfectly fine to have a monolithic blockchain today, the technology to push past its crippling limitations did not exist 5 years ago. But it’s also imperative to acknowledge the reality that next-gen tech like validity proofs & data availability sampling are here to stay, and the entire blockchain world will inevitably converge on this one design that makes so much sense. I bet you every single monolithic blockchain project worth its salt is researching validity proofs, and those furthest along in this will reap the rewards, while the laggards still gaslighting the crypto community by dismissing massive benefits of validity proofs will have a very difficult time in the future. Instead, just embrace the new tech.

This is the only currently known way for the blockchain world to achieve our endgame of global scale, all verified on our mobile phones. Monolithic blockchains CANNOT do either: not the scale, and not the verification.

Unless you don’t need scale, of course, like Bitcoin.

I’ll end this by saying, once again, all of this post is totally pointless, and I feel utterly embarrassed for indulging in this discussion with this post. So, I’ll go back to talking about things that actually matter - applications, governance, UX and onboarding.