Cookies Research

Published 2023-05-15 · Read on Mirror

Empire Podcast Summary: How Rollups Will Decentralize Their Sequencer | Josh Bowen, Ben Fisch

Josh Bowen, CEO of Astria Labs, together with Ben Fisch, CEO of Espresso Systems, joined Jason and Santiago on Empire to discuss the decentralized sequencer landscape and the vision each of Astria Labs and Espresso Systems has for building this novel piece of infrastructure. This was certainly a thought-provoking podcast, with evidently differing points of view from the two founders. Hope you enjoy this, and as always, text in italics reflects my own thoughts.

Disclaimer: Some content has been rephrased for better understanding.

Santiago: Let’s start off with some brief introduction.

Josh (Astria): The CEO and co-founder of Astria. Prior to Astria, Josh was at Celestia, working mostly on how to deploy rollups on Celestia more generally, taking a model slightly distinct from existing L2s in the known paradigm (e.g. Optimism & Arbitrum). Before Celestia, Josh worked at The Graph and Google. The bulk of his research in crypto has focused on scalability, and Josh sees shared / decentralized sequencers as a continued extension of that scalability research.

Ben (Espresso): Got introduced to crypto through academic research while doing a PhD in applied cryptography at Stanford. Has worked on applications of cryptography to blockchains since 2015, focused on these areas:

  • Alternative ways of doing consensus in a permissionless network (Proof-of-Work, Proof-of-Space, Proof-of-Stake)

  • Using cryptographic tools to scale blockchain performance

Was previously at Filecoin, where he worked on problems related to randomness beacons, work that has been incorporated into present-day Ethereum. Building Espresso Systems is a long-term passion: scaling blockchains without compromising on what they are supposed to do and their core principles of decentralization and fault tolerance.

Jason: What are sequencers and validators?

Ben (Espresso): 'Sequencer' is a term that has arisen from the more modular description of blockchains.

The original concept of a blockchain is as follows: there is a distributed virtual machine (VM) that has to order transactions (txns), apply them to a state machine, and execute them.

Even with some roles outsourced to L2s, mainly proving and execution, which is the core of how rollups achieve scalability and remove computational bottlenecks, there are still other roles and functions that have to be taken care of. To elaborate: if a rollup is proving the result of executing some sequence of txns, how do those txns get ordered? Are they ordered but not executed by the L1? Or does an external system handle the ordering? This mirrors how data availability (DA) has been outsourced to dedicated DA layers.

All of the above functions are required for consensus. First, the data must be made available and ordered by the sequencer (whether a single sequencer or a decentralized consensus protocol). The txns are then executed in that order to derive the resulting state of the machine. Finally, all other clients connecting to the system have to be convinced of that state.

Jason: What is the state of how sequencers work on rollups today?

Josh (Astria): Rollups today generally conflate two components:

(1) Sequencing: Ordering the blocks. Find the given fork of the chain and append the block to it based on the canonical ordering. Sequencing can be seen as the act of taking a set of txns from a mempool (or however else the unordered txns are gathered) → producing the ordered block → attesting to the ordering, which is also known as validation.

(2) Execution: This involves generating the state root. The given state machine transition function is applied over the ordered txns of a block, on top of the state root at the end of the previous block.

The existing state of rollups is as follows: a centralized sequencer run by Optimism or Arbitrum determines the order of the block → the txns are batched → the txns are executed → the batch is submitted as calldata to their contract on Ethereum → followed by a second step of attesting to the resulting state root, i.e. the actual state of the L2's state machine.

The key distinction with shared sequencing is that sequencing is pared down to just ordering, whereas rollups today do both the ordering and the validation/attestation of the resulting state root.
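To make this separation concrete, here is a minimal Python sketch (all names are hypothetical, not Astria's or any rollup's actual code) of the two roles Josh describes: a sequencer that only orders txns and commits to that ordering, and an executor that applies the state transition function over the ordered txns to derive a new state root.

```python
import hashlib
import json

def sequence(mempool: list[dict]) -> list[dict]:
    """Sequencing: produce a canonical ordering of pending txns.
    A simple fee-priority rule is used here purely for illustration."""
    return sorted(mempool, key=lambda tx: tx["fee"], reverse=True)

def ordering_commitment(ordered_txns: list[dict]) -> str:
    """The sequencer attests only to the ordering, not to execution."""
    payload = json.dumps(ordered_txns, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def execute(prev_state: dict, ordered_txns: list[dict]) -> tuple[dict, str]:
    """Execution: apply the state transition function over the ordered
    txns on top of the previous state, yielding a new state root."""
    state = dict(prev_state)
    for tx in ordered_txns:
        # Toy transition: debit sender, credit receiver.
        state[tx["from"]] = state.get(tx["from"], 0) - tx["amount"]
        state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    root = hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()
    return state, root

# A shared sequencer performs only the first two steps; the rollup
# (or anyone else) can run `execute` afterwards to derive the state root.
```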

Jason: There are three layers today: DA, sequencing and execution. Is it fair to say that right now sequencing, determining the state, and execution are all happening in one layer, but in the future these will be owned by different folks who specialize in each layer?

Ben (Espresso): Today, most L1 blockchains are doing all three of these. Most rollups on Ethereum use Ethereum for DA but are taking ownership of, and centralizing, the sequencing, execution and proving. Ben expects to see these roles become more modular.

However, it is not clear whether proving needs to be decentralized. Once the txn order has been determined in the state machine, anyone can submit a proof; it is not a permissioned role. The same goes for fraud proofs. But the determination of which txns get included, and in what order, should not be centralized if blockchains are to retain their original properties.

Sequencing was initially centralized for performance, mainly because it was too expensive or not user-experience (UX) friendly to use Ethereum for both availability and ordering.

Santiago: How complicated is it, in the shoes of Arbitrum and Optimism, to integrate decentralized sequencing? Is it fair to say that they intentionally punted on this, hoping for someone to fix it? Or are the teams employing a sort of progressive decentralization, where it is more of a technical problem and they hope the UX will get vastly better at some point?

Ben (Espresso): Ben highlighted that almost all projects today have decentralization on their roadmap. It is difficult for a scalability solution to brand itself by saying 'we solved the scalability problem by centralizing the process of ordering your data', because this loses the core principles of:

  1. Credible neutrality of the system

  2. Anti-monopolistic behavior of the system

It is likely that existing rollups see decentralization as an iterative process. Given that it is already difficult to build zkEVMs and optimistic VMs, teams are likely to prefer starting with a centralized sequencer as it is simpler and can kick start the product. Eventually, these rollups are either planning on building their own decentralized solution for sequencing, or hoping for someone to come up with a solution to which they can plug in.

Santiago: From a user’s perspective, what’s the worst that can happen in today’s environment? Where it’s centralized, what are the biggest risks and the difference between Arbitrum and Optimism?

Ben (Espresso): This question applies to blockchain in general. What are the drawbacks of a centralized system, and what are the advantages of blockchain? The answer to this is not straightforward; it's rather nuanced. We can look at the evolution of how apps have been built on blockchain and explore whether decentralization has played a role in that.

When comparing centralized and decentralized systems, there are two important aspects to consider: credible neutrality and the economic perspective. One of the incredible things about blockchain is that it achieves services with network effects without monopolization. One of the reasons internet services are so monopolized is their extremely strong network effects, e.g. it would be hard to compete with Facebook. It becomes challenging for users to switch to a new system because all the data, users, and liquidity are concentrated in one place. Similarly, it's difficult for another blockchain to compete with Ethereum because liquidity, users, and apps are already established there.

However, the decentralization of the validator set, specifically the set of nodes that determines what gets included, how it's priced, and how it's ordered, fosters competition among the different nodes.

Decentralization leads to short-sighted, myopic behavior from participants, rather than long-term strategies to price out users who are not willing to pay enough. In a centralized blockchain, by contrast, we might see pricing strategies that limit txns to only the parties willing to pay a significant amount, potentially setting the price above the market-clearing price (where supply meets demand) in an attempt to maximize revenue.

The scalability problem arises when we want to increase the supply side without compromising decentralization. If we increase supply but lose decentralization, it essentially results in a monopolistic system that uses monopolistic strategies to determine resource allocation.

Jason: To summarize, it seems that the risks of having a centralized sequencer are:

  • Censorship (Risks of re-ordering)

  • Risks of sequencer going down

  • Monopolistic behavior

Santiago: Are users’ funds ever at risk? Is there a backdoor where a rug could occur? A lot of projects claim that the worst is a delay but users will always get their funds back because there’s no ability for the team behind the project to rug users and take assets.

Josh (Astria): With the fundamental structure, the only party allowed to append new blocks to the chain is the centralized party (sequencer). However, this does not mean that Optimism and Arbitrum have access to users' private keys to sign messages that move funds from one account to another. They can, however, hard fork the chain and zero out user funds. This action is detectable, and users can connect to the RPC and submit fraud proofs to prove an invalid state transition has occurred.

Assuming there is no way for bad actors to hard fork the chain undetectably, the team behind the platform cannot steal the funds. From a censorship perspective, there is an escape hatch that forces withdrawal txns through via L1: users submit txns directly through L1 smart contracts, inheriting Ethereum L1's censorship-resistance properties. However, concerns arise during times of network congestion, which degrade quality of service. If, for any reason, a user is censored by the sequencer and the sequencer does not batch and submit the transaction, the user will have to go through L1 and pay the associated L1 cost, which could be high when the network is congested. This raises questions about the cost savings of submitting txns to L1 versus L2.
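As a rough illustration of the escape hatch Josh describes, the sketch below models forced inclusion via an L1 contract. The class, method names, queue mechanics, and inclusion window are all hypothetical assumptions for this sketch; production rollups implement the mechanism differently (Arbitrum, for instance, uses a delayed inbox).

```python
import time

# Hypothetical constant for illustration only.
FORCED_INCLUSION_WINDOW = 24 * 60 * 60  # seconds the sequencer has to include a txn

class L1EscapeHatch:
    """Toy model of an L1 contract that lets users force txns past
    a censoring L2 sequencer. User pays L1 gas on submission, which
    may be expensive under congestion, as noted above."""

    def __init__(self):
        self.forced_queue: list[tuple[float, dict]] = []

    def submit_forced_txn(self, txn: dict):
        # The txn is recorded on L1 with a timestamp.
        self.forced_queue.append((time.time(), txn))

    def must_include(self, now: float) -> list[dict]:
        # After the window elapses, the rollup's derivation rules treat
        # these txns as part of the canonical chain whether or not the
        # sequencer ever batched them.
        return [tx for (t, tx) in self.forced_queue
                if now - t >= FORCED_INCLUSION_WINDOW]
```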

Moving on to Miner Extractable Value (MEV): there is a question of whether Optimism and Arbitrum, as sequencers, have the exclusive ability to extract MEV, while everyone else is left with a first-come, first-served (FCFS) model, which is basically just probabilistic spamming to get txns in. Detecting this fault is one of the major concerns. It is likely possible to detect whether Optimism is front-running all txns, but it is inherently difficult in blockchain to ascertain whether Optimism controls a pool of addresses or any other entity involved in front-running. Verifying credible neutrality becomes challenging.

In existing platforms like Arbitrum and Optimism, funds can be withdrawn using the escape hatch mechanism. The centralized sequencer is not capable of signing transactions on users' behalf, since it does not hold their private keys.

Ben (Espresso): When it comes to security, it’s easy to oversimplify. Comparing centralized and decentralized systems is like comparing apples to oranges; one is not inherently more secure than the other. We need to unpack the details. In the case of rollups, for example, bridging from Ethereum and depositing ETH into a rollup, the question arises: can those funds be stolen? The answer is no, as there are escape hatches and ways to withdraw. Ethereum continuously verifies the state of the rollup, and there is a commitment on the Ethereum smart contract to the definition of the VM.

However, if the VM is defined by the company running it, and the company can change how it works and update the smart contract, the question becomes more complicated, as we are now looking at the security and finality of txns within the rollup itself. While proofs are verified by Ethereum every few hours, if the rollup provides soft confirmations through a centralized sequencer and users rely on those for finality, there is a risk of those txns being reversed.

Jason: To summarize, there are three roles to the blockchain:

(1) DA Layer: Guaranteeing the availability of the data

(2) Achieve consensus on txn ordering

(3) Executing the txn

Why do rollups even need a sequencer? Why can’t rollups use L1 for both DA and ordering of data?

Ben (Espresso): As a matter of fact, L1 can be used for making data available and ordering it, but not for executing it. The role of the rollup would then be to report the result of state execution. It can either post the result and wait for someone to challenge it with a fraud proof, or it can immediately prove that it is correct using SNARKs. Essentially, rollups are just separating ordering from execution. This raises the question of whether Ethereum, as the L1, is ideal for making data available and ordering it, or whether there is room for improvement.
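To separate the two settlement modes Ben mentions, here is a schematic Python sketch (types and interfaces are simplified assumptions, not any protocol's actual API): an optimistic rollup posts the claimed result and waits out a challenge window, while a validity rollup posts a SNARK that is checked immediately.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StateClaim:
    prev_root: str
    new_root: str
    txn_batch_hash: str

def settle_optimistic(claim: StateClaim, challenge_window_days: int = 7) -> str:
    # The rollup posts the claimed result; anyone may submit a fraud
    # proof during the window. Proving is permissionless, as Ben notes.
    return (f"claim {claim.new_root} finalizes after {challenge_window_days} "
            f"days unless a fraud proof succeeds")

def settle_validity(claim: StateClaim, snark_proof: bytes,
                    verify: Callable[[StateClaim, bytes], bool]) -> str:
    # The rollup posts a SNARK with the claim; the L1 verifies it
    # immediately, so finality does not wait on a challenge window.
    if verify(claim, snark_proof):
        return f"claim {claim.new_root} finalized immediately"
    raise ValueError("invalid proof rejected")
```

The practical difference shows up mainly in withdrawal latency: fraud-proof systems wait out the window, while SNARK systems finalize as soon as the proof verifies.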

The use of consensus protocols for both DA and ordering involves fundamental trade-offs. A system can be dynamically available but have long latency, or it can provide optimistic responsiveness for faster confirmation but not be dynamically available. Ben's perspective is that the same validator set can run multiple ordering services, allowing users to choose which one to use. This is the approach taken by Espresso, which uses EigenLayer to enable Ethereum validators to run an optimistically responsive protocol, giving users more options. However, users can switch back to using Ethereum once Danksharding is implemented and Ethereum handles data availability in a more scalable manner (which it is not currently optimized for). At that point, Ethereum can become a reliable service for ordering and data availability.

Cookies: Given what Ben has mentioned, I am curious about the longevity of decentralized sequencer networks. If users can eventually opt back into using Ethereum, it seems that decentralized sequencer networks are just a momentary fix, not a permanent one.

Hence, protocols in the modular blockchain ecosystem, such as Celestia and Espresso, are working on building systems that are optimized for data availability and ordering. These protocols focus on these specific functions without the need to handle execution, which is outsourced.

Jason: Why would an L2 want to use an external sequencer, given that a centralized sequencer can be seen as a honeypot for MEV? The concerns that Ben mentioned, such as censorship resistance and monopolistic behavior, are challenges that protocols aim to address and protect users from. However, from the perspective of L2 solutions, these aspects can actually be advantageous. So, what incentives exist for L2 solutions to use an external sequencer?

Ben (Espresso): In order to accrue value, a rollup needs users. Thus doing things that are better for users does indeed lead to value accrual to a rollup. This is especially so in a competitive environment where multiple rollups exist and users have the freedom to choose which to use. It's important for rollups to align with user preferences.

Moving sequencing to another system, whether it's Ethereum L1 or another system in the modular stack (such as Celestia, Espresso, or Astria) for handling data availability and ordering, doesn't mean that the value won't be shared back to the rollup. In fact, there is a strong incentive for communities like Celestia, Astria, Espresso, and their stakeholders to retain rollups, because rollups can easily migrate elsewhere if they feel they are not receiving their fair share of value. This poses an economic allocation problem, which is present not just in blockchains but also in other verticals. For example, when multiple phone companies share a common infrastructure, how do they price services and allocate revenue? This problem is solvable, however. If the system as a whole is better for users, more interoperable, and creates more economic value overall, there is a larger pie to be shared.

Josh (Astria): The decision of whether to use Ethereum L1 for sequencing ultimately boils down to user demand. For example, one could execute txns through Arbitrum or Optimism and post them to L1 → the L1 provides availability → an off-chain process produces the state root → followed by attestation to the validity of the txns → and eventually a challenge mechanism.

However, fundamentally, users prefer faster block times and want faster soft commitments. One reason decentralized sequencers might be valuable is the general perception that users gravitate towards solutions offering a visually and tangibly higher-quality UX. They will choose the 2-second option over the 20-second option without seeing or understanding the underlying tradeoffs that produce the speed difference.

One of the difficulties of cryptography and security is that users only become aware of it when catastrophic failures occur. Prior to that, users typically assume that everything is running smoothly. This poses the question of ‘If a system fails catastrophically, is it too late to inform users that they had unknowingly accepted certain tradeoffs?’

Decentralizing sequencers essentially aims to maintain decentralization while tapping into the modular thesis. By focusing on specific roles or a smaller set of blockchain functionalities, sequencers can optimize performance. Decentralized sequencers provide decentralization guarantees. Even though these might not be as strong as Ethereum's, since they lack its validator set size and economic weight, they still provide soft commitments that are more credibly neutral than a centralized provider's. This offers both an improvement on the decentralization axis and the higher-quality UX users want, serving as an alternative to centralized systems. It should be noted that if the only way for users to get the high-quality UX is through a centralized system, a portion of users will likely still opt for that. Thus, providing decentralized alternatives with good user experiences is essential.

Jason: There is this concern that rollups using shared sequencers may not experience the same level of value accrual as rollups with their own sequencers. However, it seems that both Josh and Ben are of the view that consumers and users will eventually prefer rollups that decentralize their sequencers. As a result, these rollups are expected to attract more users and, consequently, accrue more value.

Josh (Astria): The above point of view from Jason is a rather optimistic one. Josh has a more cynical take and believes that users don't actually care about decentralization.

The assumption is that as the user base expands, new users may be less ideologically aligned with the decentralized ethos compared to early Bitcoin adopters in 2011. It becomes the responsibility of technology creators to reinforce the importance of decentralization and provide a good UX, even for users who may not prioritize decentralization.

Simply promoting decentralization alone is not enough to attract users. Active marketing campaigns are necessary to highlight the significance of avoiding points of centralization within rollups. This prevents users, who may not have the time or inclination to thoroughly consider these aspects, from accepting centralization as the norm simply because it has been in place for a certain period of time. Effectively addressing this issue requires a combination of marketing and public relations efforts to advocate for changing the unacceptable status quo. However, it is crucial to approach this in a way that does not force users to make significant trade-offs in terms of UX. Ultimately, it is the responsibility of technology creators to strike the right balance.

Santiago: Would regulation be the main catalyst here?

Josh (Astria): From an external perspective, it certainly plays a role. Internally, the community should be pushing for decentralization. If, for instance, the founder of Arbitrum or Optimism were to be arrested, there would be a strong push to decentralize sequencers. However, we can also altruistically move towards the industry's ideals without being directly threatened by external regulatory bodies.

Ben (Espresso): It's not that users don't care about decentralization or that decentralization is solely for regulatory arbitrage. Users like and value the benefits that stem from decentralization. Even if they don't explicitly mention that they care about decentralization, users appreciate the fact that all apps on Ethereum are interoperable and composable with other apps, which is made possible because Ethereum is a decentralized system where everyone shares the same platform. This is not feasible if all rollups use a centralized sequencer. Hence, if decentralization is indeed a pre-cursor for greater interoperability between rollups, then users do value decentralization as they value shared liquidity and the ability to seamlessly transact across different ecosystems. Otherwise, users might have funds on zkSync (or any other ecosystems) and won’t be able to transact anywhere else.

Cookies: We can take the example of the subway / train. Without composability, passengers switching from one line to another on the subway would be required to use a different card, despite all the lines being the same form of transport. In that case, I would have to keep many different cards on me at all times to take the subway smoothly. This taps into a point that Santiago and Jason mentioned in one of their earlier podcasts: technological effects have to be apparent and tangible for users to notice them, with ChatGPT as the example brought up. In this case, given that users experience this composability and seamless txns upfront, it is likely that they will value this benefit of decentralization.

Ben (Espresso): PBS (proposer-builder separation) doesn't work with centralized sequencers, as there is no incentive for a centralized sequencer to collaborate with a block builder. This results in myopic behavior, where validators prioritize immediate gains over long-term strategies to maximize returns. To put things into perspective, a service like Flashbots guarantees users that their transactions will not fail. A centralized sequencer, on the other hand, may charge users for failed transactions, as that can be more lucrative for it.

Santiago: To be fair, centralized sequencers do compromise the UX over the long term. If users are interacting with Arbitrum and Optimism, and at some point these networks start getting greedy, users will migrate to other L2s. It is a quasi-competitive market that is fragmented at the moment. Hence, when an L2 acquires users it can get away with some slack, until it becomes a critical issue and the bulk of the users migrate to other L2s.

Ben (Espresso): Can competition among blockchains alone achieve decentralization? Perhaps decentralization wouldn't be necessary if all blockchains were perfectly competitive and offered the same functionalities. For instance, if Solana was a perfect substitute for Ethereum, competitive pressure on miners would discourage front-running of users. However, the fundamental challenge lies in the fact that internet services are prone to centralization due to network effects. If we aim for long-term resilience, it becomes crucial to decentralize the operation of internet services.

Santiago: What is the state of your development? Specifically, in terms of the UX changing once the sequencer has been decentralized, where are you in the process of making this happen? How does UX get impacted in the beta phase?

Josh (Astria): Astria's code is public, but there is no public testnet yet. Currently, the focus is on an internal developer network to test how users can submit their transactions within the testnet. The process is structured to be somewhat similar: users submit transactions → connect to an Ethereum node's RPC endpoint (customizations are made to the Ethereum node to ensure compatibility with the shared sequencer) → their transactions are included in a shared sequencer block → the node can then read when a block is included, operating on the block time of the sequencer, which is determined by factors like the consensus algorithm and MEV timing tradeoffs. In terms of UX, it is largely the same as the current experience.
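The flow above can be sketched as follows; the class and method names are hypothetical, meant only to mirror the arrows in Josh's description rather than Astria's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class SequencerBlock:
    """One block from the shared sequencer, holding txns for many rollups."""
    txns: dict[str, list[bytes]] = field(default_factory=dict)

    def txns_for(self, rollup_id: str) -> list[bytes]:
        return self.txns.get(rollup_id, [])

class RollupNode:
    """Sketch of a customized Ethereum-style node wired to a shared sequencer."""

    def __init__(self, sequencer, rollup_id: str = "my-rollup"):
        self.sequencer = sequencer
        self.rollup_id = rollup_id

    def submit_transaction(self, raw_txn: bytes):
        # Users hit the node's usual RPC endpoint; the customization
        # forwards txns to the shared sequencer instead of producing
        # blocks locally.
        self.sequencer.enqueue(rollup_id=self.rollup_id, txn=raw_txn)

    def on_new_sequencer_block(self, block: SequencerBlock):
        # Inclusion is read off the sequencer's blocks; block time is
        # set by its consensus algorithm and MEV timing tradeoffs.
        for txn in block.txns_for(self.rollup_id):
            self.execute(txn)

    def execute(self, txn: bytes):
        ...  # apply against local rollup state
```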

However, the main tradeoff occurs within the rollup itself. Once decentralization is implemented, there is a higher coordination overhead required for ongoing development.

For instance, if Optimism wants to make changes to their system, they can quickly and easily update the single sequencing node that defines the canonical chain, similar to how any centralized service can roll out a hotfix within minutes. In contrast, incidents like the Dragonberry exploit on Cosmos impacted the general Cosmos SDK, requiring a significant coordination effort among all affected chains to update. While the UX can still have a structurally similar flow, it will differ from a first-come, first-served (FCFS) approach. Overall, the UX is expected to be relatively similar to the current process from an end user's perspective.

Santiago: Who are these sequencers? Are they the same folks behind Lido doing validation, or enterprises, or the typical sophisticated solo developer out there? Are they industrialized validators or individuals?

Ben (Espresso): Building a decentralized sequencing layer is comparable to building a decentralized L1. Although the validator set is not yet established and is still being developed, the team at Espresso is excited to collaborate with EigenLayer. It's impressive that Ethereum already has 30,000 validators, although the actual number of distinct validators may be around 10,000 or fewer.

The key point is that Espresso's approach begins with the premise that Ethereum has a decentralized validator set, which is ultimately what Espresso aims to scale. Two different protocols building on Ethereum can share the same physical node set. It makes perfect sense to involve Ethereum validators in operating the protocols that run on L2s. Otherwise, there could be an economic misalignment between the L1 and L2.

EigenLayer and restaking, in general, provide a clever way to subsidize users' entry into the system. If they already have ETH staked, there is no need to invest in new tokens or capital to participate in the new service. It serves as a brilliant bootstrapping mechanism for acquiring a decentralized physical node set to run a new protocol that offers different properties from Ethereum. The novelty lies not in a modification of the physical nodes themselves but in a shift in how the protocol operates and what it optimizes for.

Espresso’s focus is on developing a decentralized sequencing layer that can be utilized by various rollups, whether they are sovereign or existing major L2s. The approach Espresso is taking involves leveraging the Ethereum L1 for ordering and availability. In fact, Ethereum is actively working on improving its functionality as a DA and ordering layer. However, Ethereum is bound by a specific set of principles and tradeoffs in its consensus design. One of these tradeoffs is extreme dynamic availability, meaning the system remains operational even if only 10% of the nodes are online. While this ensures liveness, it results in minutes-long latency.

One reason why rollups have centralized sequencing is that users prefer the experience of a centralized server. This is in contrast to dynamic availability: with a single server, if it goes down the entire system is affected, but it offers low latency and fast confirmations. Therefore, Espresso is striving to develop a decentralized consensus protocol that can still scale to the same physical node set as Ethereum but optimizes for features typically associated with centralized sequencers, such as optimistic responsiveness. The aim is to engage the Ethereum validator set to run this protocol as an alternative. Different rollups may choose between running on the Ethereum base layer for sequencing and availability or utilizing the protocol Espresso is designing, called HotShot, which offers optimistic responsiveness but requires HotShot nodes. The protocol's progress depends on achieving a 75% online participation rate.

Jason: Josh, given that you spent 4 or 5 years at Google, are there any analogies to Web2? It seems that everyone is trying to do everything in-house right now. Do you see a world where there is specialization? It's almost like paying Amazon for AWS: everyone used to run their own data centers, but eventually we will all live in a world where there is AWS and Azure. What analogies can we perhaps look towards?

Josh (Astria): We can think of it as decentralization-as-a-service that can be easily integrated into existing systems, similar to buying something off the shelf. The goal is to avoid the scenario where a rollup starts centralized and only considers decentralization as an afterthought, prioritizing other aspects in a rapidly evolving ecosystem. By offering decentralization as a readily available solution, we hope to foster innovation and solve problems more effectively. This approach is similar to cloud services, which come with tradeoffs (centralization, hefty fees, etc.) but allow companies to focus their efforts on specific areas. It enables smaller players to achieve uptime comparable to large corporations without extensive infrastructure teams.

Ben (Espresso): The analogy of AWS works to some extent when it comes to companies using scalable database systems. They can leverage AWS instead of building their own distributed system. Similarly, rollup companies can utilize Astria or Espresso instead of building their own consensus protocol and decentralizing their sequencers. However, the analogy breaks down when we consider the bigger picture of the blockchain concept.

In the past, people questioned why they should build their own blockchain for their app instead of building on an existing blockchain. This is where the AWS analogy does not apply as people don’t use AWS to communicate with other services that are running on AWS. The success of Ethereum, where applications can plug into a shared VM and interact with other applications, highlights the value of shared infrastructure and state. This shift led to the boom in DeFi and NFTs, where economic value is derived from the shared state. If every rollup operates on its isolated system without interoperability, it resembles the early days of competing L1 blockchains building custom solutions for their specific apps. The potential for economic value lies in sharing infrastructure, state, and the sequencing layer, although this step alone is not sufficient, it significantly facilitates collaboration and value creation.

Jason: What are the economic implications of decentralized sequencers?

Josh (Astria): In a centralized sequencer system like Optimism's or Arbitrum's, users have to trust that they are not being front-run. With a decentralized sequencer set, there is a credibly neutral system where the nodes in the sequencer network compete with each other to some extent, almost like an intra-network competition. Each validator or sequencer aims to be the most profitable and maximize returns for staking or delegation. This competition leads to MEV as a means to capture more rewards. Since there is inherent value in transaction ordering, it is in the sequencers' best economic interest to extract that value. The question then becomes whether they keep the value for themselves or return it to delegators. How this dynamic plays out is a larger question, especially when there are multiple rollups, each claiming to contribute economic value to the chain. There are discussions on whether any MEV extracted should be returned to the rollup or distributed to the users. Determining the allocation of rewards, and establishing clear boundaries between different rollups to determine their share, are open questions that require further investigation and evaluation.

Ben (Espresso): What if every rollup operated on the same sequencing layer without interoperability or cross-rollup transactions? Ben believes that it's not the shared sequencing layer itself that complicates matters, but rather the existence of cross-rollup activity. Assuming each rollup used the sequencing layer for ordering transactions (as they would with the DA layer), there would be infrastructure costs incurred. While the sequencing layer determines the transaction ordering, transactions from one rollup remain isolated from another's. The value extracted through user bids on ordering preferences, known as MEV, represents a form of fee accrual based on combinatorial preferences rather than simple demand for gas. This value can be allocated entirely to the rollup, and it is up to the rollup to decide how to distribute it among its stakeholders (provers, etc.). This forms an economic contract between the sequencing layer and each rollup. Complications arise when rollups leverage the infrastructure that enables cross-domain activity, which is what users want and is inevitable. Shared sequencing makes it easier for this cross-domain activity to occur. Even without shared sequencing but with bridging, challenges would arise when bridges handle transactions that must be executed on both ends. Determining how to divide the fees paid between the two parties becomes trickier in terms of economic allocation; this is a fascinating economic research question.

Jason: Nick White tweeted about the shared sequencer paradox:

  • Option A: Extract MEV but be less attractive for rollups to use

  • Option B: Don’t extract MEV but lose value capture

What do you guys think about it?

Josh (Astria): There’s the question of real-time calculation and how to determine the distribution of MEV across multiple rollups. When only one rollup exists, the MEV is specific to that rollup, and the same applies to another rollup. The messy economic questions arise when considering how much of the extractable MEV is created by the existence of both rollups and how to calculate their respective contributions. Determining the percentage of MEV extracted across a sequencing layer becomes challenging.

The question of whether an auction mechanism for ordering is morally illegitimate comes into play. If you don’t think that it’s morally illegitimate, then you would rapidly accelerate towards all MEV being extracted. The factors to consider are MEV share, order flow auctions, and MEV walls, which empower users to specify their preferences upfront and avoid significant slippage caused by front-runners and sandwich attacks. Negotiation can occur in this market, where users can submit low slippage transactions, knowing that they may take longer to be included. Shared sequencers are expected to extract the majority of profitable MEV for builders and searchers. However, the bigger question is still how the revenue will be shared after being extracted.

Ben (Espresso): Ben doesn't think that there is a paradox. He believes that sequencing layers, similar to other decentralized systems like Ethereum, can profit from users who pay for their ordering preferences, which is essentially what MEV is. The key question is how this MEV is allocated among the rollups that have chosen to utilize these sequencing layers.

Jason: Are rollups with different block times still composable if they have a shared sequencer?

Josh (Astria): This is an important point to note. Astria takes the view that the shared sequencer determines the block time for rollups that utilize it. The shared sequencer operates by creating mega blocks, and when a rollup utilizes it, they extract a subset of that mega block, thereby using the shared sequencer’s block time.
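A toy model of the mega-block design Josh describes, with hypothetical names: the shared sequencer emits one ordered block per slot covering all participating rollups, and each rollup extracts only its own subset, thereby inheriting the sequencer's block time.

```python
from collections import defaultdict

class SharedSequencer:
    """Builds one 'mega block' per slot containing txns for many rollups."""

    def __init__(self):
        self.pending = defaultdict(list)  # rollup_id -> pending txns

    def enqueue(self, rollup_id: str, txn: bytes):
        self.pending[rollup_id].append(txn)

    def build_mega_block(self) -> dict[str, list[bytes]]:
        # One ordered block per sequencer slot; every rollup that uses
        # the sequencer inherits this block time.
        block, self.pending = dict(self.pending), defaultdict(list)
        return block

def extract_rollup_block(mega_block: dict[str, list[bytes]],
                         rollup_id: str) -> list[bytes]:
    # Each rollup reads out only its own namespace from the mega block.
    return mega_block.get(rollup_id, [])
```

Because txns destined for many rollups land in the same ordered mega block, coordinating cross-rollup inclusion becomes easier, which is part of what makes shared sequencing attractive for the cross-domain activity Ben discussed earlier.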

Ben (Espresso): Ben shares the same perspective. An application can choose to build on top of the shared sequencer and introduce a different level of latency. For example, a builder for a specific rollup could choose to take longer to build a block by buying up the slots to process for a designated period of time. This would result in a longer block time for the rollup. These kinds of guarantees and restrictions can be implemented at the application level as well. For instance, an application on Ethereum could have a longer block time to accommodate certain actions within the app. Ben gives an example of a voting protocol where votes are submitted for a day, causing the app running the selection to have a latency of a day. However, this app would still operate on a data availability and ordering layer that might even have instant finality.

Jason: It's a small space and it's so early that you guys (Ben and Josh) are doing podcasts together, trying to push the narrative of shared sequencers together. Eventually the pie will grow larger and Astria and Espresso will end up competing with live products in the market. What is the working model of the timeline?

Ben (Espresso): Having been in the industry for a considerable time, the good thing is that many builders in the space have a positive sum attitude. These are the early stages of technology development, in particular for the scalability of L2s. Even if shared sequencers become competitors in the future, they continue to collaborate on a shared mission of promoting the idea and convincing others of its benefits. This cooperative spirit extends to research questions, fostering a sense of friendly competition among them.

Josh (Astria): There is the important question of 'Is there a technical blocker to existing centralized sequencers adopting decentralized sequencing?' Josh's answer to that is 'not really'. Building on what Ben said, setting up a decentralized sequencer network resembles the process of setting up a new L1: acquire validators (in this case by leveraging EigenLayer) → establish a new network → coordinate network operations → run the network and produce blocks. This is not a novel problem, as evidenced by the numerous Tendermint chains launched each year.

Astria is looking to launch within a year.

Drawing from his experience in big tech and open-source domains, Josh highlights the benefits of a competitive environment. With industry giants like IBM, Google, AWS, and Azure building and constantly presenting similar solutions, it can be observed that competition is beneficial for the industry as a whole. This healthy competition acts as a catalyst, driving innovation, pushing the boundaries of functionality in the crypto and decentralized network space, and urging participants to improve their offerings and drive progress throughout the industry.

Jason: Josh mentioned that restaking is exciting but something will get blown up because of restaking in the next cycle. Could you expand on this?

Josh (Astria): Restaking is essentially rehypothecation of risk. The concept is that a given amount of economic stake is used to secure multiple things, all of which have the potential to enter failure mode. When this happens, there is the ideological question of whether they messed up (a fault) or were actively malicious (fraud). The system will be tested, as we do not know how many layers of applications will be restaked with the same amount of economic weight.

Josh has the cynical take that over a longer timeframe, the system will be stretched to its limit, experience failure mode, and thereafter enter a pendulum (back-and-forth) phase of essentially constant testing.

While restaking offers the potential for new development opportunities with strong economic guarantees, there is a possibility of certain applications pushing past the boundaries and messing up, which will necessitate parameter adjustments. This is similar to adjustments made in other economic markets, where a change in interest rates leads to a certain level of economic engagement. However, this might eventually lead to a collapse, which then requires a new adjustment of the deposit ratio and other metrics.

However, he also acknowledges that decentralized networks are inevitable, and no one can prevent their implementation. If the system is desirable, it will be embraced by the ecosystem, which will navigate the challenges and dynamics that arise.

Ben (Espresso): Ben disagrees with the point of view brought up by Josh. He believes that restaking is fundamentally different from being overleveraged in other economic scenarios. On the contrary, Ben thinks that restaking subsidizes participants who have already locked up capital in one protocol, giving them the right to participate in another.

There are multiple factors to consider when assessing the risks involved. It's important to first distinguish whether it is a:

  • Personal risk taken by individual validators who are now participating in multiple protocols or

  • Systemic risk with a potential contagion that could lead to multiple validators experiencing errors and being slashed when they should not be.

Slashing is not intended for unintentional errors. We don't slash for liveness, as that is fundamentally very risky. Slashing is meant to enhance safety by penalizing nodes that intentionally deviate from the protocol. An example is double-signing messages, which could occur if the node is hacked or compromised. This is something we do want to slash for, as it is a form of corruption and is not accidental.

This increases the incentive for nodes to ensure such incidents do not happen. Ben argues that these faults are not independent, uncorrelated events across the different systems a validator runs: the measures taken to prevent accidental errors are correlated and interconnected. Restaking, therefore, poses a local risk rather than a systemic one. If a party is slashed due to an accidental incident and its economic stake is redistributed, that does not fundamentally change the security properties of the system.
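As a concrete illustration of the safety fault Ben says slashing is meant for, here is a minimal sketch of double-sign detection (simplified, hypothetical types): a validator that signs two conflicting blocks at the same height is flagged, while one that merely goes offline is not.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedVote:
    validator: str
    height: int
    block_hash: str
    signature: bytes  # assumed already verified against the validator's key

def find_double_signs(votes: list[SignedVote]) -> set[str]:
    """Return validators who signed two different blocks at the same
    height: a safety fault that protocols slash for, unlike mere
    downtime (a liveness fault, which Ben says should not be slashed)."""
    seen: dict[tuple[str, int], str] = {}
    offenders: set[str] = set()
    for v in votes:
        key = (v.validator, v.height)
        if key in seen and seen[key] != v.block_hash:
            offenders.add(v.validator)
        seen.setdefault(key, v.block_hash)
    return offenders
```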

Conclusion

Santiago: When will we see decentralized sequencers hit the space?

Josh (Astria): We will have significant progress by EthCC, and decentralized sequencers within a year. It's not that challenging from a technical standpoint; it is not a wildly unsolved problem. It is more a matter of prioritization, and the burn-down of other priorities is the catalyst.

Ben (Espresso): Building any of these systems is hard. Decentralized systems are hard to begin with.

We will have decentralized sequencers within a year, but it may take time for the system to fully mature in terms of the after-effects:

  • Are we going to solve mechanism design for this space?

  • Are we going to have fully operational bridges across all rollups running on shared decentralized sequencers?

  • Are we going to have PBS perfectly solved?