Efficient and Robust Rollup Batch Posting Strategy

Offchain Labs · Feb 21, 2023

Short overview: a non-technical introduction to the question of when to post layer 2 calldata on the Ethereum base layer, and an efficient, robust posting strategy that minimizes the cost of posting data while keeping delays low.

The Ethereum blockchain satisfies high security and decentralization requirements, but it is not scalable: transaction fees are too high for running computationally heavy smart contracts, and each block can record only a small number of transactions. Several solutions have been proposed to improve scalability. Among the most successful is a type of layer 2 (L2) network called an optimistic rollup protocol. Several designs for rollup protocols built on top of Ethereum have been proposed, one example being Arbitrum. While these protocols differ in how heavy computation is delegated to them, they share one thing in common: from time to time they post (compressed) batches of transactions on the Ethereum blockchain, referred to as the base layer or layer 1 (L1). These transactions recorded on the base layer can later serve as a reference for rollup protocols and smart contracts, governing disputes about the state of transaction execution.

Rollup protocols (layer 2s, for short) create new economic design questions. In this project, we address one of them: when to post the data batches of an L2 rollup chain to the base layer, which accounts for most of the costs a rollup chain incurs. User transactions sent to the rollup are grouped together at a fixed frequency and compressed, creating a batch. We look for an optimal strategy for when to post transaction batches as the L1 price of posting data fluctuates. Such an algorithm adds to the robustness of the system and, overall, to its security. The trade-off is clear: avoid posting batches when the price is high, but at the same time avoid too much delay in posting.

The question is motivated by historical experience: the Ethereum gas base fee fluctuates in a partially predictable way and, on occasion, goes through intervals of very high base fees when a major product is deployed. One memorable instance involved a base fee of more than 100 times the norm for a period of several hours. At least one rollup protocol, Arbitrum, decided to take manual control of its batch posting policy during that instance. Such intervals are neither too frequent nor too long, which gives hope that after some delay, posting costs will drop and posting can resume.

We decompose the batch cost into two parts: a posting cost that is directly observable when batches are posted, and a delay cost that is not directly measurable. Concretely, we interpret the total cost in each round as the sum of the posting and delay costs; a minimal sketch of this cost model follows the list below.

Delay costs have several components:

  • The first is psychological: users do not like it when batches have not been posted for an extended period, as it may suggest that components of the system are down or unavailable.
  • The second is delayed finality: an L2 transaction is not fully finalized until it is posted as part of a batch and that batch has finality on L1. Until then, users must either wait or trust the L2 system’s sequencer to be both honest and non-faulty. This delayed finality imposes costs on some applications.
  • The third stems from how transaction fees are computed on L2 rollups: a transaction’s fee is calculated when the transaction is created, not when it is posted on L1. More delay therefore makes the estimate of the L1 cost to attribute to L2 transactions less precise, increasing the risk of unfair or inefficient pricing of rollup transactions.
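To make the trade-off concrete, here is a minimal sketch of the per-round cost model, assuming a posting cost proportional to the current base fee and a delay cost that is superlinear in the queue length (as described above). The scale and exponent constants are illustrative assumptions, not the paper’s calibrated values.

```python
def round_cost(price, batches_posted, queue_after, delay_scale=0.1, delay_exp=2):
    """Hypothetical per-round cost: the observable posting cost plus a
    superlinear delay cost in the number of batches still queued.
    delay_scale and delay_exp are illustrative, not calibrated values."""
    posting_cost = price * batches_posted            # pay the current L1 price per batch
    delay_cost = delay_scale * queue_after ** delay_exp  # superlinear in queue length
    return posting_cost + delay_cost
```

A superlinear exponent captures the intuition that a queue twice as long is more than twice as painful, which is what pushes the optimal policy to eventually post even at elevated prices.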

We model the problem as a Markov decision process. In each round, we calculate total costs independently of past rounds. Each round is characterized by the current queue size and price, which together constitute a state. The price in the next round is modeled as a random variable that depends on the current price; together with the strategy chosen in the current round, it moves the state to the next state. For a practical implementation, we discretize the price space and use a discrete approximation of the continuous random variable. To solve the optimization problem of finding the optimal strategy in each round, we use tools from dynamic programming such as Q-learning. The structure of the solution allows us to design a practical batch posting algorithm, which turns out to be intuitive and simple and is characterized by only two parameters.

We test the algorithm against a few natural benchmark algorithms on historical Ethereum base fee data covering the full period from the introduction of EIP-1559 in August 2021 until November 2022. This back-testing allows us to find optimal parameter sets for all algorithms and compare their performance.
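The post does not spell out the implementation, so the following is a minimal sketch of solving such an MDP by value iteration (a close relative of the Q-learning approach mentioned above), with a discretized price grid and an assumed mean-reverting price transition kernel. All constants here are illustrative.

```python
import numpy as np

# Illustrative value-iteration sketch for a batch-posting MDP:
# state = (queue length, discretized price), action = batches to post now.
QUEUE_MAX, N_PRICES = 20, 10
prices = np.linspace(10.0, 100.0, N_PRICES)   # assumed price grid (per batch)

# Assumed price transition matrix: next price depends only on the current
# price, here a discretized kernel with mild mean reversion toward low prices.
P = np.zeros((N_PRICES, N_PRICES))
for i in range(N_PRICES):
    for j in range(N_PRICES):
        P[i, j] = np.exp(-0.5 * ((j - 0.9 * i) / 1.5) ** 2)
    P[i] /= P[i].sum()

def cost(price_idx, queue, post):
    """One-round cost: posting cost plus a superlinear delay cost on the
    batches left in the queue after posting (one new batch arrives)."""
    return prices[price_idx] * post + 0.05 * (queue - post + 1) ** 2

GAMMA = 0.95                                   # discount factor (assumed)
V = np.zeros((QUEUE_MAX + 1, N_PRICES))        # value of each (queue, price) state
for _ in range(500):                           # iterate to (near) convergence
    V_new = np.empty_like(V)
    for q in range(QUEUE_MAX + 1):
        for p in range(N_PRICES):
            # Choose how many of the q queued batches to post this round.
            V_new[q, p] = min(
                cost(p, q, a) + GAMMA * P[p] @ V[min(q - a + 1, QUEUE_MAX)]
                for a in range(q + 1)
            )
    V = V_new
# The action achieving the minimum in each state is the (approximately)
# optimal posting rule for this toy model.
```

Back-testing then amounts to replaying historical base fee data through the resulting rule and comparing cumulative costs against benchmark strategies.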

Related work

The optimization problem at hand is very similar to the inventory policy (IP) problem, long studied in the economics and operations research literature [1, 2]. In IP problems, newly produced items must be sold at some price, and the distribution of demand prices is given. While the IP objective is to maximize revenue (achieved by maximizing revenue from trade and minimizing maintenance costs), our objective is to minimize both costs. The only difference between our optimization problem and IP problems is that the delay cost in IP problems is linear in inventory size, since it is interpreted as the maintenance cost of stored items, whereas we model the delay cost as superlinear in the number of delayed batches. The optimality of pure stationary strategies in a wide range of Markov decision processes is shown in [3]. The convergence of the dynamic programming algorithm, which covers our case as well, is discussed in [4].

Results and conclusions

The result of our search is an efficient and robust algorithm. The variant that keeps the average and maximum number of queued batches tolerably small reduces the posting costs of the trivial algorithm, which posts batches immediately when they are created, by 8%. A variant that cares only moderately about the batch queue length reduces the trivial algorithm’s posting costs by 29%. That is, we obtain efficient algorithms with robustness guarantees: in each round, the algorithm does not post too many batches, and the number of batches kept in the queue is bounded by a function of the posting price. Our findings can be used by any project that posts data to the base layer at a regular rate. Future avenues of research include the optimization problem in which current and future prices depend on the number of batches posted in each round; this may be the case if rollup protocols come to dominate base layer usage, so that their posting behavior itself drives the base fee. Finding the optimal constant tip is also left for future research.
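The post does not name the two parameters, so the trigger rule below is a hypothetical illustration of the kind of simple policy such an algorithm could take: post the whole queue whenever the current price falls below a threshold that grows with queue length, controlled by a base threshold and a growth rate.

```python
def should_post(price, queue_len, base_threshold=30.0, growth=1.1):
    """Hypothetical two-parameter trigger rule (not the paper's exact policy):
    the longer the queue, the higher the price we are willing to pay, so the
    queue length stays bounded by a function of the posting price."""
    return queue_len > 0 and price <= base_threshold * growth ** queue_len

# Example: with 12 batches queued, this posts whenever the price is at most
# 30 * 1.1**12 ≈ 94, i.e. the willingness to pay rises as the queue grows.
```

A rule of this shape waits out short price spikes when the queue is small but guarantees that posting resumes once the queue grows, matching the robustness behavior described above.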

For more technical content and the exact details of the modeling and implementation, check out our paper: https://arxiv.org/pdf/2212.10337.pdf

Offchain Labs

We’re building Arbitrum, a scaling solution for Ethereum. Learn more at https://offchainlabs.com/ and http://arbitrum.io/.