Dynamic Availability: Protocol-Based Assurances

IOTA 2.0 Introduction Part 12

TL;DR:
Most distributed ledgers can be either safe but slow or fast but vulnerable. IOTA 2.0 is a dynamically available protocol, which means the network won’t halt even if a majority of nodes disconnect; instead, it keeps including transactions, confirming them once validators are back online. This allows users to find their own balance between caution and optimism according to their expectations of the ledger.

When you interact with a distributed ledger, you're likely to have two fundamental expectations. You want assurance that your submitted data will eventually find a place on the ledger, and that, once included, the fate of your data will be definitively determined. While this two-fold expectation might seem obvious in the context of distributed ledgers, the reality is that many blockchains are only capable of fulfilling one aspect of this promise (as discussed in this paper from the 2021 IEEE Symposium on Security and Privacy).

One of the standout features of IOTA 2.0 is its ability to ensure dynamic availability. This means your transactions are processed seamlessly despite any transient faults that may arise in the consensus protocol, while still being guaranteed to be definitively finalized by it.

In this blog post, we’ll explore dynamic availability and uncover the reasons why achieving both dynamic availability and definite finality simultaneously can be challenging – but it’s a challenge that IOTA 2.0 addresses effectively.

Unreliability of Networks

To understand why achieving dynamic availability and definite finality simultaneously is difficult, it’s important to recall an often-overlooked reality of today's computer networks: their inherent unreliability. Computer networks comprise layered structures where each layer is susceptible to numerous kinds of failures, ranging from hardware faults in network devices to anomalies in the physical medium or unforeseen misbehavior in communication algorithms.

The implications of network unreliability, combined with the likelihood of intentional adversarial behavior by peers, give rise to some important communication challenges. For example, consider a scenario where you establish a direct wired connection with someone in a peer-to-peer communication setup. After establishing the connection, you send a message to your peer, asking "Are you there?", and receive no response. Given the unreliable nature of the communication medium, it is impossible to distinguish between two potential situations:

  1. Your peer is an honest participant and sent a YES answer, but you didn’t receive the message due to network faults or insufficient waiting time. (Note: In theory, waiting time is never truly long enough to guarantee receipt.)
  2. Your peer is an adversarial participant and chose not to send you a response even though the connection functions flawlessly.

This example highlights the core challenge posed by network unreliability: it becomes impossible to ascertain the honesty of participants within the network. This means that in any peer-to-peer network, it isn’t possible to determine either the existence or the number of adversarial nodes. Moreover, any adversarial node can provide different peers with conflicting answers, hindering collaboration between nodes. This is the reality of how the Internet operates, and distributed ledgers built on the Internet are no exception to these challenges.
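A few lines of networking code illustrate how the two cases collapse into a single observation. This is a minimal sketch assuming a plain TCP socket; the point is only that a timeout carries no information about the peer's honesty.

```python
import socket

def ask_are_you_there(sock: socket.socket, timeout: float = 5.0):
    """Send a query and wait for a reply; a timeout is fundamentally ambiguous."""
    sock.settimeout(timeout)
    sock.sendall(b"Are you there?")
    try:
        return sock.recv(1024)  # a reply arrived in time
    except socket.timeout:
        # Indistinguishable cases: the reply was lost, the network is slow,
        # or the peer is adversarial and deliberately stayed silent.
        return None
```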

Collaboration between distributed nodes is a challenging task, and approaches to it have been studied for over 50 years. Let’s delve into these approaches in two categories: ensuring finality and ensuring availability.

Ensuring Finality: Voting-Based Consensus Protocols

The first category of solutions relies on the supermajority of a predetermined set of nodes in the network, known as voting-based consensus protocols. These protocols rely on two key assumptions:

  • Each node participating in the consensus protocol has knowledge of the total number of nodes in the protocol.
  • There is an upper bound on the number of adversarial nodes, which is known to all participants. (Note: It has been theoretically proven that consensus is not achievable when more than one-third of the nodes are adversarial.)

In distributed ledgers that rely on the assumptions above, consensus between nodes on which data should be added to the ledger is achieved by seeking approval from a supermajority (in other words, more than two-thirds) of the peers. This requirement ensures that honest nodes will never have conflicting views of the ledger. Consequently, the approved data can be written to the ledger irreversibly, meaning that the transaction (or any other considered data) is finalized.
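To make the supermajority rule concrete, here is a minimal sketch in Python (the function names are ours, not taken from any particular protocol) of the finalization check: data is final once strictly more than two-thirds of a known validator set has approved it.

```python
import math

def supermajority_threshold(total_nodes: int) -> int:
    """Smallest vote count that is strictly more than two-thirds of all nodes."""
    return math.floor(2 * total_nodes / 3) + 1

def is_finalized(approvals: int, total_nodes: int) -> bool:
    """A set of transactions is irreversibly written once a supermajority approves it."""
    return approvals >= supermajority_threshold(total_nodes)
```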

To grasp the behavior of voting-based consensus protocols under dynamic participation, let’s consider the following scenario. Imagine a distributed ledger maintained by a network of 10 peers. These peers decide on the inclusion of transactions in the ledger using the aforementioned procedure: a set of transactions is written to the distributed ledger if it receives confirmation from at least seven peers (the smallest supermajority of ten).

Now, envision a situation where four of the peers lose connectivity with the remaining six due to a network issue. Fortunately, all six remaining peers continue to benevolently run the consensus protocol. However, the remaining peers cannot distinguish whether the missing confirmations stem from the connectivity problem or from the candidate transaction set genuinely lacking support, so no transaction set can be written to the ledger until the connection problem is resolved. In other words, the ledger suffers a loss of availability.
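Plugging the scenario above into the same arithmetic makes the loss of availability visible: with only six of the ten peers reachable, the seven-vote threshold can never be met, so nothing finalizes until the partition heals.

```python
import math

total_nodes = 10
online = 6  # four peers are cut off by the network issue

threshold = math.floor(2 * total_nodes / 3) + 1  # 7 votes are needed
print(online >= threshold)  # False: no transaction set can be finalized
```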

To summarize, voting-based consensus protocols used in blockchain solutions such as Algorand, Tendermint, Avalanche, and SUI ensure that whatever is written on the ledger is definitively finalized, but the protocol may halt the processing of transactions when too few of the predetermined participants are present.

Ensuring Availability: Proof-Based Consensus Protocols

Before introducing the second set of consensus solutions, it’s worth mentioning that consensus among untrusted peers with dynamic participation was once widely believed to be impossible, as this 2010 paper on multiparty computation makes clear. This notion held fast until a groundbreaking idea emerged, bringing together a combination of established techniques such as proof of work, hash chains, and Merkle Trees. This solution, known as the Nakamoto Consensus, marked the inception of the blockchain revolution with the introduction of the Bitcoin blockchain.

The Nakamoto Consensus algorithm relies on a simple principle. Participants of the consensus, called miners, compete with each other to solve a cryptographic puzzle (i.e., proving their computational resources), without needing to know how many miners exist in the network. The puzzle-solving procedure is similar to a lottery in which the probability of winning for a node is proportional to the computing power of the node. The solver of the puzzle is given the right to determine the set of transactions to be written to the next block of the ledger.
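As a rough illustration of the puzzle (a deliberately simplified stand-in for Bitcoin's actual header format and difficulty encoding), a miner hashes a candidate header with an incrementing nonce until the digest falls below a target; halving the target doubles the expected work, which is why a node's win probability tracks its hash rate.

```python
import hashlib

def mine(header: bytes, target: int) -> int:
    """Search for a nonce whose SHA-256 digest, read as an integer, is below the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # winning ticket: the right to propose the next block
        nonce += 1

# A deliberately easy target (about 1 in 2**11 hashes) so the example terminates quickly.
print(mine(b"candidate-block-header", target=2**245))
```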

The distinctive feature of proof-based consensus protocols is that the protocol continues to function even when only a single miner remains. Miner nodes are therefore free to leave and re-enter the competition at any time, and the protocol maintains availability even under adverse network conditions.

To deal with cases where there are multiple leaders (concurrent solvers of the puzzle), honest nodes follow a simple rule: select the ledger with the highest number of blocks (i.e., the longest chain). In cases where chains have equal lengths, pick the one that you witnessed the earliest.
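The fork-choice rule itself fits in a few lines. In this sketch (the data layout is ours), each candidate chain records when the node first witnessed it; the node prefers greater length and breaks ties by earliest arrival.

```python
from dataclasses import dataclass

@dataclass
class Chain:
    blocks: list        # the chain's blocks, genesis first
    first_seen: float   # local timestamp when this chain was first witnessed

def fork_choice(candidates: list[Chain]) -> Chain:
    """Pick the longest chain; on a tie, the one witnessed earliest."""
    return max(candidates, key=lambda c: (len(c.blocks), -c.first_seen))
```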

Note that, in this scheme, there is no way to detect a set of adversaries privately building a parallel ledger without informing the rest of the network. Once their chain becomes longer than the honest one, they can reveal it and wait for the rest of the network to adopt it, effectively discarding all transactions in the abandoned blocks. Because of this, one can never be certain that a transaction is irreversible.

Bitcoin and proof-based consensus protocols rely on probabilistic finality: In the absence of malicious nodes, the probability of a transaction in a block being revoked reduces exponentially with the number of succeeding blocks. A recent analysis shows that a transaction contained in a Bitcoin block that is six rounds older than the newest block may be successfully reverted with a probability between 0.11% and 0.16% by an adversary controlling 10% of the mining power. Cardano and Proof-of-Work-based Ethereum are other examples of blockchains with probabilistic guarantees of finality.
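The cited figures come from a detailed recent model; for comparison, the classic back-of-the-envelope estimate from the Bitcoin whitepaper can be computed directly. The sketch below implements Nakamoto's catch-up probability for an attacker holding a fraction q of the hash power who is z blocks behind; for q = 0.1 and z = 6 it yields roughly 0.02%, lower than the refined analysis, which models additional real-world factors.

```python
import math

def catch_up_probability(q: float, z: int) -> float:
    """Nakamoto's estimate that an attacker with hash-power share q
    ever overtakes the honest chain from z confirmations behind."""
    p = 1.0 - q
    lam = z * q / p  # expected attacker progress while honest miners mine z blocks
    total = 0.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        total += poisson * (1.0 - (q / p) ** (z - k))
    return 1.0 - total

print(catch_up_probability(q=0.10, z=6))  # about 0.00024, i.e. roughly 0.02%
```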

Picking the Best of Both Worlds: Flexible Consensus

As we’ve seen, the two categories of consensus have their strengths and weaknesses. While voting-based consensus protocols prioritize consistency over availability, Nakamoto-like proof-based consensus models ensure dynamic participation at the cost of definite finality.

In distributed systems, it is well known that simultaneously supporting dynamic participation and guaranteeing safety under network partitions is impossible, as this paper from 2002 argues.

However, it is possible to design a consensus protocol in which the two types of consensus models coexist, letting you, the user, determine the level of safety on the spectrum between definite finality and dynamic availability.

This is the approach taken by IOTA 2.0.

IOTA 2.0: Definite Finality and Dynamic Availability

As we explored in Part 6 of this series of blog posts, IOTA 2.0 takes a layered approach to consensus. Blocks that are issued by users and submitted to the network are guaranteed to be included in the Tangle and receive a quick acceptance flag, while an ongoing consensus mechanism eventually decides on their irreversibility. This approach is akin to having a finality gadget running alongside a dynamically available block creation procedure, as implemented in the Ethereum (Proof of Stake) and Polkadot blockchains.

Thus, in IOTA 2.0, it is up to you, the user, to determine the level of safety. Upon revealing your transactions to the network and seeing them quickly accepted, you can optimistically build on top of those accepted transactions before the consensus protocol makes its final decision. Conversely, a more conservative user can opt to wait for their transactions to receive the finalized flag before taking any dependent action. Needless to say, you can dynamically adjust your level of caution when using IOTA 2.0: continue building on top of your low-risk transactions as soon as they are accepted, and wait for your high-risk transactions to be finalized before proceeding.
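In client code, this choice reduces to deciding which confirmation flag to wait for. The sketch below is purely illustrative: the status values and the get_status helper are hypothetical stand-ins, not IOTA 2.0's actual node API.

```python
import time

# Hypothetical client-side policy: act on "accepted" for low-risk transactions,
# wait for "finalized" for high-risk ones. `get_status` stands in for a real node API.
def wait_for(tx_id: str, level: str, get_status, poll_seconds: float = 1.0) -> None:
    """Block until the transaction reaches the requested confirmation level."""
    order = ["pending", "accepted", "finalized"]
    while order.index(get_status(tx_id)) < order.index(level):
        time.sleep(poll_seconds)

# Optimistic user:    wait_for(tx_id, "accepted", get_status)
# Conservative user:  wait_for(tx_id, "finalized", get_status)
```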

The next blog post in this series presents the sustainable tokenomics of IOTA 2.0.




IOTA 2.0 Introduction

Part 1: Digital Autonomy for Everyone: The Future of IOTA

Part 2: Five Principles: The Fundamentals That Every DLT Needs

Part 3: Data Flow Explained: How Nodes Process Blocks

Part 4: Data Structures Explained: The Building Blocks that Make the Tangle

Part 5: Accounts, Tokens, Mana and Staking

Part 6: A New Consensus Model: Nakamoto Consensus on a DAG

Part 7: Confirming Blocks: How Validators Operate

Part 8: Congestion Control: Regulating Access in a Permissionless System

Part 9: Finality Explained: How Nodes Sync the Ledger

Part 10: An Obvious Choice: Why DAGs Over Blockchains?

Part 11: What Makes IOTA 2.0 Secure?

Part 12: Dynamic Availability: Protocol-Based Assurances

Part 13: Fair Tokenomics for all Token Holders

Part 14: UTXO vs Accounts: Merging the Best of Both Worlds

Part 15: No Mempool, No MEV: Protecting Users Against Value Extraction

Part 16: Accessible Writing: Lowering the Barriers to Entry