This month has been quite busy for us as we enter the home stretch toward delivering Nectar, the first feature-complete implementation of our fully decentralized, Coordinator-free network, due in a few weeks! Our groups have all made excellent progress in turning our research results into usable components, a goal we are proud to play our part in achieving.
A few important achievements this month have furthered progress toward releasing the Nectar stage of our Coordinator-free network: implementing mana in the Pollen testnet, which allows us to study attacks under realistic network conditions; nearing completion of our formal specifications, which helps us communicate the protocol to partners and other interested parties and lays the groundwork for standardisation of the IOTA protocol; and completing our autopeering paper, which validates one of our network components. Below are updates from our research groups detailing these and other advances from the past month:
Pollen Testnet Implementation. The team rolled out several iterations of the Pollen testnet, starting with v0.5.0, which introduced mana (the reputation system) as well as congestion control (which regulates access to the Tangle), letting us test their first iterations and study mana's distribution in such an environment. With that release we added a series of new APIs, along with new mana sections on the local dashboard, the Pollen Analyzer dashboard and the Grafana dashboard. Both the GUI and CLI wallets have been updated to let the user designate the node's identity as the receiver of a transaction's access and consensus mana pledges.
We have also refactored the Consensus Manager component to be agnostic with respect to the actual consensus mechanism implemented. In this way, GoShimmer can be seen not only as the IOTA 2.0 prototype but also as a flexible framework for any DAG-based DLT. In fact, any consensus mechanism, such as On Tangle Voting, BFT, leader-based and more, can be plugged in by changing just a few lines in the Tangle initialization. With subsequent releases, we started integrating the first components with mana, namely the Fast Probabilistic Consensus (FPC) protocol and autopeering. So far our consensus module has treated the opinion of every node equally. However, this offers no protection against Sybil attacks, where adversaries run hundreds or even thousands of nodes to disrupt consensus. With the integration of mana, nodes now form their quorum by selecting nodes to query with probability proportional to their mana.
To prevent or limit Sybil-based attacks, autopeering uses a screening process called mana rank, which selects a subset of all known nodes based on mana similarity. This means that nodes with similar mana are more likely to become neighbors. We can already see its impact in the Pollen network topology:
Nodes on the left of the network graph have more consensus mana, whereas on the right we see nodes with little or no mana. With this experiment we aim to find parameters that strike a good balance between Sybil protection and network connectivity. Currently, we have a network with more than 200 nodes and a network diameter of 5.
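The mana rank screening can be sketched as follows: rank all known peers by how close their mana is to our own and keep only the closest ones as peering candidates. This is a hypothetical simplification; the function and parameter names are ours, not GoShimmer's.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// Peer pairs a node ID with its consensus mana (illustrative struct).
type Peer struct {
	ID   string
	Mana float64
}

// manaRankCandidates returns the r known peers whose mana is most
// similar to ownMana. Restricting candidates this way makes it harder
// for an attacker to surround a high-mana node with cheap, low-mana
// Sybil identities.
func manaRankCandidates(known []Peer, ownMana float64, r int) []Peer {
	sorted := append([]Peer(nil), known...)
	sort.Slice(sorted, func(i, j int) bool {
		return math.Abs(sorted[i].Mana-ownMana) < math.Abs(sorted[j].Mana-ownMana)
	})
	if r > len(sorted) {
		r = len(sorted)
	}
	return sorted[:r]
}

func main() {
	known := []Peer{{"A", 100}, {"B", 90}, {"C", 10}, {"D", 55}}
	// A node with mana 60 keeps the two most similar peers: D, then B.
	for _, p := range manaRankCandidates(known, 60, 2) {
		fmt.Println(p.ID, p.Mana)
	}
}
```

The cutoff r trades Sybil protection against connectivity: a smaller candidate set filters attackers more aggressively but also narrows the topology, which is exactly the balance the experiment above is probing.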
In the coming weeks, the team will focus on integrating mana with the rest of the modules: Congestion control, Finality (approval weight), Reorg and dRNG. As the last milestones before Nectar, we will introduce Snapshots and Timestamp voting. Once these are finished, Nectar will be ready to launch to the public.
Specifications. Progress on the specifications has been steady, and we are now working toward having the main components ready around the time we upgrade to Nectar. The whole process is being thoroughly documented and standardized, so we can better track progress and identify which modules need further support.
Let us talk a bit about the structure of the project. Right now the files are divided into seven sections according to their role in the project or in the network: Control Files, Structure Files, Network Layer, Communication Layer, Value Transfer Application, Consensus Application and History Management.
Control Files deal with information that applies across the files, defining standardization guidelines and organization. Worth mentioning here are the index of parameters and the terminology files, which gather this information from across the project into a single reference. Structure Files include essential information about the data elements that make up the Tangle. Here one will find the brand new Message Layout, the main Payload Layouts, and the data flow a message passes through in order to be processed by a node.
Network Layer covers communication between nodes, including the bootstrapping methodology and all the Autopeering modules. Communication Layer, likely our most extensive section, specifies the protocol by which a node maintains its Tangle. This includes most of the modules about the Tangle itself, such as timestamps, the tip selection algorithm, solidification, rate and congestion control, and markers. Value Transfer Application defines the directives for anything related to the Ledger, such as the UTXO model, ledger state calculations and all the directives about mana.
Consensus Application, as the name suggests, specifies everything related to consensus. Here one will find all the information about the Fast Probabilistic Consensus (FPC), the decentralized random number generator (dRNG), node perception reorganization and finality. Finally, History Management will describe the snapshotting process in detail.
This gives a sense of the project's extent. Of our initial batch of 20 files, seven have already passed review and many others are near completion. We expect significant progress during April and hope to share it with the community soon.
Pollen Testnet Study Group. During the last month we finished writing and submitted our first research paper on our salt-based autopeering. This paper discusses several of our design decisions and presents a first quantitative treatment of some requirements on network topology. With mana now integrated into the autopeering in Pollen, we continue our research on this Coordicide module.
We also analyzed the distribution and dynamics of access mana and consensus mana in the current testnet. One particular outcome of this first analysis is an adjustment of the “smoothing parameter” of access mana: a change in access mana now takes effect in a much shorter time than in our first proposal.
Networking. This month we worked on a strategy to synchronize nodes that were temporarily offline, or newcomers joining the network for the first time. Such nodes have very likely missed many messages that should already have been processed and appended to their local version of the ledger. At the fixed scheduling rate, they may experience long delays before getting in sync. Our proposal enables a higher scheduling rate whenever a node is labelled as “out-of-sync”: in the current version of GoShimmer, beacon messages are used to detect out-of-sync nodes, but we plan to build a mechanism similar to a leaky bucket (increasing the rate opportunistically) that automatically recognizes network disruptions or packet losses.
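The leaky-bucket idea could look roughly like this: missing-message events fill a bucket that drains at a constant rate, and crossing a threshold marks the node out-of-sync so it can schedule faster. All names and parameter values here are hypothetical, a sketch of the concept rather than the planned GoShimmer mechanism.

```go
package main

import "fmt"

// syncDetector is an illustrative leaky bucket: each tick it drains a
// fixed amount and adds the missing-message events observed in that
// interval. A sustained burst of misses pushes the level over the
// threshold; an isolated hiccup drains away without triggering.
type syncDetector struct {
	level     float64
	drainRate float64 // units drained per tick (hypothetical value)
	threshold float64 // out-of-sync boundary (hypothetical value)
}

// tick reports whether the node should treat itself as out-of-sync
// and, e.g., switch to a boosted scheduling rate.
func (d *syncDetector) tick(missing float64) bool {
	d.level += missing - d.drainRate
	if d.level < 0 {
		d.level = 0
	}
	return d.level > d.threshold
}

func main() {
	d := &syncDetector{drainRate: 1, threshold: 5}
	// A burst of missed messages pushes the node out of sync...
	for i := 0; i < 3; i++ {
		fmt.Println(d.tick(4)) // level climbs: 3, 6, 9
	}
	// ...and a quiet period lets the bucket drain back below threshold.
	for i := 0; i < 5; i++ {
		fmt.Println(d.tick(0)) // level falls: 8, 7, 6, 5, 4
	}
}
```

Compared with relying on beacon messages alone, a detector like this reacts to any sustained gap in the message stream, so packet loss and short disruptions are recognized without a dedicated signal.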
The second main topic of the month was fine-tuning the congestion control algorithm. In particular, we performed extensive attack analysis and noticed that an attacker's neighboring nodes may end up with a distorted perception of the current traffic congestion. Underestimating the congestion leads to an increase in the throughput allowed by the rate setter. We are currently working on countermeasures for this corner case. We would like to highlight that this scenario is only relevant when the network throughput is fully utilized, so our protocol will not be affected in the immediate future.
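To see why an underestimated congestion signal matters, consider a generic AIMD-style rate setter (a common pattern in congestion control; the constants and names below are hypothetical and not the actual GoShimmer algorithm): the rate grows additively while no congestion is perceived and backs off multiplicatively when congestion is detected. If an attacker suppresses the congestion signal at a neighbor, the back-off branch never fires and the allowed throughput keeps climbing.

```go
package main

import "fmt"

const (
	alpha = 0.5 // additive increase step (hypothetical)
	beta  = 0.7 // multiplicative decrease factor (hypothetical)
)

// nextRate updates the issuance rate AIMD-style: grow slowly while the
// node perceives no congestion, cut back sharply when it does.
func nextRate(rate float64, congested bool) float64 {
	if congested {
		return rate * beta
	}
	return rate + alpha
}

func main() {
	rate := 10.0
	// If the congestion signal is (maliciously) kept at false,
	// the rate only ever increases.
	for i := 0; i < 5; i++ {
		rate = nextRate(rate, false)
	}
	fmt.Printf("rate after 5 unthrottled steps: %.1f\n", rate) // 12.5
	fmt.Printf("rate after one back-off: %.2f\n", nextRate(rate, true))
}
```

This is why the corner case only bites at full network utilization: below saturation, extra allowed throughput is harmless, but at the ceiling an inflated rate lets the affected node crowd out honest traffic.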
Sharding. We are continuing to make progress on our data sharding white paper. In the previous month, we brainstormed additional attack vectors. Although we gained a better understanding of the previous attack vectors, we did not find any new fundamental vulnerabilities. Thus, our data sharding proposal appears to be robust against a variety of malicious behaviours.
We also discussed further ways to model the growth of data tangles, in order to try to better estimate the potential throughput of the system. We started to work on creating simulators to model the creation of child tangles, and also to model interactions between parent and child tangles. These simulations will be used to verify our understanding and our assumptions about how the system would behave.
With many of the initial theoretical questions resolved, we now only need to finish the white paper. The largest impediment right now is finding the hours to devote to the writing, since all of our researchers are also involved in finishing Nectar.
Thanks for reading! If you have any questions or would just like to say hi, you can find our Research team members in the #tanglemath channel on our Discord. You are also welcome to follow and participate in our technical discussions on our public forum: IOTA.cafe.