Separating Computation from Consensus #1503
Replies: 2 comments
-
This is a very interesting idea and I think it has a lot of other use cases. For instance, a HyperVM-based subnet could have two sets of nodes: validator nodes for the basic consensus mechanism, and computation nodes for handling other interesting tasks such as data transformation, training AI models, separating out the storage layer, and executing smart contract workflows. This is a unique use case and I don't know to what extent these different types of nodes could be utilized by subnets, but it's an interesting idea for sure.
-
If you wanted to break block processing up into different groups in this way, you could use the strategy pattern within the actions and decide which group is responsible for executing each action (sketched below). For now, this would be a bit complicated to make into a first-class citizen within the HyperSDK, so I don't think we'd likely implement something like this until it was solving a clear bottleneck or problem. We explored and implemented DSMR because it solved a very concrete problem (a bottleneck on the outbound bandwidth of the block producer).
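A minimal Go sketch of that strategy-pattern idea, assuming a simplified action model; the `NodeRole`, `Action`, `Transfer`, `TrainModel`, and `processBlock` names are invented for illustration and are not HyperSDK identifiers:

```go
package main

import "fmt"

// NodeRole identifies which group of nodes executes an action.
// These role names are hypothetical, not part of the HyperSDK.
type NodeRole int

const (
	RoleConsensus NodeRole = iota
	RoleCompute
)

// Action is a simplified stand-in for a HyperSDK-style action.
// Role is the strategy hook: each action declares which node
// group is responsible for executing it.
type Action interface {
	Role() NodeRole
	Execute() error
}

// Transfer is a cheap action every validator can run.
type Transfer struct {
	From, To string
	Amount   uint64
}

func (t Transfer) Role() NodeRole { return RoleConsensus }
func (t Transfer) Execute() error {
	fmt.Printf("transfer %d: %s -> %s\n", t.Amount, t.From, t.To)
	return nil
}

// TrainModel is a heavy action routed to dedicated compute nodes.
type TrainModel struct{ Dataset string }

func (m TrainModel) Role() NodeRole { return RoleCompute }
func (m TrainModel) Execute() error {
	fmt.Printf("training on %s\n", m.Dataset)
	return nil
}

// processBlock executes only the actions assigned to this node's
// role; verifying the results of skipped actions is out of scope
// for this sketch.
func processBlock(actions []Action, myRole NodeRole) error {
	for _, a := range actions {
		if a.Role() != myRole {
			continue
		}
		if err := a.Execute(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	block := []Action{
		Transfer{From: "alice", To: "bob", Amount: 10},
		TrainModel{Dataset: "example-dataset"},
	}
	// A consensus node runs only the cheap actions.
	_ = processBlock(block, RoleConsensus)
}
```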
-
The separation of roles for different validators is not a new idea. Some will be familiar with the design paradigm of separating computation from consensus as described in Ekiden [1], Oasis [2], and, as I recently learned, Flow [3].
Consensus nodes determine the order of transactions by specifying all the inputs to the deterministic algorithm that computes it. Computation nodes do the heavy lifting, running the instructions of the smart contracts. Verifier nodes are then responsible for checking the outcomes and applying the state differentials.

Assigning these roles yields significant benefits. The most obvious is the ability to raise the block gas limit, since heavier workloads can now run on computation nodes backed by high-performance hardware. It also opens up participation: consensus and verifier nodes can be more decentralized once they are unburdened by the performance demands of compute operations. Going further, storage nodes can be incorporated to handle the growth of state data, which could address our fears of state growing into the terabytes.
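To make the division of labor concrete, here is a toy sketch assuming deterministic execution and a hash commitment over the state differential; the `Tx`, `StateDiff`, `execute`, and `commit` names are invented, and the sorted-key hash stands in for the Merkle commitments a real system would use:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sort"
)

// Tx is a toy transaction: a key/value write.
type Tx struct{ Key, Value string }

// StateDiff maps keys to their new values after executing a block.
type StateDiff map[string]string

// execute applies the ordered transactions. Determinism is what lets
// any verifier reproduce the same diff from the same ordered inputs.
func execute(ordered []Tx) StateDiff {
	diff := StateDiff{}
	for _, tx := range ordered {
		diff[tx.Key] = tx.Value // last write wins
	}
	return diff
}

// commit hashes the diff over sorted keys so nodes can compare
// results without shipping full state. A real system would use a
// canonical encoding and a Merkle root, not this toy hash.
func commit(diff StateDiff) []byte {
	keys := make([]string, 0, len(diff))
	for k := range diff {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(diff[k]))
		h.Write([]byte{0})
	}
	return h.Sum(nil)
}

func main() {
	// Consensus nodes fix only the order of the inputs.
	ordered := []Tx{{"a", "1"}, {"b", "2"}, {"a", "3"}}

	// A computation node does the heavy execution and claims a result.
	claimed := commit(execute(ordered))

	// A verifier node re-executes the same ordered inputs and checks
	// the claimed commitment matches before applying the diff.
	verified := commit(execute(ordered))
	fmt.Println("diff accepted:", bytes.Equal(claimed, verified))
}
```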
There is no doubt that higher throughput ambitions can be targeted by this sort of architecture. As these things go, there are always tradeoffs: the chain becomes only as strong as the most vulnerable node in this equation. This is the reason compute nodes have often been run in Trusted Execution Environments, with Flow being the exception.
I think the next push after DSMR could evaluate the kind of throughput impact that can be had by separating roles across different node types.
[1] Ekiden: https://arxiv.org/pdf/1804.05141
[2] The Oasis Blockchain Platform: https://assets.website-files.com/5f59478e350b91447863f593/628ba74a9aee37587419cf65_20200623%20The%20Oasis%20Blockchain%20Platform.pdf
[3] Flow: https://arxiv.org/pdf/1909.05821