Two Fast To Block - Decreasing block times on parachains #6495
Comments
Will be solved by: #6137. This leaves an optimization around the proofs open when putting multiple blocks into one PoV.
The first part, collecting the proofs of all imported blocks, is done by the following PR: #6481
Thinking about when "the time has come". So basically the time to submit the PoV has come when:
Just to mention it: this new import proof gathering and PoV "merging" also means that, for the first time, collators will submit blocks that have been authored by other collators as part of the PoV package.
Ideally we'd abstract the proof aka PoV generation, which parachain/service teams could piggyback upon: a block itself creates "debts" which must be checked, maybe an ordered list of hashes, with domain separation used in computing the hashes of course. Block building outputs these "debts" as well as "notes" on how to satisfy them. The "notes" could be lookup positions in the state, or the full signatures for half-aggregation of Schnorr signatures, or other things for batching other cryptography. These "notes" might already be somehow compressed, or have a separate compression phase that runs in multiple threads, but there would be an explicitly separate "merge" phase which works on multiple blocks and is in general multi-threaded. We might need to be working with some alternative storages like NOMT, and maybe batch verification, before we can really design this properly, so maybe this is not critical to the current design, but it seems worth mentioning here. p.s. In many cases you could execute the merges partially as blocks come in, but in cases nobody currently uses, like KZG, you'd have some big FFT to do at the end; maybe those never become popular, so they could be excluded initially.
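To make the idea above a bit more concrete, here is a minimal, purely hypothetical Rust sketch of the "debts"/"notes"/"merge" split. None of these names (Debts, Notes, BuiltBlock, merge_proofs) exist in Cumulus; they are invented for illustration only and make no claim about the eventual design.

```rust
// Purely hypothetical sketch of the "debts"/"notes"/"merge" idea above.
// None of these types exist in Cumulus; the names are invented here.

/// Obligations a block leaves behind that must be checked when the PoV is
/// validated, e.g. hashes computed with domain separation.
struct Debts(Vec<[u8; 32]>);

/// Hints on how to satisfy the debts, e.g. state lookup positions or
/// signature material for half-aggregation / batch verification.
struct Notes(Vec<Vec<u8>>);

/// Output of building one parachain block.
struct BuiltBlock {
    debts: Debts,
    notes: Notes,
}

/// The separate "merge" phase: fold the notes of several blocks into one
/// proof for the whole PoV. A real version would deduplicate shared state
/// accesses and batch cryptographic work, potentially across threads and
/// partially as blocks come in.
fn merge_proofs(blocks: &[BuiltBlock]) -> Vec<u8> {
    let mut proof = Vec::new();
    for built in blocks {
        // In this toy version the "merge" is just concatenation of notes;
        // the debts stay with the block and are re-checked on validation.
        for note in &built.notes.0 {
            proof.extend_from_slice(note);
        }
    }
    proof
}

fn main() {
    let block = BuiltBlock {
        debts: Debts(vec![[0u8; 32]]),
        notes: Notes(vec![vec![1, 2, 3]]),
    };
    let proof = merge_proofs(&[block]);
    println!("merged proof of {} bytes", proof.len());
}
```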
Block times are historically bound to the relay chain and how fast it can enact candidates. With async backing this time is down to 6 seconds, so parachains have the same block time as the relay chain. However, not all kinds of applications are happy with waiting 6s to get some confirmation of inclusion and access to the post state.
Parachains can actually run faster than 6 seconds. The idea there is to put multiple parachain blocks into one PoV. The relay chain in the end will not see that the parachain is running faster, because it is not able to look into the PoVs. However, this is also not required. One of the major downsides of putting multiple blocks into one PoV is that the parachain is still bound to the resource limits per PoV. This means that with a 500ms block time, the available resources need to be divided by 12 to get the available resources per parachain block. With the ideas around tx streaming etc. it will be possible to use more resources per parachain block. This approach also doesn't require any changes to any Polkadot protocols.

The implementation can be roughly split into the following tasks:

- Putting multiple blocks into one PoV. This will require some internal changes in Cumulus to make the ParachainBlockData generic over the number of blocks.
- Submitting the PoV when "the time" has come.
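As a rough illustration of the first task, the sketch below shows one possible shape of a ParachainBlockData that carries several blocks plus one merged proof, together with the resource arithmetic from the paragraph above. This is only an assumption-laden sketch: the field names, the EncodedBlock/StorageProof placeholders, and the blocks_per_pov helper are invented here and may differ from the actual Cumulus type and the referenced PRs.

```rust
// Illustrative sketch only; the real `ParachainBlockData` in Cumulus and the
// changes from the referenced PRs may look different.

/// Placeholder for an encoded parachain block (header + extrinsics).
type EncodedBlock = Vec<u8>;
/// Placeholder for the merged storage proof covering all bundled blocks.
type StorageProof = Vec<Vec<u8>>;

/// Data shipped to the relay chain inside one PoV: instead of exactly one
/// block, it carries every block built since the last submission plus one
/// proof for all of them.
struct ParachainBlockData {
    blocks: Vec<EncodedBlock>,
    proof: StorageProof,
}

impl ParachainBlockData {
    /// Bundle the blocks produced since the last PoV into a single payload.
    fn new(blocks: Vec<EncodedBlock>, proof: StorageProof) -> Self {
        Self { blocks, proof }
    }

    /// Number of parachain blocks bundled into this PoV.
    fn num_blocks(&self) -> usize {
        self.blocks.len()
    }
}

/// With a 6s relay chain slot and 500ms parachain blocks, 6000 / 500 = 12
/// blocks share the resource limits of a single PoV.
fn blocks_per_pov(relay_slot_ms: u64, para_block_time_ms: u64) -> u64 {
    relay_slot_ms / para_block_time_ms
}

fn main() {
    assert_eq!(blocks_per_pov(6000, 500), 12);
    let data = ParachainBlockData::new(vec![vec![0u8; 1]; 12], Vec::new());
    println!("bundled {} blocks into one PoV", data.num_blocks());
}
```

The sketch is only meant to show the "many blocks, one proof" shape and how the per-PoV resource limits get divided across the bundled blocks; the actual type layout is determined by the PRs referenced in the comments above.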