
Two Fast To Block - Decreasing block times on parachains #6495

Open
bkchr opened this issue Nov 15, 2024 · 4 comments
bkchr commented Nov 15, 2024

Block times are historically bound to the relay chain and how fast it can enact candidates. With async backing, this time is down to 6 seconds, so parachains have the same block time as the relay chain. However, not every application is happy to wait 6s for some confirmation of inclusion and access to the post state.

Parachains can actually run faster than 6 seconds. The idea here is to put multiple parachain blocks into one PoV. In the end, the relay chain will not see that the parachain is running faster, because it is not able to look into the PoVs; however, this is also not required. One of the major downsides of putting multiple blocks into one PoV is that the parachain is still bound to the resource limits per PoV. This means that with a 500ms block time, the available resources need to be divided by 12 to get the resources available per parachain block. With the ideas around tx streaming etc., it will become possible to use more resources per parachain block. This approach also doesn't require any changes to any Polkadot protocols.
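The resource split above can be made concrete with a small sketch. All constants and names here are illustrative assumptions (the actual PoV limits are relay-chain configuration, not these values), but the arithmetic is the one described: a 6s relay slot with 500ms parachain blocks means 12 blocks share one PoV's budget.

```rust
// Illustrative sketch only: names and limits are assumptions, not the
// Cumulus API. It shows how the per-PoV budget is divided across the
// parachain blocks packed into one PoV.

const RELAY_SLOT_MS: u64 = 6_000;
const MAX_POV_SIZE_BYTES: u64 = 5 * 1024 * 1024; // illustrative PoV size limit

/// How many parachain blocks share one PoV at the given block time.
fn blocks_per_pov(para_block_time_ms: u64) -> u64 {
    RELAY_SLOT_MS / para_block_time_ms
}

/// The PoV size budget each individual parachain block gets.
fn per_block_pov_budget(para_block_time_ms: u64) -> u64 {
    MAX_POV_SIZE_BYTES / blocks_per_pov(para_block_time_ms)
}

fn main() {
    // 500ms blocks: 12 blocks share one PoV, so each gets 1/12 of the budget.
    assert_eq!(blocks_per_pov(500), 12);
    assert_eq!(per_block_pov_budget(500), MAX_POV_SIZE_BYTES / 12);
}
```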

The implementation can be roughly split into the following tasks:

  • Support putting multiple blocks into one PoV. This will require some internal changes in Cumulus to make the ParachainBlockData generic over the number of blocks.
  • The slot based collator collation task needs to be rewritten to collect multiple blocks and put them into a PoV when "the time" has come.
  • Runtime upgrades and the first block after a runtime upgrade will require special handling, as they will probably take more resources than what one block has available. Some sort of digest that announces to the nodes that the block is allowed to take more resources will be required.
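To make the first task more tangible, here is a rough sketch of what a `ParachainBlockData` that holds multiple blocks could look like. The real type lives in Cumulus and its exact shape is decided in the linked work; every field and method name below is an assumption, not the actual API.

```rust
// Hypothetical sketch of a multi-block PoV payload: several parachain
// blocks plus one combined storage proof covering all of them. The old
// single-block format becomes the `blocks.len() == 1` special case.

struct ParachainBlockData<Block, Proof> {
    /// Blocks in execution order.
    blocks: Vec<Block>,
    /// One proof for the whole bundle instead of one per block.
    proof: Proof,
}

impl<Block, Proof> ParachainBlockData<Block, Proof> {
    fn new(blocks: Vec<Block>, proof: Proof) -> Self {
        Self { blocks, proof }
    }

    /// Backwards-compatible accessor for the single-block case.
    fn into_single(mut self) -> Option<(Block, Proof)> {
        if self.blocks.len() == 1 {
            Some((self.blocks.remove(0), self.proof))
        } else {
            None
        }
    }
}

fn main() {
    let single = ParachainBlockData::new(vec!["block-1"], "proof");
    assert!(single.into_single().is_some());

    let multi = ParachainBlockData::new(vec!["block-1", "block-2"], "proof");
    assert!(multi.into_single().is_none());
}
```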
@bkchr bkchr added the I6-meta A specific issue for grouping tasks or bugs of a specific category. label Nov 15, 2024
@bkchr bkchr self-assigned this Nov 15, 2024

bkchr commented Nov 15, 2024

  • Support putting multiple blocks into one PoV. This will require some internal changes in Cumulus to make the ParachainBlockData generic over the number of blocks.

Will be solved by: #6137

This leaves an optimization around the proofs open: when putting multiple blocks into one PoV, the blocks coming after the first would not need to include in the proof the state data that was written by the blocks before them. However, this would require a lot of changes in Substrate, especially as storage reclaim would need to be aware of this data while building the block. It is not impossible, but it is currently being ignored.
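In principle, that deduplication looks like the following sketch. `Node` stands in for an encoded trie node; this is not the Substrate storage-proof API, just an illustration of why repeated nodes could be dropped from a bundled proof.

```rust
use std::collections::HashSet;

// Illustrative only: merge per-block storage proofs, dropping any trie
// node that an earlier block in the bundle already carries (for example
// state that an earlier block wrote and a later block reads).
type Node = Vec<u8>;

fn merge_proofs(per_block: Vec<Vec<Node>>) -> Vec<Node> {
    let mut seen: HashSet<Node> = HashSet::new();
    let mut merged = Vec::new();
    for proof in per_block {
        for node in proof {
            // Only the first occurrence of a node goes into the bundle.
            if seen.insert(node.clone()) {
                merged.push(node);
            }
        }
    }
    merged
}

fn main() {
    let block1 = vec![vec![1u8], vec![2]];
    let block2 = vec![vec![2u8], vec![3]]; // node [2] already proven by block 1
    assert_eq!(
        merge_proofs(vec![block1, block2]),
        vec![vec![1u8], vec![2], vec![3]]
    );
}
```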


bkchr commented Nov 15, 2024

  • The slot based collator collation task needs to be rewritten to collect multiple blocks and put them into a PoV when "the time" has come.

The first part of collecting the proofs of all imported blocks is done by the following pr: #6481

@bkchr bkchr changed the title Two Fast Two Block - Decreasing block times on parachains Two Fast To Block - Decreasing block times on parachains Nov 17, 2024

skunert commented Nov 27, 2024

The slot based collator collation task needs to be rewritten to collect multiple blocks and put them into a PoV when "the time" has come.

Thinking about when "the time has come".

So basically the time to submit the PoV has come when:

  1. All the recently gathered blocks together come close to the PoV validation time limit we have on the relay chain
  2. All the recently gathered blocks together reach the PoV size limit
  3. Some time threshold has been reached and we don't want to bother waiting for more blocks
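The three triggers above can be sketched as a single decision function. All limits, names, and the headroom factor are illustrative assumptions, not actual Cumulus configuration.

```rust
// Hypothetical sketch of the PoV submission decision described above.

struct Pending {
    total_execution_ms: u64,  // summed execution time of gathered blocks
    total_size_bytes: u64,    // summed compressed size of gathered blocks
    oldest_block_age_ms: u64, // how long the first block has been waiting
}

struct Limits {
    max_execution_ms: u64, // relay-chain PoV validation time budget
    max_size_bytes: u64,   // relay-chain PoV size limit
    max_wait_ms: u64,      // don't wait longer than this for more blocks
    headroom: f64,         // "close to the limit" factor, e.g. 0.9
}

/// True when the gathered blocks should be packed into a PoV and submitted.
fn should_submit(p: &Pending, l: &Limits) -> bool {
    let close_to_exec_limit =
        p.total_execution_ms as f64 >= l.max_execution_ms as f64 * l.headroom;
    let size_limit_reached = p.total_size_bytes >= l.max_size_bytes;
    let waited_long_enough = p.oldest_block_age_ms >= l.max_wait_ms;
    close_to_exec_limit || size_limit_reached || waited_long_enough
}

fn main() {
    let limits = Limits {
        max_execution_ms: 2_000,
        max_size_bytes: 5 * 1024 * 1024,
        max_wait_ms: 6_000,
        headroom: 0.9,
    };
    // Close to the execution-time budget: submit.
    let almost_full = Pending {
        total_execution_ms: 1_900,
        total_size_bytes: 100_000,
        oldest_block_age_ms: 500,
    };
    assert!(should_submit(&almost_full, &limits));
    // Nothing near any limit yet: keep collecting.
    let fresh = Pending {
        total_execution_ms: 100,
        total_size_bytes: 10_000,
        oldest_block_age_ms: 500,
    };
    assert!(!should_submit(&fresh, &limits));
}
```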

Just to mention it: this new import-proof gathering and PoV "merging" also means that, for the first time, collators will submit blocks as part of the PoV package that were authored by other collators.

burdges commented Nov 27, 2024

Ideally we'd abstract the proof aka PoV generation, which parachain/service teams could piggyback upon:

A block itself creates "debts" which must be checked, maybe some ordered list of hashes, with domain separation used in computing the hashes of course. Block building outputs these "debts" as well as "notes" on how to satisfy them. The "notes" could be lookup positions in the state, or the full signatures for half-aggregation of Schnorr signatures, or other things for batching other cryptography. These "notes" might already be somehow compressed, or have a separate compression phase that runs in multiple threads, but we have an explicitly separated "merge" phase which works on multiple blocks, and in general this is multi-threaded.
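One way to read the debts/notes split is as a pair of traits. Everything below is hypothetical, a toy rendering of the proposed abstraction rather than any existing interface.

```rust
// Very rough sketch of the "debts"/"notes" idea: block execution emits
// debts (obligations to check) plus notes (hints on how to satisfy them);
// a separate merge phase could combine the notes of several blocks, e.g.
// half-aggregating signatures or batching other cryptography.

/// One obligation produced by block execution.
trait Debt {
    type Note;
    /// Check the debt using its note; false means verification failed.
    fn settle(&self, note: &Self::Note) -> bool;
}

/// Combines the notes of many blocks before final settlement; this phase
/// could run in multiple threads.
trait NoteMerger<N> {
    type Merged;
    fn merge(&self, notes: Vec<N>) -> Self::Merged;
}

/// Toy instance: the "debt" is an expected sum, the "note" its addends.
struct SumDebt(u64);

impl Debt for SumDebt {
    type Note = Vec<u64>;
    fn settle(&self, note: &Self::Note) -> bool {
        note.iter().sum::<u64>() == self.0
    }
}

fn main() {
    let debt = SumDebt(6);
    assert!(debt.settle(&vec![1, 2, 3]));
    assert!(!debt.settle(&vec![1, 2]));
}
```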

We might need to be working with some alternative storages like NOMT, and maybe batch verification, before we can really design this properly, so maybe this is not critical to the current design, but it seems worth mentioning here.

p.s. In many cases you could execute the merges partially as blocks come in, but for some schemes nobody currently uses, like KZG, you'd have some big FFT to do at the end. Maybe those never become popular, so they could be excluded initially.
