
Full rework of the BlockFetch logic for bulk sync mode #1179

Open · wants to merge 26 commits into base: main
Conversation

@Niols Niols (Contributor) commented Jul 3, 2024

Integrates a new implementation of the BulkSync mode, in which blocks are downloaded from alternative peers as soon as the node has no more blocks to validate while long-standing requests are still in flight.

This PR depends on the new implementation of the BulkSync mode (IntersectMBO/ouroboros-network#4919). cabal.project is made to point to a back-port of the BulkSync implementation on ouroboros-network-0.16.1.1.

CSJ Changes

CSJ is involved because the new BulkSync mode requires changing the dynamo if it is also serving blocks but is not sending them promptly enough. The dynamo choice influences which blocks BlockFetch chooses to download.

To this end, b93c379 makes it possible to order the ChainSync clients, so that the dynamo role can be rotated among them whenever BlockFetch requests it.

b1c0bf8 provides the implementation of the rotation operation.
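The rotation described above can be illustrated with a minimal Python sketch. The class and method names here are hypothetical; the actual `rotateDynamo` operates on a `ChainSyncClientHandleCollection` in Haskell/STM.

```python
from collections import deque

class DynamoRotation:
    """Toy model of rotating the ChainSync dynamo role among an
    ordered collection of peers. Hypothetical names; the real
    implementation works on a ChainSyncClientHandleCollection."""

    def __init__(self, peers):
        self.peers = deque(peers)  # rotation order of ChainSync clients

    @property
    def dynamo(self):
        # The first peer in the ordered collection acts as dynamo.
        return self.peers[0]

    def rotate(self):
        # Demote the current dynamo to the back of the order and
        # promote the next peer; in the PR this is requested by
        # BlockFetch when the dynamo does not serve blocks promptly.
        self.peers.rotate(-1)
        return self.dynamo

r = DynamoRotation(["peer1", "peer2", "peer3"])
assert r.dynamo == "peer1"
assert r.rotate() == "peer2"
```

The ordering makes rotation deterministic, which is what lets the tests below control which peer starts as dynamo.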

BlockFetch tests

c4bfa37 allows tests to specify the order in which peers are started, which affects which peer is chosen as the initial dynamo.

c594c09 in turn adds a new BlockFetch test to show that syncing isn't slowed down by peers that don't send blocks.

Integration of BlockFetch changes

The collection of ChainSync client handles now needs to be passed between BlockFetch and ChainSync so dynamo rotations can be requested by BlockFetch.

The parameter bfcMaxConcurrencyBulkSync has been removed, since blocks are no longer downloaded from multiple peers concurrently in a coordinated fashion.

These changes are in 6926278.

ChainSel changes

BlockFetch now requires the ability to detect whether ChainSel has run out of blocks to validate. This motivates 73187ba, which implements a mechanism to detect when ChainSel is waiting for more blocks (starving) and to measure for how long.

The above change alone is not sufficient to measure starvation. The queue used to send blocks for validation allowed only one block to sit in it at a time. This interfered with measuring starvation: BlockFetch would block waiting for the queue to become empty, and the queue would become empty again as soon as ChainSel took just one block. Amortizing download delays requires a larger queue capacity, which is why a fix to IntersectMBO/ouroboros-network#2721 was ported in 0d3fc28.
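The interplay between queue capacity and starvation measurement can be modeled with a short Python sketch (hypothetical names; the real mechanism lives in the Haskell ChainDB):

```python
import queue
import time

class StarvationMeter:
    """Toy model of measuring ChainSel starvation: how long the
    validation side waits because the blocks-to-validate queue is
    empty. Hypothetical API, not the PR's Haskell code."""

    def __init__(self, capacity):
        # A capacity well above 1 lets BlockFetch keep the pipeline
        # full, so an empty queue genuinely means ChainSel is starved,
        # rather than merely that the single buffered block was taken.
        self.q = queue.Queue(maxsize=capacity)
        self.total_starved = 0.0  # accumulated starvation time (seconds)

    def put_block(self, blk):
        # Called by the BlockFetch side; blocks only when the queue is full.
        self.q.put(blk)

    def next_block(self):
        # Called by the ChainSel side; time spent blocked here is starvation.
        start = time.monotonic()
        blk = self.q.get()
        self.total_starved += time.monotonic() - start
        return blk

m = StarvationMeter(capacity=10)
m.put_block("block1")
assert m.next_block() == "block1"
```

With capacity 1, the producer and consumer run in lock-step and the meter mostly measures queue hand-off, not genuine starvation; a larger capacity decouples the two sides.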

Miscellaneous fixes

CSJ jump size adjustment

When syncing from mainnet, we discovered that CSJ wouldn't sync the blocks from the Byron era. This was because the jump size was set to the length of the genesis window of the Shelley era, which is much larger than Byron's. When the jump size is larger than the genesis window, the dynamo will block on the forecast horizon before offering a jump that allows the chain selection to advance. In this case, CSJ and chain selection will deadlock.

For this reason we set the default jump size to the size of Byron's genesis window in 028883a. This showed no impact on syncing time in our measurements. Future work (as part of deploying Genesis) might involve allowing the jump size to vary between eras.
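The constraint can be stated compactly: to avoid the deadlock, the jump size must not exceed the smallest genesis window among the eras being synced. A toy Python sketch (the window values are illustrative, not the node's actual configuration):

```python
# Illustrative genesis-window lengths in slots; not authoritative values.
GENESIS_WINDOW = {
    "byron": 2 * 2160,         # Byron's window is much smaller ...
    "shelley": 3 * 2160 * 20,  # ... than Shelley's.
}

def safe_jump_size(windows):
    # A jump larger than some era's genesis window can make the dynamo
    # block on the forecast horizon before ever offering a jump that
    # lets chain selection advance, deadlocking CSJ against ChainSel.
    # Fitting the smallest window is therefore the safe default.
    return min(windows.values())

assert safe_jump_size(GENESIS_WINDOW) == 4320
```

A per-era jump size, as mentioned for future work, would replace this single minimum with a lookup keyed on the era of the current jump point.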

GDD rate limit

GDD evaluation showed an overhead of 10% when run after every header arrival via ChainSync. Therefore, in b7fa122 we limited how often it can run, so that multiple header arrivals can be handled by a single GDD evaluation.
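The coalescing behaviour can be sketched as a simple rate limiter in Python (hypothetical structure; the PR's implementation is in Haskell):

```python
import time

class RateLimited:
    """Toy model of rate-limiting GDD evaluation: triggers arriving
    within one minimum interval are coalesced into a single run."""

    def __init__(self, action, min_interval):
        self.action = action            # the (expensive) GDD evaluation
        self.min_interval = min_interval
        self.last_run = float("-inf")
        self.pending = False            # a coalesced, not-yet-served trigger

    def trigger(self, now=None):
        # Called on every header arrival; `now` can be injected for tests.
        now = time.monotonic() if now is None else now
        if now - self.last_run >= self.min_interval:
            self.last_run = now
            self.pending = False
            self.action()
        else:
            # Too soon: remember that work is outstanding so the next
            # eligible trigger handles all arrivals since the last run.
            self.pending = True

runs = []
gdd = RateLimited(lambda: runs.append(1), min_interval=1.0)
for t in (0.0, 0.1, 0.2):
    gdd.trigger(now=t)
assert len(runs) == 1 and gdd.pending
```

Three header arrivals within the interval cost one evaluation instead of three, which is where the 10% overhead reduction comes from.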

Candidate fragment comparison in the ChainSync client

We stumbled upon a test case where the candidate fragments of the dynamo and an objector were no longer than the current selection (both peers were adversarial). This was problematic because BlockFetch would refuse to download blocks from these candidates, and ChainSync in turn would wait for the selection to advance in order to download more headers.

The fix in e27a73c is to have the ChainSync client disconnect a peer which is about to block on the forecast horizon if its candidate isn't better than the selection.
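The essence of the check is a small predicate; the sketch below uses block numbers for simplicity, whereas the real ChainSync client compares candidates via the chain order (hypothetical function name):

```python
def should_disconnect(candidate_tip_block_no,
                      selection_tip_block_no,
                      blocked_on_forecast_horizon):
    """Toy version of the check added in this PR: a peer that is about
    to block on the forecast horizon, and whose candidate is not
    strictly better than the current selection, can never help the
    selection advance, so it is disconnected. The real comparison uses
    chain preference, not bare block numbers."""
    return (blocked_on_forecast_horizon
            and candidate_tip_block_no <= selection_tip_block_no)

# A stalled peer with a candidate no better than the selection is dropped.
assert should_disconnect(100, 100, True)
# A peer with a strictly better candidate is kept.
assert not should_disconnect(101, 100, True)
# A peer that is not blocked is kept regardless.
assert not should_disconnect(100, 100, False)
```

Disconnecting such peers breaks the cycle in which BlockFetch waits for a better candidate while ChainSync waits for the selection to advance.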

Candidate fragment truncations

At the moment, it is possible for a candidate fragment to be truncated by CSJ when a jumper jumps to a point that is not younger than the tip of its current candidate fragment. We encountered tests where the jump point could be so old that it would fall behind the immutable tip, and GDD would ignore the peer when computing the Limit on Eagerness. This in turn would cause the selection to advance into potentially adversarial chains.

The fix in dc5f6f7 is to have GDD never drop candidates. When the candidate does not intersect the current selection, the LoE is not advanced. This is a situation guaranteed to be unblocked by the ChainSync client since it will either disconnect the peer or bring the candidate to intersect with the current selection.
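The effect on the Limit on Eagerness can be modeled in a few lines of Python (a deliberately simplified sketch; the real GDD works on anchored fragments):

```python
def loe_limit(candidates):
    """Toy model of the LoE computation after the fix. Each candidate
    contributes the number of blocks by which the selection may safely
    advance; None stands for a candidate that does not intersect the
    current selection. Before the fix such candidates were dropped
    from the computation; after the fix they pin the limit to 0 until
    ChainSync either disconnects the peer or brings the candidate back
    to intersect the selection."""
    limits = [0 if c is None else c for c in candidates]
    return min(limits, default=0)

# A non-intersecting (e.g. truncated) candidate pins the LoE ...
assert loe_limit([5, 3, None]) == 0
# ... whereas otherwise the least-advanced candidate bounds it.
assert loe_limit([5, 3]) == 3
```

Pinning the limit to 0, rather than ignoring the candidate, is what prevents the selection from advancing into a potentially adversarial chain while the truncated candidate is being repaired.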

@Niols Niols added the Genesis PRs related to Genesis testing and implementation label Jul 3, 2024
@amesgen amesgen changed the base branch from niols/blockfetch-leashing to main August 5, 2024 12:34
@facundominguez facundominguez force-pushed the blockfetch/milestone-1 branch 2 times, most recently from da90985 to b3434e8 on August 5, 2024 21:05
Comment on lines 600 to 606
-- REVIEW: What about all the threads that are waiting to write in the queue and
-- will write after the flush?!
Indeed, there could be a short time after the queue is flushed but before the ChainDB is closed (and hence nothing new can be added to the ChainDB). I think this already exists on main.

I raised this on the IOG Slack, but I don't think we need to do anything about it in this PR.


In that case I'd say: let's add a note and remove the REVIEW item.

@dnadales dnadales left a comment

LGTM: please remove any pending REVIEW comments.

, gcLoEAndGDDConfig :: !(LoEAndGDDConfig LoEAndGDDParams)
} deriving stock (Eq, Generic, Show)

-- | Genesis configuration flags and low-level args, as parsed from config file or CLI

It might help to add comments and examples of these flags and their effect on the node's behaviour. It will surely help whoever integrates this with the CLI/Node.

defaultCapacity = 100_000 -- number of tokens
defaultRate = 500 -- tokens per second leaking, 1/2ms
-- 3 * 2160 * 20 works in more recent ranges of slots, but causes syncing to
-- block in byron.
Suggested change
-- block in byron.
-- block in Byron.

-- values carry no special meaning. Someone needs to think about what values
-- would make for interesting tests.
gtLoPBucketParams = LoPBucketParams { lbpCapacity = 50, lbpRate = 10 },
-- ^ REVIEW: Do we want to generate those randomly?

If we do not generate these randomly we should simply add a note that says that we could, but we should remove any pending REVIEWs from the code 🙏

ChainSyncClientHandleCollection peer m blk ->
peer ->
m ()
-- STM m (Maybe (peer, ChainSyncClientHandle m blk))

Dangling comment.

-- Otherwise, the BlockFetch client would have to wait for
-- 'chainSelectionForFutureBlocks'.
--
-- Note: we call 'chainSelectionForFutureBlocks' in all branches instead of

Capitalize NOTE?


Niols and others added 22 commits November 18, 2024 02:20
* Addition of ChainSyncClientHandleCollection, grace period, and starvation event in BlockFetch
* Plug `rotateDynamo` into `BlockFetchConsensusInterface`
* Removal of `bfcMaxConcurrencyBulkSync`
* Changes in blockfetch decision tracing
* Move Genesis-specific BlockFetch config to GenesisConfig
* Introduce GenesisConfigFlags for interaction with config files/CLI
* Add missing instances for Genesis configuration
* Mention that the objector also gets demoted
* Edit note on Interactions with the BlockFetch logic
* Expand the comments motivating DynamoInitState and ObjectorInitState

Co-authored-by: Nicolas “Niols” Jeannerod <[email protected]>
* Run more repetitions of LoE, LoP, CSJ, and gdd tests
* Print timestamps for node restarts
* Disable boring timeouts in the node restart test
* Wait sufficiently long at the end of tests
* Expect CandidateTooSparse in gdd tests
* Add a notice about untracked delays in the node restart test
* Set the GDD rate limit to 0 in the peer simulator
* Have the peer simulator use the default grace period for chainsel starvations
* Relax expectations of test blockFetch in the BulkSync case
* Allow to run the decision logic once after the last tick in the blockfetch leashing attack
* Shift point schedule times before giving the schedules to tests
* Accommodate for separate decision loop intervals for fetch modes
* Accommodate for timer added in blockFetchLogic
* Switch peer simulator to `FetchModeBulkSync`
* Allow parameterizing whether chainsel starvation is handled
* Add some wiggle room for duplicate headers in CSJ tests
* Disable chainsel starvation in CSJ test
Labels
Genesis PRs related to Genesis testing and implementation
5 participants