
validation: sync chainstate to disk after syncing to tip #15218

Open · wants to merge 2 commits into master from flush-after-ibd
Conversation

@andrewtoth (Contributor) commented Jan 20, 2019

After the chainstate finishes syncing to tip, it is not persisted to disk until 24 hours after startup. This can cause an issue where the unpersisted chainstate must be resynced if bitcoind is not shut down cleanly. With a large enough dbcache, it's possible the entire chainstate from genesis would have to be resynced.

This PR fixes the issue by persisting the chainstate to disk right after syncing to tip, without clearing the utxo cache (using the Sync method introduced in #17487). It works by scheduling a call to the new function SyncCoinsTipAfterChainSync every 30 seconds. This function checks that the node is out of IBD, then checks that no new block has been added since the last call, and finally checks that no blocks are currently being downloaded from peers. If all these conditions are met, the chainstate is persisted and the function is no longer scheduled.

Mitigates #11600.
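To make the description above concrete, here is a minimal sketch of the shape of that check. SyncCoinsTipAfterChainSync and the non-clearing Sync() are named in this thread; the `last_seen_tip` static, the `HasBlocksInFlight()` helper, and the bool return are hypothetical stand-ins for illustration, not the PR's actual code:

```cpp
// Sketch only; assumes Bitcoin Core's ChainstateManager interface.
bool SyncCoinsTipAfterChainSync(ChainstateManager& chainman)
{
    // Still in initial block download: try again on the next scheduler run.
    if (chainman.IsInitialBlockDownload()) return false;

    // A new block arrived since the last 30-second check: not settled yet.
    static const CBlockIndex* last_seen_tip{nullptr};
    const CBlockIndex* tip{chainman.ActiveChain().Tip()};
    if (tip != last_seen_tip) {
        last_seen_tip = tip;
        return false;
    }

    // Peers are still delivering blocks: wait for those to finish.
    if (HasBlocksInFlight()) return false;  // hypothetical helper

    // Quiescent: persist the dirty utxo cache to disk without emptying it
    // (the non-clearing Sync() from #17487).
    chainman.ActiveChainstate().CoinsTip().Sync();
    return true;  // done; the caller stops rescheduling
}
```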

@laanwj (Member) commented Jan 21, 2019

Concept ACK, but I think IsInitialBlockDownload is the wrong place to implement this: it's a query function, and having it suddenly spawn a thread that flushes is unexpected.

Would be better to implement it closer to the validation logic and database update logic itself.

@andrewtoth (Contributor, Author) commented:

@laanwj Good point. I refactored to move this behaviour to ActivateBestChain in an area where periodic flushes are already expected.

@laanwj (Member) commented Jan 22, 2019

> @laanwj Good point. I refactored to move this behaviour to ActivateBestChain in an area where periodic flushes are already expected.

Thanks, much better!

@sdaftuar (Member) commented:

I'm not really a fan of this change -- the problem described in #11600 is from an unclean shutdown (i.e. a system crash), where our recovery code could take a long time (but typically would be much faster than doing a -reindex to recover, which is how our code used to work).

This change doesn't really solve that problem; it just changes the window in which an unclean shutdown could occur (reducing it by at most 24 hours). But extra flushes, particularly during initial sync, aren't obviously a good idea, since they harm performance. (Note that we leave IBD before we've synced all the way to the tip -- I think once we're within a day or two?)

Because we flush every day anyway, it's hard for me to say that this is really that much worse, performance-wise (after all we don't currently support a node configuration where the utxo is kept entirely cached). But I'm not sure this solves anything either, and a change like this would have to be reverted if, for instance, we wanted to make the cache actually more useful on startup (something I've thought we should do for a while). So I think I'm a -0 on this change.

@andrewtoth (Contributor, Author) commented Jan 23, 2019

@sdaftuar This change also greatly improves the common workflow of spinning up a high-performance instance to sync, then immediately shutting it down and using a cheaper one. Currently, you have to log into it and do a clean shutdown instead of just terminating. Similarly, when syncing to an external drive, you can now just unplug the drive or turn off the machine when finished.

I would argue that moving the window to 0 hours directly after initial sync is an objective improvement. There is a lot of data that would be lost directly after sync, so why risk another 24 hours? After that, the most a user would lose is 24 hours' worth of rolling back, instead of 10 years. Also, this change does not do any extra flushes during initial sync, only after.

I can't speak to your last point about changing the way we use the cache, since I don't know what your ideas are.

@sdaftuar (Member) commented:

> Currently, you have to log into it and do a clean shutdown instead of just terminating.

@andrewtoth We already support this (better, I think) with the -stopatheight argument, no?

I don't really view data that is in memory as "at risk"; I view it as a massive performance optimization that will allow a node to process new blocks at the fastest possible speed while the data hasn't yet been flushed. I also don't feel very strongly about this for the reasons I gave above, so if others want this behavior then so be it.

@sipa (Member) commented Jan 23, 2019

@sdaftuar Maybe this is a bit of a different discussion, but there is another option: namely, supporting flushing the dirty state to disk without wiping it from the cache. Based on our earlier benchmarking, we wouldn't want to do this purely for maximizing IBD performance, but it could be done at specific times to minimize losses in case of crashes (the once-per-day flush, for example, and also this IBD-is-finished one).
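For readers unfamiliar with the distinction sipa is drawing, here is a self-contained toy model (plain C++, not Bitcoin Core's actual CCoinsViewCache) of a flush that clears the cache versus a sync that persists dirty entries but keeps them cached:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

struct CacheEntry {
    std::string value;
    bool dirty{false};  // modified since last write to the backing store
};

class ToyCoinsCache {
    std::unordered_map<uint64_t, CacheEntry> m_cache;
    std::unordered_map<uint64_t, std::string>& m_disk;  // backing store

public:
    explicit ToyCoinsCache(std::unordered_map<uint64_t, std::string>& disk) : m_disk(disk) {}

    void Write(uint64_t key, std::string value)
    {
        m_cache[key] = {std::move(value), /*dirty=*/true};
    }

    // Persist dirty entries and drop everything: crash-safe afterwards, but
    // every subsequent lookup misses the cache and must hit disk again.
    void Flush()
    {
        for (auto& [key, entry] : m_cache) {
            if (entry.dirty) m_disk[key] = entry.value;
        }
        m_cache.clear();
    }

    // Persist dirty entries but keep them cached: the same crash safety,
    // while subsequent lookups still hit warm cache entries.
    void Sync()
    {
        for (auto& [key, entry] : m_cache) {
            if (entry.dirty) m_disk[key] = entry.value;
            entry.dirty = false;
        }
    }
};
```

The crash-safety benefit is identical either way; the difference is only whether lookups after the write still hit a warm cache.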

@sdaftuar (Member) commented:

@sipa Agreed, I think that would make a lot more sense as a first-pass optimization for the periodic flushes, and it would work better for this purpose as well.

@gmaxwell (Contributor) commented:

> Currently, you have to log into it and do a clean shutdown instead of just terminating.

Well, with this, if you "just terminate" you're going to end up with a replay of several days of blocks at startup, which is still ugly, even if this makes it less bad.

As an aside, if you actually shut off the computer at any time during IBD, you'll likely completely corrupt the state and need to reindex, because we don't use fsync during IBD for performance reasons.

We really need to get background writing going, so that our writes are never more than (say) a week of blocktime behind... but that is a much bigger change, so I don't suggest "just do that instead", though it would make the change here completely unnecessary.

Might it be better to trigger the flush the first time it goes 30 seconds without connecting a block and there are no queued transfers, from the scheduler thread?

@andrewtoth (Contributor, Author) commented Jan 25, 2019

> @andrewtoth We already support this (better, I think) with the -stopatheight argument, no?

@sdaftuar Ahh, I never considered using that for this purpose. Thanks!

@gmaxwell It might still be ugly to have a replay of a few days, but much better than making everything unusable for hours.

There are comments from several people in this PR about adding background writing and writing dirty state to disk without wiping the cache. This change wouldn't affect either of those improvements, and is an improvement by itself in the interim.

As for moving this to the scheduler thread, I think the current approach is better since the flush happens in a place where periodic flushes are already expected. Also, checking every 30 seconds for a new block wouldn't work if, for instance, the network cuts out for a few minutes.

@sipa (Member) commented Jan 25, 2019

@andrewtoth The problem is that right now, causing a flush when exiting IBD will (temporarily) kill your performance right before finishing the sync (because it leaves you with an empty cache). If instead it was a non-clearing flush, there would be no such downside.

@sdaftuar (Member) commented:

My experiment in #15265 has changed my view on this a bit -- now I think that we might as well make a change like this for now, but should change the approach slightly to do something like @gmaxwell's proposal so that we don't trigger the flush before we are done syncing:

> Might it be better to trigger the flush the first time it goes 30 seconds without connecting a block and there are no queued transfers, from the scheduler thread?

andrewtoth force-pushed the flush-after-ibd branch 2 times, most recently from f1be35e to 442db9d on February 10, 2019
@andrewtoth (Contributor, Author) commented Feb 10, 2019

@sdaftuar @gmaxwell I've updated this to check every 30 seconds on the scheduler thread whether the active chain height has changed. The check only runs after IsInitialBlockDownload returns false, which happens once the latest block is within a day of the current time.

I'm not sure how to check if there are queued transfers. If this is not sufficient, some guidance on how to do that would be appreciated.
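As a rough illustration of the wiring described here, assuming Bitcoin Core's CScheduler::scheduleFromNow interface and the hypothetical SyncCoinsTipAfterChainSync(chainman) helper sketched under the PR description above (returning true once the one-time sync has happened); the PR's actual mechanism may differ:

```cpp
#include <chrono>

void ScheduleCoinsTipSync(CScheduler& scheduler, ChainstateManager& chainman)
{
    scheduler.scheduleFromNow([&scheduler, &chainman] {
        // Keep checking every 30 seconds until the sync has happened once,
        // then stop rescheduling entirely.
        if (!SyncCoinsTipAfterChainSync(chainman)) {
            ScheduleCoinsTipSync(scheduler, chainman);
        }
    }, std::chrono::seconds{30});
}
```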

andrewtoth force-pushed the flush-after-ibd branch 2 times, most recently from 79a9ed2 to 3abbfb0 on February 11, 2019
DrahtBot removed the CI failed label on Feb 3, 2024
andrewtoth force-pushed the flush-after-ibd branch 2 times, most recently from 5928711 to 5275d3c on February 22, 2024
@andrewtoth (Contributor, Author) commented:

@Sjors @maflcko @luke-jr I've rebased, added some logging as well as a functional test.

andrewtoth changed the title from "validation: Flush state after initial sync" to "validation: sync chainstate to disk after syncing to tip" on Mar 13, 2024
andrewtoth force-pushed the flush-after-ibd branch 2 times, most recently from 9dd521b to 363f325 on March 13, 2024
luke-jr pushed commits to bitcoinknots/bitcoin that referenced this pull request Mar 14, 2024
@mzumsande (Contributor) left a comment:


Concept ACK

While this one-time sync after IBD should help in some situations, I'm not sure that it completely resolves #11600 (I encountered this PR while looking into possible improvements to ReplayBlocks()). After all, there are several other situations in which a crash / unclean shutdown could lead to extensive replays (e.g. during IBD) that this PR doesn't address.

@DrahtBot (Contributor) commented Jun 3, 2024

🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the documentation.

Possibly this is due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.

Leave a comment here, if you need help tracking down a confusing failure.

Debug: https://github.com/bitcoin/bitcoin/runs/25710459287

@andrewtoth (Contributor, Author) commented:

@mzumsande @chrisguida thank you for your reviews and suggestions. I've addressed them and rebased.

DrahtBot removed the CI failed label on Jun 3, 2024