When a BDA is first initialized, it has no variable-length metadata; that is correct. We assume that after setup the BDA has at least one copy of variable-length metadata, and that that copy is correct.
If saving to both areas succeeds, the MDAHeaders are correctly updated, i.e., the designated header is updated with the correct new information: the timestamp, the size of the metadata written, and the metadata CRC.
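A minimal sketch of what a successful save records; the struct, field names (`last_updated`, `used`, `data_crc`) and the checksum helper are invented for illustration and are not the actual stratisd `MDAHeader` type.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical stand-in for an MDA header; field names are assumptions.
#[derive(Debug, Default)]
struct MdaHeader {
    last_updated: u64, // seconds since the epoch at which the metadata was written
    used: u64,         // size in bytes of the metadata actually written
    data_crc: u32,     // checksum of the metadata payload
}

// Placeholder checksum; the real code uses a proper CRC32.
fn crc_of(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(u32::from(b)))
}

impl MdaHeader {
    // Record the outcome of a successful write to this metadata area.
    fn record_save(&mut self, data: &[u8], time: SystemTime) {
        self.last_updated = time.duration_since(UNIX_EPOCH).unwrap().as_secs();
        self.used = data.len() as u64;
        self.data_crc = crc_of(data);
    }
}

fn main() {
    let mut header = MdaHeader::default();
    header.record_save(b"pool metadata", SystemTime::now());
    println!("{:?}", header);
}
```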
If the size of the data is too big for the metadata region, saving fails immediately, without making any attempt to write or to update the headers. The same is true for a save time that is less recent than the timestamp already recorded for the data. In each of these cases an EngineError with value Invalid is returned, which makes it possible to distinguish these conditions from I/O errors. This gets back to the question of how to select which blockdevs to write the metadata to in the first place. It makes no sense to try to write data that is too big, but we have no reason to believe that the data will always increase in size. If we trim our pool right down, the data could actually decrease in size.
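A sketch of the two fail-fast checks. The error enum and the function signature are invented for illustration; they are only meant to show how a caller could tell these Invalid conditions apart from I/O errors.

```rust
use std::io;

// Invented for illustration: an error type that lets callers distinguish
// validation failures (Invalid) from I/O errors.
#[derive(Debug)]
enum EngineError {
    Invalid(String),
    Io(io::Error),
}

fn check_save(
    data_len: u64,     // size of the serialized metadata to be written
    save_time: u64,    // proposed timestamp for this save
    region_size: u64,  // capacity of the metadata region
    last_updated: u64, // timestamp already recorded in the MDA header
) -> Result<(), EngineError> {
    if data_len > region_size {
        // Too big for the region: fail before any write or header update.
        return Err(EngineError::Invalid(
            "metadata too large for the metadata region".into(),
        ));
    }
    if save_time < last_updated {
        // Would record a save time older than the data already on disk.
        return Err(EngineError::Invalid(
            "save time is less recent than the recorded timestamp".into(),
        ));
    }
    Ok(())
}

fn main() {
    // A save that would not fit is rejected without touching the headers.
    let result = check_save(4096, 100, 1024, 50);
    println!("{:?}", result.map_err(|e| format!("{:?}", e)));
}
```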
If saving to the first area, or to both, fails, the corresponding MDAHeader will not be updated and an error result will be returned. This is a good, simple decision. It has one interesting consequence, though: the next time metadata needs to be written to the disks, that region will be attempted again. If the failure was non-transient, that particular blockdev will fail to have the metadata correctly written over and over again. We should consider the consequences of this.
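A sketch of that consequence, with invented names and one plausible arrangement of the write loop: nothing records that a region failed, so each save simply tries it again and returns the first error.

```rust
use std::io;

// Invented stand-ins: a metadata region and a write that may fail.
struct Region {
    header_updated: bool,
}

fn write_region(_region: &mut Region, _data: &[u8]) -> Result<(), io::Error> {
    Ok(()) // placeholder for the real on-disk write
}

// Write each region in turn. A region's header is updated only if its write
// succeeds; on the first failure the error is returned and nothing marks the
// region as bad, so the next save will try it again.
fn save(regions: &mut [Region], data: &[u8]) -> Result<(), io::Error> {
    for region in regions.iter_mut() {
        write_region(region, data)?; // error: header left untouched
        region.header_updated = true; // success: header gets the new info
    }
    Ok(())
}

fn main() {
    let mut regions = vec![
        Region { header_updated: false },
        Region { header_updated: false },
    ];
    save(&mut regions, b"pool metadata").unwrap();
    println!("headers updated: {}", regions.iter().all(|r| r.header_updated));
}
```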
What this leaves open is understanding the probability of getting wrong data when reading metadata during setup.
We actually ought to document how our metadata reading works in the first place. I think we can do that with a reasonably sized FSM encoding a regular language. Right now, the points above are what we know.
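A sketch only, with invented names, of the kind of small FSM the reading side could be documented as, assuming the two-copy layout described above: start from the copy whose header carries the newer timestamp, fall back to the other copy if the data fails its CRC check, and give up only when neither copy verifies. The real stratisd logic may differ.

```rust
// Invented stand-in for an MDA header as read from disk.
#[derive(Clone, Copy)]
struct Header {
    timestamp: u64,
    used: usize,
    data_crc: u32,
}

// States of the reading FSM.
enum State {
    Try { region: usize, fallback: Option<usize> },
    Done(Vec<u8>),
    Failed,
}

// Placeholder checksum; the real code uses a proper CRC32.
fn crc_of(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(u32::from(b)))
}

fn load_metadata(headers: [Option<Header>; 2], regions: [&[u8]; 2]) -> Option<Vec<u8>> {
    // Start with the region whose header carries the newer timestamp.
    let mut state = match (headers[0], headers[1]) {
        (Some(a), Some(b)) => {
            let first = if a.timestamp >= b.timestamp { 0 } else { 1 };
            State::Try { region: first, fallback: Some(1 - first) }
        }
        (Some(_), None) => State::Try { region: 0, fallback: None },
        (None, Some(_)) => State::Try { region: 1, fallback: None },
        (None, None) => State::Failed,
    };

    loop {
        state = match state {
            State::Try { region, fallback } => {
                let header = headers[region].expect("only regions with headers are tried");
                let data = &regions[region][..header.used];
                if crc_of(data) == header.data_crc {
                    State::Done(data.to_vec()) // CRC matches: accept this copy
                } else if let Some(other) = fallback {
                    State::Try { region: other, fallback: None } // try the other copy
                } else {
                    State::Failed // neither copy verified
                }
            }
            State::Done(data) => return Some(data),
            State::Failed => return None,
        };
    }
}

fn main() {
    let data = b"pool metadata".to_vec();
    let header = Header { timestamp: 42, used: data.len(), data_crc: crc_of(&data) };
    let loaded = load_metadata([Some(header), None], [data.as_slice(), &[]]);
    assert_eq!(loaded, Some(data));
    println!("metadata read back successfully");
}
```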