Table Scan Delete File Handling: Positional and Equality Delete Support #652
base: main
Conversation
@Xuanwo, @liurenjie1024: This is now ready to review, PTAL when you guys get a chance. Looking forward to your feedback 😁
Hi @liurenjie1024 and @Xuanwo - would either of you be able to review this at some point please? I know it's a bit large, sorry. Thanks :-)
Hi, @sdd Thanks for your patience. In fact, I have already started reviewing it, and it's a little large, so it may take some time.
Hey @liurenjie1024 - sorry to make changes whilst you are reviewing. I updated the design of the …
Thanks @sdd for your patience, it's a really large PR and took me some time to review. I think generally you've understood how deletion files work, but I have some concerns about the current code as it mixes a lot of things together. Deletion handling is quite challenging, and I think the design from the Java implementation is quite reasonable:
I think maybe we need a design that splits this into smaller parts. What do you think?
// filename to a sorted list of row indices.
// TODO: Ignoring the stored rows that are present in
// positional deletes for now. I think they are only used for statistics?
Positional(HashMap<String, Vec<u64>>),
The Java implementation uses a roaring bitmap to save space; should we use one as well?
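As a rough illustration of that suggestion, here is a minimal sketch assuming the `roaring` crate's 64-bit `RoaringTreemap` as the per-file container; the type and method names are illustrative, not the PR's actual code:

```rust
use std::collections::HashMap;

use roaring::RoaringTreemap;

// Hypothetical alternative to Positional(HashMap<String, Vec<u64>>):
// one 64-bit roaring bitmap of deleted row positions per data file path.
struct PositionalDeletes {
    by_file: HashMap<String, RoaringTreemap>,
}

impl PositionalDeletes {
    // Record a deleted row position for a data file.
    fn insert(&mut self, file_path: &str, pos: u64) {
        self.by_file
            .entry(file_path.to_string())
            .or_default()
            .insert(pos);
    }

    // Deleted positions come back in ascending order, which is what the
    // later RowSelection construction needs.
    fn deleted_positions(&self, file_path: &str) -> impl Iterator<Item = u64> + '_ {
        self.by_file.get(file_path).into_iter().flat_map(|b| b.iter())
    }
}
```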
/// Manages async retrieval of all the delete files from FileIO that are
/// applicable to the scan. Provides references to them for inclusion within FileScanTasks
#[derive(Debug, Clone)]
struct DeleteFileManager {
I would suggest moving this part to a standalone module. There is a component called DeleteFileIndex in the Java implementation which I think is well designed. We don't need to implement all of its details in one PR, but maintaining a similar data structure and API to DeleteFileIndex, and evolving it slowly, would be easier.
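As a very rough sketch of the shape such a standalone module could take, loosely modelled on the Java DeleteFileIndex (all names, fields, and signatures below are hypothetical, not existing iceberg-rust APIs):

```rust
use std::collections::HashMap;

// Hypothetical stand-in types; in the real crate these would be the
// existing data-file and delete-file metadata structs.
struct DataFileRef { path: String, partition_key: String }
struct DeleteFileRef { path: String, partition_key: String, is_equality: bool }

/// Sketch of a DeleteFileIndex-like structure: delete files are grouped up
/// front so that per-data-file lookups are cheap during planning.
struct DeleteFileIndex {
    // Partition-scoped delete files, keyed by partition.
    by_partition: HashMap<String, Vec<DeleteFileRef>>,
    // Global (unpartitioned) equality deletes that may apply to any data file.
    global_deletes: Vec<DeleteFileRef>,
}

impl DeleteFileIndex {
    /// Return the delete files that may apply to a given data file.
    /// Real pruning would also consider sequence numbers and file stats.
    fn deletes_for(&self, data_file: &DataFileRef) -> Vec<&DeleteFileRef> {
        self.global_deletes
            .iter()
            .chain(
                self.by_partition
                    .get(&data_file.partition_key)
                    .into_iter()
                    .flatten(),
            )
            .collect()
    }
}
```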
/// A task to scan part of file.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct FileScanTaskDeleteFile {
This may evolve as we add more features, so I would suggest making this a crate-only data structure.
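For instance, the suggestion amounts to something like the following (the field shown is only illustrative):

```rust
use serde::{Deserialize, Serialize};

// pub(crate) keeps the struct out of the public API, so its fields can
// change freely as more delete-file features are added.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub(crate) struct FileScanTaskDeleteFile {
    pub(crate) file_path: String,
}
```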
pub(crate) async fn parse_positional_delete_file(
I think this logic is quite similar to the data file reader; I would expect it to reuse the data file reader's code.
Ok(Deletes::Positional(result))
}

pub(crate) async fn parse_equality_delete_file(
Considering schema evolution, we should use the same logic as data file processing.
fn equality_delete_column_to_datum_vec(
    column: &ArrayRef,
    field: &NestedFieldRef,
) -> Result<Vec<Option<Datum>>> {
Should this be a struct set?
// that are not applicable to the DataFile?

DeleteFileManagerFuture {
    files: self.files.clone(),
This is incorrect even if we ignore pruning techniques to remove unrelated deletion files. Please see this part for details.
Thanks so much for the review on this @liurenjie1024 - I've been ill for the past week or two so I've not had a chance to work through your review in detail yet. I just wanted to let you know I've seen it and will pick it up when I've recovered. 👍
Hi, @sdd Sorry to hear that, take care of yourself! Don't worry about this, I'll be happy to discuss it with you anytime once you're back.
This PR adds support for handling of both positional and equality delete files within table scans.

The approach taken is to include a list of delete file paths in every `FileScanTask`. At the moment it is assumed that this list refers to delete files that may apply to the data file in the scan, rather than having been filtered to only contain delete files that definitely do apply to this data file. Further optimisation of `plan_files` is expected in the future to ensure that this list is pre-filtered before being included in the `FileScanTask`.

The arrow reader now has the responsibility of retrieving the applicable delete files from FileIO as well as the data file.
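A simplified sketch of that shape (field names here are illustrative rather than the PR's exact definitions):

```rust
// Simplified sketch: each FileScanTask carries the paths of delete files
// that *may* apply to its data file; the arrow reader is then responsible
// for fetching and applying them alongside the data file itself.
#[derive(Debug, Clone)]
pub struct FileScanTask {
    /// Path of the data file to scan.
    pub data_file_path: String,
    /// Paths of delete files that may apply to this data file. For now the
    /// list is not pre-filtered, so some entries may not actually apply.
    pub delete_file_paths: Vec<String>,
    // ... other fields (schema, projection, predicate, etc.) omitted.
}
```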
Thanks to the Object Cache already being in place, if there are file scan tasks being handled in parallel in the same process that both attempt to load the same delete file, the Object Cache should ensure that only a single load and parse occurs. I've just realised that storing data files in the Object Cache is in my local fork - we only store manifests and manifest lists in the Object Cache in this repo! I can create an issue for adding this in a follow-up PR as it brings big performance improvements in many situations.

The approach taken for handling each type of delete file is as follows:
Positional Deletes
Positional Delete support is implemented using the `RowSelection` functionality of the parquet arrow reader that we are already using. The list of applicable positional delete indices is turned into a `RowSelection`. If the scan already has `enable_row_selection` enabled and there is a scan filter predicate, then the `RowSelection` from this is intersected with the positional delete `RowSelection` to yield a single combined `RowSelection`.

NB: This implementation relies on all data files for the table having been written with parquet page indexes in order for positional delete file application to succeed. This seems to be the case with parquet files written by Spark or Iceberg Java, but not pyiceberg. In scenarios where positional delete files are encountered but one or more of the data files that they apply to does not contain a page index, the scan will return an error. This is at least an improvement on the status quo, where positional delete files cause a scan to fail in all circumstances, and for consumers whose parquet files are written with page indexes this will not be an issue.
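A minimal sketch of the index-to-`RowSelection` conversion and intersection described above, using the parquet crate's `RowSelection` and `RowSelector` types; the helper itself and its exact integration point are assumptions:

```rust
use parquet::arrow::arrow_reader::{RowSelection, RowSelector};

/// Hypothetical helper: build a RowSelection that skips the given
/// (sorted, deduplicated) deleted row positions out of `total_rows`.
fn positional_deletes_to_selection(deleted: &[u64], total_rows: u64) -> RowSelection {
    let mut selectors = Vec::new();
    let mut cursor = 0u64;
    for &idx in deleted {
        if idx > cursor {
            // Rows between the previous delete and this one are selected.
            selectors.push(RowSelector::select((idx - cursor) as usize));
        }
        // The deleted row itself is skipped.
        selectors.push(RowSelector::skip(1));
        cursor = idx + 1;
    }
    if cursor < total_rows {
        // Everything after the last delete is selected.
        selectors.push(RowSelector::select((total_rows - cursor) as usize));
    }
    RowSelection::from(selectors)
}

// Combining with a filter-derived selection, as described above:
// let combined = filter_selection.intersection(&delete_selection);
```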
Equality Deletes
All rows from all applicable equality delete files are combined together to create a single `BoundPredicate`. If the scan also has a filter predicate, this is ANDed with the delete predicate to form a final combined `BoundPredicate` that is used as before to construct the arrow `RowFilter` and is also used in the row group filtering.
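A rough sketch of how the delete rows could be folded into a single predicate, assuming the decoded equality delete rows are available as (column name, `Datum`) pairs; the helper and its inputs are illustrative, not the PR's actual code:

```rust
use iceberg::expr::{Predicate, Reference};
use iceberg::spec::Datum;

// Hypothetical helper: a data row survives the equality deletes only if it
// does NOT match any delete row on all of that row's equality fields.
fn equality_deletes_to_predicate(delete_rows: &[Vec<(String, Datum)>]) -> Predicate {
    delete_rows.iter().fold(Predicate::AlwaysTrue, |keep, row| {
        // A data row is deleted when it equals *every* field of this delete row...
        let matches_row = row.iter().fold(Predicate::AlwaysTrue, |m, (col, val)| {
            m.and(Reference::new(col.clone()).equal_to(val.clone()))
        });
        // ...so the "keep" predicate is the negation of that conjunction.
        keep.and(matches_row.negate())
    })
}

// As described above, the result would then be ANDed with any scan filter
// predicate before being bound and used for the arrow RowFilter and the
// row group filtering.
```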
Update
I added Equality Delete handling into this PR as it only made a difference of about 350 lines.