Under some situations in the Parquet reader (particularly with tables containing many columns or deeply nested columns) we burn a decent amount of time doing cudaMemset() operations on output buffers. A good amount of this overhead seems to stem from the fact that we're simply launching many tiny kernels. It might be useful to have a batched/multi memset kernel that takes a list of addresses/sizes/values as a single input and does all the work in a single kernel launch, similar to the CUB multi-buffer memcpy or contiguous_split.
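For illustration, here is a minimal sketch of what such a batched memset could look like. The names `memset_task`, `batched_memset_kernel`, and `batched_memset` are hypothetical and not part of any existing API; the launch configuration (one block per buffer, threads striding over bytes) is the simplest possible scheme, whereas a production version would want to balance work across blocks when buffer sizes are very uneven and vectorize the stores.

```cpp
// Hypothetical sketch: replace N cudaMemset() calls with one kernel launch.
#include <cuda_runtime.h>
#include <cstdint>
#include <cstddef>

// Describes one buffer to fill: destination pointer, size in bytes, fill value.
struct memset_task {
  void*        dst;
  std::size_t  size;
  std::uint8_t value;
};

// One block per task; threads in the block stride over that task's bytes.
__global__ void batched_memset_kernel(memset_task const* tasks, int num_tasks)
{
  int const task_idx = blockIdx.x;
  if (task_idx >= num_tasks) return;

  memset_task const t = tasks[task_idx];
  auto* out = static_cast<std::uint8_t*>(t.dst);

  for (std::size_t i = threadIdx.x; i < t.size; i += blockDim.x) {
    out[i] = t.value;
  }
}

// Host-side helper: copy the task list to the device and service every
// buffer with a single kernel launch on the given stream.
void batched_memset(memset_task const* host_tasks, int num_tasks, cudaStream_t stream)
{
  if (num_tasks <= 0) return;

  memset_task* d_tasks = nullptr;
  cudaMallocAsync(&d_tasks, num_tasks * sizeof(memset_task), stream);
  cudaMemcpyAsync(d_tasks, host_tasks, num_tasks * sizeof(memset_task),
                  cudaMemcpyHostToDevice, stream);

  batched_memset_kernel<<<num_tasks, 256, 0, stream>>>(d_tasks, num_tasks);

  cudaFreeAsync(d_tasks, stream);
}
```

The point of the design is the same as in the batched-memcpy case: the per-launch overhead is amortized over all output buffers, which matters when most buffers are only a few hundred bytes each.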