Is this a duplicate?
Area
cuda.parallel (Python)
Is your feature request related to a problem? Please describe.
The cuda.parallel module contains a time-consuming JIT compilation (build) step. For reduction, this step invokes the following C API:
cccl/c/parallel/include/cccl/c/reduce.h (line 33 at 70a2872)
On the Python end, this step returns the following structure:
cccl/python/cuda_parallel/cuda/parallel/experimental/__init__.py (lines 177 to 184 at 70a2872)
which contains the cubin (like an object file) along with pointers to the kernels in this cubin. User code usually contains more than one invocation of an algorithm with matching parameter types.
Describe the solution you'd like
We should amortize the JIT cost by caching _CCCLDeviceReduceBuildResult based on the parameters that affect C++ codegen. For instance, for reduce_into = cudax.reduce_into(d_output, d_output, op, h_init), the following parameters affect codegen (the examples in parentheses indicate distinct cache entries):
compute capability of current GPU (cc field of _CCCLDeviceReduceBuildResult)
type of input sequence (container, counting iterator, zip iterator, etc.)
dtype of input sequence (int32, uint64, etc)
type of output sequence (container, counting iterator, zip iterator, etc.)
operator source code (different function bodies; perhaps op.__code__.co_code could serve as a proxy?)
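Let's say we build two reductions as follows (a rough sketch; the imports, array setup, and operator are illustrative assumptions, only the cudax.reduce_into call shape comes from the example above):

```python
import cupy as cp
import numpy as np

import cuda.parallel.experimental as cudax  # import alias assumed from the snippet above


def add_op(a, b):
    return a + b


d_output = cp.empty(1, dtype=np.int32)
h_init = np.array([0], dtype=np.int32)

# First build: runs cccl_device_reduce_build and JIT-compiles the kernels.
reduce_into_1 = cudax.reduce_into(d_output, d_output, add_op, h_init)

# Second build: same iterator kinds, dtypes, operator source, and GPU,
# so it should ideally reuse the cached _CCCLDeviceReduceBuildResult.
reduce_into_2 = cudax.reduce_into(d_output, d_output, add_op, h_init)
```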
Since the parameters described above match between the two invocations of cudax.reduce_into, the second call should not lead to an invocation of extern "C" CCCL_C_API CUresult cccl_device_reduce_build.
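A Python-side cache could then be keyed on exactly these parameters. A minimal sketch (the helper names here, such as _do_build, are hypothetical and not part of the actual cuda.parallel internals):

```python
# Hypothetical module-level cache: codegen key -> _CCCLDeviceReduceBuildResult
_build_cache = {}


def _build_key(d_in, d_out, op, cc):
    """Collect the parameters that affect C++ codegen into a hashable key."""
    return (
        cc,                       # compute capability of the current GPU
        type(d_in), d_in.dtype,   # kind and dtype of the input sequence
        type(d_out),              # kind of the output sequence
        op.__code__.co_code,      # operator bytecode as a proxy for its source
    )


def cached_reduce_build(d_in, d_out, op, h_init, cc):
    key = _build_key(d_in, d_out, op, cc)
    if key not in _build_cache:
        # Cache miss: only here do we call into cccl_device_reduce_build.
        _build_cache[key] = _do_build(d_in, d_out, op, h_init, cc)  # hypothetical helper
    return _build_cache[key]
```

Note that keying on op.__code__.co_code only captures the operator's bytecode, so closures or globals captured by the operator would need separate handling.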
Describe alternatives you've considered
No response
Additional context
No response
Yeah, in cuda.core we'll offer various caches at the Python level, see NVIDIA/cuda-python#176 (can't link the internal design doc here, but we know where to find it 🙂). Not sure if the CCCL C library can easily hook up with a Python-based cache, though.
> Not sure if the CCCL C library can easily hook up with a Python-based cache, though.
@leofang we could consider caching on the C++ end. I was thinking about caching on the Python side of cuda.parallel for now instead, if that makes things any easier.
Just a note here that #3001 implements caching on the Python side (as an immediate improvement). Caching on the C++ side, and file-based caches would still be extremely valuable.
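For illustration only (this is not what #3001 does), a file-based cache could hash the same codegen key and persist the compiled cubin on disk, so a later process skips the C++ codegen and only has to reload the cubin and re-create the kernel handles:

```python
import hashlib
import os

CACHE_DIR = os.path.expanduser("~/.cache/cuda_parallel")  # hypothetical location


def load_or_build_cubin(key_parts, build_cubin_fn):
    """Return cubin bytes for this codegen key, building them only on a miss.

    key_parts is expected to hold the parameters listed in the issue
    (compute capability, iterator kinds, dtypes, operator bytecode);
    build_cubin_fn is a hypothetical callable wrapping cccl_device_reduce_build.
    """
    os.makedirs(CACHE_DIR, exist_ok=True)
    digest = hashlib.sha256(repr(key_parts).encode()).hexdigest()
    path = os.path.join(CACHE_DIR, digest + ".cubin")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    cubin = build_cubin_fn()
    with open(path, "wb") as f:
        f.write(cubin)
    return cubin
```

The kernel pointers stored in _CCCLDeviceReduceBuildResult would still need to be re-derived in the new process by loading the cached cubin.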