I read through the docs and saw the support for running GPU kernels written entirely in Codon/Python. I really like this feature, and I was wondering whether there are any plans to extend it to more traditional graphics shaders/kernels? I would love to have a single toolchain that could take a 3D graphical application written in a single programming language and compile it down to a native binary.
Also, is there any concept of GPU-only memory beyond simply creating GPU and CPU representations of the data types used in kernels? In some cases (again, mainly graphics) you have a lot of data that only ever needs to be touched by the GPU, so once that data has been sent to the GPU you can safely free it on the CPU side. Not a massive issue these days given how much RAM you can get, but on more constrained systems you might want to optimise memory usage for such a case.
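To make the memory pattern concrete, here is a minimal Python sketch of the ownership model I have in mind. `DeviceBuffer` is a hypothetical stand-in for a real GPU-resident allocation (what `cudaMalloc` + a host-to-device copy would give you); as far as I can tell, Codon has no public API like this today, so this is purely illustrative:

```python
class DeviceBuffer:
    """Stand-in for a GPU-resident allocation that outlives its host source."""

    def __init__(self, host_data):
        # A real backend would do cudaMalloc + cudaMemcpyHostToDevice here.
        self._data = list(host_data)

    def __len__(self):
        return len(self._data)


def upload_and_release(host_data):
    buf = DeviceBuffer(host_data)  # copy host -> device
    host_data.clear()              # host copy no longer needed; free it
    return buf


# e.g. static vertex data for a mesh that only the GPU ever reads
mesh = [float(i) for i in range(1024)]
vbo = upload_and_release(mesh)
assert len(mesh) == 0 and len(vbo) == 1024
```

The point is just that for static, GPU-only data the host-side copy is dead weight after the upload, which matters on memory-constrained systems.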