A pod currently can specify:
Does gpu_milli refer to the percentage of memory used for that specific card model? If so, how does the following row translate to the memory required?
name,cpu_milli,memory_mib,num_gpu,gpu_milli,gpu_spec
iopenb-pod-0021,8000,30517,1,440,G2|P100|T4|V100M16|V100M32
For the case of the P100 (16 GB): 440/1000 * 16 = 7.04 GB?
For the case of the V100M32 (32 GB): 440/1000 * 32 = 14.08 GB?
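For concreteness, here is a minimal sketch of the interpretation being asked about: treating gpu_milli as a per-card fraction (in milli-units, 0–1000) of that card model's memory. The `GPU_MEMORY_GIB` table and `implied_memory_gib` helper are illustrative names, not part of the trace; whether gpu_milli actually denotes a memory fraction (as opposed to, say, a compute share) is exactly the open question here.

```python
# Sketch of the questioner's interpretation: gpu_milli as a per-card
# fraction of that model's memory. This is an assumption, not something
# the trace documentation confirms.

# Publicly documented memory capacities for the card models in gpu_spec.
# "G2" is omitted because its capacity is not stated in the trace.
GPU_MEMORY_GIB = {
    "P100": 16,
    "T4": 16,
    "V100M16": 16,
    "V100M32": 32,
}

def implied_memory_gib(gpu_milli: int, model: str) -> float:
    """Memory implied by gpu_milli IF it is a fraction of the card's memory."""
    return gpu_milli / 1000 * GPU_MEMORY_GIB[model]

# The pod row above: gpu_milli=440, gpu_spec=G2|P100|T4|V100M16|V100M32
for model in ["P100", "T4", "V100M16", "V100M32"]:
    print(f"{model}: {implied_memory_gib(440, model):.2f} GiB")
# P100: 7.04 GiB and V100M32: 14.08 GiB, matching the arithmetic above.
```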
I have the same question. Can anybody help answer it?