convert model core dumped #32
Comments
Please make sure you have sufficient GPU memory; also, if you are not sure

Where can I find GPU memory requirements?
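Following up on the GPU-memory suggestion above, free memory can be checked before deserializing the engine. A minimal sketch, assuming `nvidia-smi` is on `PATH` (the helper names here are hypothetical, not from this project):

```python
import subprocess

def parse_free_mib(output: str) -> list[int]:
    """Parse the output of
    `nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits`:
    one integer (MiB of free memory) per visible GPU, one per line."""
    return [int(line) for line in output.splitlines() if line.strip()]

def query_free_mib() -> list[int]:
    """Ask nvidia-smi for the free memory of every visible GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_free_mib(out)

if __name__ == "__main__":
    print(query_free_mib())
```

If the free memory is close to the engine's workspace size, the segfault could simply be an out-of-memory condition during engine deserialization.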
Hello,
When converting the models from ONNX to TensorRT engines, pfe converts successfully, but fpn does not:
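One way to isolate where the fpn conversion fails is to rebuild the engine with `trtexec` and verbose logging. A sketch that assembles the command (the ONNX/engine file names are placeholders; `--onnx`, `--saveEngine`, `--fp16`, and `--verbose` are standard `trtexec` flags):

```python
def trtexec_cmd(onnx_path: str, engine_path: str, fp16: bool = True) -> list[str]:
    """Build a trtexec invocation that converts an ONNX model to an engine
    with verbose logging, so the failing layer is visible in the output."""
    cmd = ["trtexec", f"--onnx={onnx_path}",
           f"--saveEngine={engine_path}", "--verbose"]
    if fp16:
        cmd.append("--fp16")  # matches the FP16 warnings seen in the log below
    return cmd

if __name__ == "__main__":
    print(" ".join(trtexec_cmd("fpn.onnx", "fpn.engine")))
```

Dropping `--fp16` would also tell you whether the subnormal-FP16 weight warnings are related to the crash.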
[03/23/2023-15:04:59] [TRT] [W] TensorRT encountered issues when converting weights between types and that could affect accuracy.
[03/23/2023-15:04:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
[03/23/2023-15:04:59] [TRT] [W] Check verbose logs for the list of affected weights.
[03/23/2023-15:04:59] [TRT] [W] - 41 weights are affected by this issue: Detected subnormal FP16 values.
[03/23/2023-15:04:59] [TRT] [W] - 21 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
deserialize the engine . . .
[03/23/2023-15:04:59] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
context_rpn <tensorrt.tensorrt.IExecutionContext object at 0x7f77521fa458>
Segmentation fault (core dumped)
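A segfault right after the execution context is printed often means the deserialized engine (or the context) was silently `None`, e.g. because of a truncated engine file or a TensorRT version/architecture mismatch, and a later call dereferences it. A minimal guard, assuming a hypothetical `fpn.engine` path; the TensorRT calls are left as comments since they require a GPU:

```python
import os

def load_engine_bytes(path: str) -> bytes:
    """Read a serialized engine file, failing loudly up front
    instead of segfaulting later on a null engine."""
    if not os.path.isfile(path):
        raise FileNotFoundError(f"engine file not found: {path}")
    with open(path, "rb") as f:
        data = f.read()
    if not data:
        raise ValueError(f"engine file is empty: {path}")
    return data

# With TensorRT available, the guarded deserialization would look like:
#   import tensorrt as trt
#   runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
#   engine = runtime.deserialize_cuda_engine(load_engine_bytes("fpn.engine"))
#   assert engine is not None, "deserialization failed (TRT version mismatch?)"
#   context = engine.create_execution_context()
#   assert context is not None, "context creation failed (insufficient memory?)"
```

Checking both return values against `None` turns the core dump into a readable error message.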
Running the C++ TensorRT version, fpn also core dumps.