
convert model core dumped #32

Open
pengcheng001 opened this issue Mar 23, 2023 · 2 comments

Comments

@pengcheng001

Hello,
when converting the models from ONNX to TensorRT engines, pfe converts successfully, but fpn does not:
[03/23/2023-15:04:59] [TRT] [W] TensorRT encountered issues when converting weights between types and that could affect accuracy.
[03/23/2023-15:04:59] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
[03/23/2023-15:04:59] [TRT] [W] Check verbose logs for the list of affected weights.
[03/23/2023-15:04:59] [TRT] [W] - 41 weights are affected by this issue: Detected subnormal FP16 values.
[03/23/2023-15:04:59] [TRT] [W] - 21 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
deserialize the engine . . .
[03/23/2023-15:04:59] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
context_rpn <tensorrt.tensorrt.IExecutionContext object at 0x7f77521fa458>
Segmentation fault (core dumped)

Running the C++ TensorRT pipeline, fpn also core dumps.
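For what it's worth, a segfault right after "deserialize the engine" often means the deserialized engine or the execution context came back as None and was still used. A minimal sketch of the kind of null checks that turn the crash into a readable error (the `rpn.trt` path and the `load_engine` helper are placeholders for illustration, not this repo's actual loading code):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(engine_path):
    # Deserialize a serialized TensorRT engine from disk.
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    # deserialize_cuda_engine returns None on failure (version mismatch,
    # corrupt file, out of GPU memory, ...); using that None later is an
    # easy way to get a segfault instead of a clear error message.
    if engine is None:
        raise RuntimeError(f"Failed to deserialize engine: {engine_path}")
    return engine

engine = load_engine("rpn.trt")  # hypothetical path
context = engine.create_execution_context()
if context is None:
    raise RuntimeError("Failed to create execution context "
                       "(often caused by insufficient GPU memory)")
```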

@HaohaoNJU
Owner

Please make sure you have sufficient GPU memory. Also, if you are not sure whether the ONNX file can be loaded by your TensorRT version, you can run `trtexec --onnx=rpn.onnx --fp16 --saveEngine=rpn.trt` to check.
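On the GPU memory side, a quick way to see how much free device memory is available before deserializing the engine is something like the following (a minimal sketch, assuming pycuda is installed, which the TensorRT Python samples commonly use; `mem_get_info` simply reports free/total bytes on the current device):

```python
import pycuda.autoinit          # creates a CUDA context on the default GPU
import pycuda.driver as cuda

free_bytes, total_bytes = cuda.mem_get_info()
print(f"GPU memory: {free_bytes / 1024**2:.0f} MiB free "
      f"of {total_bytes / 1024**2:.0f} MiB total")
```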

@sebotech

Where can I find GPU memory requirements?
