🐛 [Bug] Unable to freeze tensor of type Int64/Float64 into constant layer #2848
Comments
When looking at your reproducer, I noticed that you had
@narendasan Hi, thanks so much for your reply. If I enable the
There may be int64 types in your code (including things like index) which require the use of that setting.
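The point about index tensors is easy to reproduce in isolation: common index-producing ops in PyTorch return int64 by default, which is exactly the dtype the error complains about. A minimal illustration (plain PyTorch, no TensorRT needed):

```python
import torch

# argmax, like most index-producing ops, returns int64 by default.
# TensorRT cannot freeze an int64 tensor into a constant layer,
# hence the suggestion to enable truncate_long_and_double.
scores = torch.rand(4, 8)
idx = torch.argmax(scores, dim=-1)
print(idx.dtype)  # torch.int64
```

So even if the model's inputs and weights are all float32, an intermediate index tensor inside the attention layer can still trigger the error.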
Unable to freeze tensor of type Int64/Float64 into constant layer, try to compile model with truncate_long_and_double enabled
When I try to test the Transformer attention layer with TensorRT, I get the error above. I did check both the sample and input tensors as well as the inputs to trt.compile; there are no double tensors.
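One way to narrow this down is to audit the module itself for int64/float64 parameters and buffers before compiling. The helper below is a hypothetical sketch using only plain PyTorch; the commented-out compile call shows the fix the error message suggests (assuming the standard `torch_tensorrt.compile` API, with a made-up input shape):

```python
import torch
import torch.nn as nn

def find_wide_dtypes(module: nn.Module):
    """Hypothetical helper: list parameters/buffers whose dtype matches
    the error (int64/float64), to locate the offending tensors."""
    wide = (torch.int64, torch.float64)
    return [
        (name, t.dtype)
        for name, t in list(module.named_parameters()) + list(module.named_buffers())
        if t.dtype in wide
    ]

# The fix suggested by the error message (requires a TensorRT-enabled build):
# trt_model = torch_tensorrt.compile(
#     model,
#     inputs=[torch_tensorrt.Input((1, 16, 64))],      # shape is illustrative
#     truncate_long_and_double=True,  # casts int64->int32, float64->float32
# )
```

Note that this audit only catches stored tensors; int64 values created at trace time (e.g. by indexing ops) can still require `truncate_long_and_double`.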
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Code runs correctly
Environment
How you installed PyTorch (conda, pip, libtorch, source): pip

Additional context
tensor_rt_attn.log