Thank you for your code. I ran into a problem: when I run inference with yolov8n-pose and yolov8s-pose on Jetson, it takes about 1200 ms per image, even slower than the PyTorch model in Python, which takes only around 40 ms. This is very strange.
YOLOv5 in tensorrtx also needs to be warmed up; after testing, you will find that the first inference is much slower than the later ones.
The Jetson platform uses dynamic frequency scaling to manage power and thermal constraints. Running a few initial inferences lets the system ramp the CPU and GPU up to their optimal clock speeds, improving performance for subsequent inferences.
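A common way to account for this is to discard the first few inferences before timing. The sketch below is a minimal, hypothetical benchmark helper (the `benchmark` function and `dummy` workload are illustrative, not from this repo); in practice you would pass your wrapped TensorRT or PyTorch model call as `infer`:

```python
import time

def benchmark(infer, image, warmup=10, runs=50):
    """Average per-image latency in ms, excluding warm-up runs.

    The first few calls are discarded so that one-time costs
    (CUDA context creation, engine deserialization) and Jetson
    clock ramp-up do not skew the measurement.
    """
    for _ in range(warmup):      # warm-up: results and timings discarded
        infer(image)
    start = time.perf_counter()
    for _ in range(runs):
        infer(image)
    return (time.perf_counter() - start) / runs * 1000.0  # ms per image

# Usage with a stand-in workload; replace `dummy` with your model call.
dummy = lambda img: sum(i * i for i in range(10_000))
avg_ms = benchmark(dummy, None)
print(f"average latency: {avg_ms:.2f} ms")
```

Measured this way, the steady-state TensorRT latency should be far below the 1200 ms seen on the first call.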