
yolov8-pose Infer slowly on Jetson #1

Open
wcycqjy opened this issue Aug 1, 2024 · 3 comments

Comments

@wcycqjy

wcycqjy commented Aug 1, 2024

Thank you for your code. I've run into a problem: when I run inference with yolov8n-pose and yolov8s-pose on Jetson, it takes about 1200 ms per image, which is even slower than the PyTorch model in Python, which takes only around 40 ms. This is very strange.

@lindsayshuo
Owner

Before timing it, you need to run inference on a few empty images first. The engine needs to be warmed up before the speed increases.
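A minimal sketch of this warmup-before-timing pattern, assuming a placeholder `infer` callable standing in for the real TensorRT inference call (not the repo's actual API):

```python
import time

def benchmark(infer, warmup_runs=10, timed_runs=50):
    """Run several throwaway inferences before timing, so that clock
    ramp-up and lazy initialization do not distort the measurement."""
    for _ in range(warmup_runs):
        infer()  # warmup only: results and timings are discarded
    start = time.perf_counter()
    for _ in range(timed_runs):
        infer()
    elapsed = time.perf_counter() - start
    return elapsed / timed_runs * 1000.0  # average latency in ms/image

if __name__ == "__main__":
    # Hypothetical stand-in for a real engine call:
    dummy_infer = lambda: sum(range(10_000))
    print(f"avg latency: {benchmark(dummy_infer):.3f} ms")
```

The same idea applies in the repo's C++ code: discard the first several `context->enqueueV2(...)` calls before measuring.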

@wcycqjy
Author

wcycqjy commented Aug 1, 2024

Thank you, it works now. Can you briefly explain why it needs warmup? I've run YOLOv5 with TensorRT on Jetson before and didn't encounter this issue.

@lindsayshuo
Owner

YOLOv5 in tensorrtx also needs to be warmed up; if you measure it, you will find the first inference is noticeably slower than the later ones.
The Jetson platform may use dynamic frequency scaling to manage power and thermal constraints. Running a few initial inferences gives the system time to settle on the optimal clock speeds for the CPU and GPU, improving the performance of subsequent inferences.
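As a related note, on Jetson devices you can also pin the clocks manually so that frequency scaling matters less. A common sequence (device-specific; available power-mode IDs vary by Jetson model, so check your board's documentation first):

```shell
# Select the maximum power profile (mode IDs differ across Jetson models):
sudo nvpmodel -m 0

# Lock CPU/GPU/EMC clocks at their maximum for the current power mode:
sudo jetson_clocks

# Inspect the resulting clock state:
sudo jetson_clocks --show
```

With clocks pinned, warmup runs are still advisable (CUDA context and TensorRT initialization remain lazy), but the gap between the first and later inferences should shrink.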
