More dedicated details to optimize speed #39
Labels: help wanted
The for-loops in this function could be optimized to further speed up post-processing time: panoptic-deeplab/segmentation/model/post_processing/instance_post_processing.py, lines 123 to 179 (commit 5b3dd8c).
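One common way to remove per-instance Python loops in this kind of post-processing is to replace them with a single batched histogram. The sketch below is not the repository's actual code; it only illustrates the idea for a hypothetical majority-vote step (assigning each predicted instance the semantic class that covers most of its pixels), using torch.bincount instead of iterating over instance ids.

```python
import torch

def majority_class_per_instance(sem_seg, ins_seg, num_classes):
    """Vectorized majority vote: map each instance id to the semantic class
    covering most of its pixels, without a Python loop over instances.

    sem_seg: (H, W) long tensor of predicted semantic class ids.
    ins_seg: (H, W) long tensor of predicted instance ids (0 for background).
    Returns a 1-D tensor of length max_instance_id + 1.
    """
    flat_sem = sem_seg.reshape(-1)
    flat_ins = ins_seg.reshape(-1)
    num_instances = int(flat_ins.max()) + 1
    # Joint (instance, class) histogram computed with a single bincount call.
    joint_index = flat_ins * num_classes + flat_sem
    counts = torch.bincount(joint_index, minlength=num_instances * num_classes)
    counts = counts.reshape(num_instances, num_classes)
    return counts.argmax(dim=1)
```

Keeping the whole vote as one batched operation avoids launching a separate masked reduction per instance, which is typically where loop-based post-processing spends most of its time; the exact gain depends on image size and the number of instances.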
Hi, in my testing Panoptic-DeepLab reaches a satisfying speed in terms of model forward time. However, once post-processing and visualization are counted (and they cannot be ignored, since they should not be this slow), the overall speed drops considerably.
I wanted to ask whether you have a more dedicated optimization plan or method. It would be nice to optimize this part of the model so that it becomes more practical to convert to other frameworks for deployment, for example accelerating it with TensorRT and deploying a more lightweight model.
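For reference, here is a minimal sketch of how the forward pass and post-processing can be timed separately. The torch.cuda.synchronize() calls matter because CUDA launches are asynchronous, so without them the post-processing timer can absorb GPU work queued by the forward pass. The names in the usage comment are placeholders, not part of this repository's API.

```python
import time
import torch

def timed(fn, *args, **kwargs):
    """Wall-clock timing that synchronizes CUDA before and after the call,
    so asynchronous GPU work is attributed to the correct stage."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return result, time.perf_counter() - start

# Example usage (placeholder names):
#   outputs, forward_seconds = timed(model, image)
#   panoptic, post_seconds = timed(post_processing_fn, outputs)
```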