

GPU memory not released #56

Open
royzhang12 opened this issue Oct 11, 2022 · 4 comments

Comments

@royzhang12

Hi @wenbowen123,
thank you for your nice work. I've been trying to run BundleTrack on a collection of video clips.
To switch between clips automatically, I declare the Bundler pointer before the main video loop:
Bundler* bundler = nullptr;
and reconstruct it after switching to a new clip:
delete bundler;
bundler = new Bundler(yml, &data_loader);
The pose estimation results look good. However, I find that GPU memory is not released after switching to a new clip, and the usage keeps increasing. Once about 10 GB of GPU memory is in use, the program gets killed with no error message.

Do you have any idea of how to resolve this issue?

@wenbowen123
Owner

Hi, there might be a memory leak. I'll try to find time to investigate further after an upcoming deadline.
If you are running on offline data, perhaps you can run each video sequence via a bash script, modifying data_dir in the config file each time.

@royzhang12
Author

> Hi, there might be a memory leak. I'll try to find time to investigate further after an upcoming deadline. If you are running on offline data, perhaps you can run each video sequence via a bash script, modifying data_dir in the config file each time.

Hi @wenbowen123, thank you for the swift reply. I am testing on both online and offline data. Your suggestion is a great solution for offline data, thanks. However, it does not work for the online case.
I tried restarting the process automatically online, but that costs a lot of time and leads to very slow inference. I hope you can find the cause of the memory leak after your deadline. Many thanks.

@ChrisSun99

I also have this problem right now. I tried the workaround of splitting my dataset into smaller segments, but it would be great if there were a proper fix.

@wenbowen123
Owner

wenbowen123 commented Jul 31, 2023

FYI, we recently released a follow-up version, BundleSDF, which should not have this issue.
