Is there any way to make this faster? Can it use the GPU, PyTorch, a larger batch size, etc.? #202
Comments
I second this question. I have a 30-second 4K video and processing it took 5 hours. I have a decent computer, so I'm not sure what else can be done to optimize it. I should also add that the resulting video was 4K, but the subject in the video was pixelated and not actually high quality.
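One common cause of multi-hour runtimes like this is PyTorch silently falling back to the CPU. Before anything else, it's worth checking whether the GPU is visible at all; a minimal sketch (this is a generic check, not code from this repository):

```python
import torch

# If this prints False, inference is running on the CPU, which easily
# explains multi-hour runtimes on 4K video (missing CUDA build of torch,
# or missing/incompatible NVIDIA drivers).
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # Confirm which device PyTorch will actually use.
    print(torch.cuda.get_device_name(0))
```

If this prints `False` on a machine with an NVIDIA card, the usual fix is reinstalling PyTorch with a CUDA-enabled build rather than touching inference.py at all.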
Throw the code into ChatGPT and ask it.
Unfortunately ChatGPT is useless here; it can't understand what exactly is going on, and no amount of dumping the code into it helps either.
Start with the HTML: https://chat.openai.com/share/ffc38f7d-38d9-42f8-acb8-e1dd9cc94b90 I then provide the inference.py code, stating "(I will provide code and ask a question in the next prompt)". Asking it to optimize the code yields a few pointers. UPDATE: I just searched through GitHub and found this.
I have an i9 and a 4080, and it takes around 20 minutes, I think. But to get good output you have to edit the code to use the 1024 or 2048 model.
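The generic pattern for the edits people are describing here is: move the model to the GPU, run it in half precision, and process frames in batches instead of one at a time. A minimal sketch with a stand-in model (the real project builds its own network in inference.py; the layer, frame shapes, and batch size below are placeholders, not values from this repo):

```python
import torch
import torch.nn as nn

# Stand-in model; replace with the network inference.py actually loads.
model = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
if device == "cuda":
    model = model.half()  # fp16 roughly halves memory and often speeds up inference

# 8 dummy frames; in the real script these would come from the decoded video.
frames = torch.rand(8, 3, 64, 64)
batch_size = 4  # tune to fit GPU memory

outputs = []
with torch.no_grad():  # no gradients needed at inference time
    for i in range(0, len(frames), batch_size):
        batch = frames[i:i + batch_size].to(device)
        if device == "cuda":
            batch = batch.half()
        # Move results back to CPU float32 for video encoding.
        outputs.append(model(batch).float().cpu())

result = torch.cat(outputs)
print(result.shape)  # torch.Size([8, 3, 64, 64])
```

Batching matters because launching the model once per frame leaves the GPU idle between launches; the exact speedup depends on the model and resolution.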
Can we use ONNX or TensorRT to improve inference time?
Thanks.
I really need to make it faster, please.