High CPU usage of the resource destructor #178
Comments
@varsill thanks for the detailed issue report and analysis. Deallocation taking a lot of CPU is certainly odd. I'll check this as high priority as soon as I get some time.
As of now, no. Is it possible to share some sample code so I can reproduce this easily? I'd also like to see the context in which these operations are performed (process involvement etc.). Also, a blind suggestion if you want an immediate workaround: try wrapping all the vips operations for each frame in a separate process and see if it helps. Something like:
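The suggested workaround could be sketched like this: run the per-frame vips pipeline inside a short-lived process, so that the NIF resources it allocates are owned by that process and released when it exits. `process_frame/1` here is a hypothetical stand-in for the actual Vix operations, not part of the Vix API.

```elixir
defmodule FrameWorker do
  # Run the whole per-frame pipeline in a separate, short-lived process.
  # Any NIF resources created inside belong to that process and are
  # collected when it terminates.
  def convert(frame_path) do
    Task.async(fn -> process_frame(frame_path) end)
    |> Task.await(:timer.seconds(30))
  end

  # Hypothetical placeholder for the real work: load the JPEG with Vix,
  # convert it to the target colour space, return the frame data.
  defp process_frame(path) do
    {:ok, path}
  end
end
```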
@akash-akya Thank you very much for the quick response!
Great, thanks in advance!
Indeed, this one seems to solve the issue with uneven work distribution among the normal schedulers. When I run
Sure, I will try to prepare a sample script that reproduces that behaviour.
Hello!
Recently I've come across a problem with high CPU usage when processing images with Vix v0.31.1 in Elixir 1.16.3 with OTP 26.
My scenario looks as follows: I am generating a bunch of videos out of sequences of JPEG images in real time. That is why I need to load images from files and convert them to the YUV color space (so that they can later be encoded).
To be more precise, for each video I run the following sequence of operations multiple times per second (corresponding to the video's framerate):
Though this task is indeed quite computationally heavy, it looks to me as if it were using more resources than I would expect.
Apart from that, I am unable to scale the number of videos beyond 5 or 6, even though there are still some CPU resources left on my computer.
I did some profiling and noticed that normal scheduler 1 is pushed to its limits, with its utilization reaching 100%, in contrast to the other normal schedulers, which are almost unused. Below is a screenshot from Erlang's observer:
I thought it might have something to do with the fact that the dirty NIF resources' destructor is always run on normal scheduler 1 (https://github.com/erlang/otp/blob/51548614b588c667a4230c17e2b187474dd70d63/erts/emulator/beam/erl_nif.c#L2841), so I decided to see what operations are executed on that scheduler. I used `sample` to profile the `beam.smp` process, and it turns out that the majority of the time spent by the normal scheduler 1 thread is indeed spent in `run_resource_dtor`. Below is part of the output of my profiling results:
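To attach an external sampling profiler to the `beam.smp` process, one first needs the OS pid of the running BEAM. A small helper that could be run from IEx before starting the profiler (this uses only the standard library; the printed label is just for readability):

```elixir
# Print the OS pid of the running BEAM so an external sampling
# profiler (e.g. macOS `sample`) can attach to it.
os_pid = System.pid()
IO.puts("beam.smp OS pid: #{os_pid}")
```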
What is interesting, I can see that the part starting with `g_signal_emit` repeats itself several thousand times, apparently decreasing some reference counter.
I've tried reducing the Vips concurrency; after setting it to 1, the amount of CPU used by `run_resource_dtor` is smaller, but still significant (around 50% of the time obtained with Vips concurrency set to 8).
I have several questions concerning my issue, among them: why is the "`g_signal_emit` cycle" run that many times?
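For reference, one way to reduce the Vips concurrency as described above is via the `VIPS_CONCURRENCY` environment variable, which libvips reads at startup to size its worker thread pool. A minimal sketch, assuming the variable is set before the first vips operation runs (Vix may also expose a dedicated API for this):

```elixir
# Cap libvips's internal worker pool. This must happen before any
# vips operation executes, since libvips reads VIPS_CONCURRENCY
# when it initializes.
System.put_env("VIPS_CONCURRENCY", "1")
```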