I previously shared a setup where I read an image buffer from a Redis server, convert it into a GStreamer buffer, and feed it into a DeepStream pipeline through an app-source. The buffer is ultimately pushed with this C++ call:
DslReturnType retval = dsl_source_app_buffer_push(L"app-source", buffer);
Now, in this setup, where I'm using a player (interpipe-source) for DeepStream inference, I'm concerned about the scenario where inference runs slower than the rate at which images are pushed. In that case, would image buffers keep accumulating in some queue?
If so, is there an existing mechanism for dropping the accumulating buffers to prevent overload?
I'm currently using a fake-sink.
Thank you
It does seem to be true that buffers accumulate in the queue.
In my case, I would set max-queue-size to 1, since my goal is to run inference on the latest buffer at any time.
It would also be nice to get an alarm if the system starts falling behind.