examples/device/video_capture
(built from pico-examples for a Raspberry Pi Pico (probably works for other RP2040 targets as well))
What happened ?
Introduction
Thanks for the tinyusb project! I stumbled across it a few years ago, but I came back because of Sebastian's GB Interceptor ("Capture or stream Game Boy gameplay footage via USB without modifying the Game Boy.", https://github.com/Staacks/gbinterceptor), which is based on tinyusb's UVC implementation. I'd also like to stream generated 2-bit grayscale images using MJPEG - 1-bit black/white would even be enough for me. I wasn't too successful stripping Sebastian's code down to the bare minimum of MJPEG encoding and UVC streaming (it basically works, but the target seems to hang after a nondeterministic time), so I went for the tinyusb video_capture example.
My first goal is to regularly toggle between two pre-encoded JPEG images with a frame size different from the example's 128x96. But first I want a working example - and this is where I currently fail.
Building and running the example
Building and running the example without any changes works like a charm. However, when I stream the video to VLC, it looks like the following:
My understanding of the example
For me it looks like the example can be controlled by two additional defines: CFG_EXAMPLE_VIDEO_READONLY and CFG_EXAMPLE_VIDEO_DISABLE_MJPG -- in the default build, neither of them is defined.
In my understanding, if CFG_EXAMPLE_VIDEO_DISABLE_MJPG is defined, frame_buffer in images.h is initialized with pre-defined YCbCr values in a "2D" array that can be overwritten at runtime.
If it is not defined, several pre-encoded JPEG images are stored in color_bar_*_jpg[].
If CFG_EXAMPLE_VIDEO_READONLY is defined, the data from frame_buffer or from the pre-encoded frames is transferred directly with a call to tud_video_n_frame_xfer. When it is not defined, fill_color_bar is called and the "dynamic content" is copied to the buffer before it is transferred.
My first guess was that the Raspberry Pi Pico is not an appropriate target for streaming video data, as it may not be fast enough to generate video at a frame rate of 10 (FRAME_RATE 10).
Only when I reduce FRAME_RATE to 2 do I get a complete, non-distorted image - but then there is also no "animation":
With my assumption that the target is not fast enough, I set the define CFG_EXAMPLE_VIDEO_READONLY. My idea was that the runtime of fill_color_bar might exhaust the processor, and that only reading and copying data might make everything work (frame rate back to 10):
This looks broken. Reducing the frame rate to 1 didn't have any effect.
Setting both #define CFG_EXAMPLE_VIDEO_DISABLE_MJPG and #define CFG_EXAMPLE_VIDEO_READONLY at frame rate 10 got me this again:
From then on, keeping only one of the definitions didn't change anything:
This is more or less expected, as inside the video_task() function the #if/#else block for CFG_EXAMPLE_VIDEO_DISABLE_MJPG is always wrapped in another #ifdef CFG_EXAMPLE_VIDEO_READONLY block.
JPEG encoding could also take a lot of processing power if done at runtime, but in the example I only see pre-encoded files. (Sebastian did some nice tricks there for the 2-bit grayscale JPEG encoding, offloading processing to dedicated RP2040 hardware and DMA chains.)
Questions
What would the frames look like if everything were working fine - should an "animation" be visible? (The colored bars shifting position?)
Do you also think I am seeing this effect because of the limited computational capabilities of the Pi Pico? Or because of USB speed limitations? Other RP2040-based projects seem to have a working UVC stack (https://github.com/ArduCAM/Arducam_Mega/ ?).
Can someone please explain the structure of the video_task() function, pointing out why tud_video_n_frame_xfer() may be called twice per iteration? Is this to avoid overrun/underrun of frame transfers? I have seen this pattern copied by others (https://github.com/search?q=tud_video_n_frame_xfer&type=code):
void video_task(void)
{
  static unsigned start_ms = 0;
  static unsigned already_sent = 0;

  if (!tud_video_n_streaming(0, 0)) {
    already_sent = 0;
    frame_num = 0;
    return;
  }

  if (!already_sent) {
    already_sent = 1;
    start_ms = board_millis();
    fill_frame_buffer(frame_buffer, frame_num);
    tud_video_n_frame_xfer(0, 0, ...);
  }

  unsigned cur = board_millis();
  if (cur - start_ms < interval_ms) return; // not enough time
  if (tx_busy) return;
  start_ms += interval_ms;
  fill_frame_buffer(frame_buffer, frame_num);
  tud_video_n_frame_xfer(0, 0, ...);
}
Do you have an idea how to debug this further? For example, toggling GPIO pins at specific locations in the code and measuring the intervals with a logic analyzer to verify some of these observations and thoughts?
Sorry if this does not fulfil the requirements for a bug report and is more of a "support request". If there's a better place for this, please let me know.
Operating System
MacOS
Board
Raspberry Pi Pico (RP2040)
Thanks!
How to reproduce ?
Debug Log as txt file (LOG/CFG_TUSB_DEBUG=2)
N/A
Screenshots
No response
I have checked existing issues, discussions and documentation