video of real-time V2V-PoseNet hand tracking #64

Open
pythonsql21 opened this issue Nov 25, 2021 · 6 comments

Comments

@pythonsql21

Hi, do you have a demo video of real-time V2V-PoseNet hand tracking? Thanks.

@pythonsql21
Author

Hi @mks0601, thanks a lot for the demo video.
One more question: in the demo video, what type of camera did you use for real-time tracking, an RGB camera or an RGB-D camera?

@mks0601
Owner

mks0601 commented Nov 26, 2021

Those are results on the ICVL, MSRA, and NYU datasets, which provide depth maps. My method takes a depth map as input, so either an RGB-D camera or a depth camera is used.

@pythonsql21
Author

Hi @mks0601, for the input images used to train V2V-PoseNet (as shown in Fig. 3 of your paper), what software did you use to compute the 3D voxelized depth map (3D image)? Do you have the 3D voxelized depth maps (3D images) in your dataset?

@mks0601
Owner

mks0601 commented Nov 29, 2021

function generate_cubic_input(cubic,depthimage,refPt,newSz,angle,trans)

This function generates a 3D voxelized depth map from a 2D depth map.
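The repository's actual implementation is the `generate_cubic_input` function referenced above. As a rough illustrative sketch of the same idea in Python (all names, the camera intrinsics, the cube size, and the grid resolution here are hypothetical assumptions, not the repo's code): each valid depth pixel is back-projected to a 3D point using the camera intrinsics, then binned into an occupancy grid centered on a reference point (e.g. the hand center).

```python
import numpy as np

def voxelize_depth(depth, fx, fy, cx, cy, ref_pt, cube_size=250.0, n=88):
    """Convert a 2D depth map (mm) to an n^3 binary occupancy grid
    covering a cube of side cube_size mm centered on ref_pt."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    valid = z > 0  # ignore pixels with no depth reading

    # Back-project pixels to 3D camera coordinates (mm) via the pinhole model.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)

    # Shift points into the cube around ref_pt and discretize into voxel indices.
    idx = np.floor((pts - ref_pt + cube_size / 2.0) / (cube_size / n)).astype(int)
    inside = np.all((idx >= 0) & (idx < n), axis=1)

    grid = np.zeros((n, n, n), dtype=np.float32)
    gi = idx[inside]
    grid[gi[:, 0], gi[:, 1], gi[:, 2]] = 1.0  # mark occupied voxels
    return grid
```

The repo's version additionally applies rotation and translation augmentation (the `angle` and `trans` arguments in the signature above), which this sketch omits.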

@pythonsql21
Author

Hi @mks0601, what program did you use to visualize the 3D voxels? Did you use matplotlib? Thanks.
