A Learning-Based Data Collection Tool For Human Segmentation
Human segmentation is a challenging machine learning task: identifying and extracting the person in an image, most often with a convolutional neural network. Achieving an accurate and robust model requires large amounts of training data covering a wide variety of human poses. Collecting and labeling training data by hand takes significant time and resources. This project explores an alternative: using automation to collect and label pre-existing data from internet videos.
The target model is the DTEN ME model, which powers the virtual background feature in Zoom meetings.
OpenPose is used to filter the video for suitable frames, in particular frames containing a single person with the full body visible. Mask R-CNN serves as the teacher model that generates training labels. To find the images on which the ME model performs poorly, the ME masks are compared against the Mask R-CNN masks. The result is a set of images and masks that can be used as training data.
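The mask-comparison step can be sketched as an intersection-over-union (IoU) check between the two masks. This is a minimal illustration, assuming both masks are binary NumPy arrays of the same shape; the function names and the `iou_threshold` value are assumptions, not taken from the project's actual code.

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(a, b).sum()) / float(union)

def is_hard_example(me_mask: np.ndarray,
                    teacher_mask: np.ndarray,
                    iou_threshold: float = 0.8) -> bool:
    """Flag frames where the ME mask disagrees with the teacher
    (Mask R-CNN) mask; these are candidates for new training data.
    The 0.8 threshold is an illustrative choice."""
    return mask_iou(me_mask, teacher_mask) < iou_threshold
```

Frames flagged by `is_hard_example` would be the ones saved, paired with their Mask R-CNN masks as labels.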
A full report of the system design and implementation details can be found in doc.
Examples of saved training data. In each image, the bottom left shows the Mask R-CNN mask and the bottom right shows the ME mask.
This project relies on OpenPose and Mask R-CNN and all of their dependencies. Instructions on how to set up each are found in their respective directories here.
Documentation on how to use the scripts is located in doc.