
Glasses Detector



About

A package for processing images with different types of glasses and their parts. It provides a quick way to use pre-trained models for three kinds of tasks, each divided into multiple categories, for instance, classification of sunglasses or segmentation of glasses frames.


Task           | Categories
---------------|----------------------------------------------------
Classification | 👓 transparent, 🕶️ opaque, 🥽 any, shadows
Detection      | 🤓 worn, 👓 standalone, 👀 eye-area
Segmentation   | 😎 full, 🖼️ frames, 🦿 legs, 🔍 lenses, 👥 shadows

Note: refer to Glasses Detector Features for visual examples.

Installation

Important

Python 3.12 or newer is required. You may also want to install PyTorch in advance (select the Nightly build for compatibility) to choose the specific configuration for your device and environment.
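Once PyTorch is installed, you can quickly verify that the build you selected actually sees your device, for example:

import torch

# Print the installed PyTorch version and whether a CUDA device is visible
print(torch.__version__)
print(torch.cuda.is_available())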

Pip Package

If you only need the library with pre-trained models, just install the pip package and see Quick Start for usage (also check Glasses Detector Installation for more details):

pip install glasses-detector

You can also install it from source:

git clone https://github.com/mantasu/glasses-detector
cd glasses-detector && pip install .

Local Project

If you want to train your own models on the provided datasets (or on some other datasets), just clone the project and install the training requirements, then see the Running section for how to run training and testing.

git clone https://github.com/mantasu/glasses-detector
cd glasses-detector && pip install -r requirements.txt

You can create a virtual environment for the packages via venv; however, if you have conda, you can simply use it to create a new environment, for example:

conda create -n glasses-detector python=3.12
conda activate glasses-detector 

To set up the datasets, refer to the Data section.

Quick Start

Command Line

You can run predictions via the command line. For example, classification of a single image and segmentation of images inside a directory can be performed by running:

glasses-detector -i path/to/img.jpg -t classification -d cuda -f int # Prints 1 or 0
glasses-detector -i path/to/img_dir -t segmentation -f mask -e .jpg  # Generates masks

Tip

You can also specify options such as --output-path, --size, and --batch-size. Check the Glasses Detector CLI and Command Line Examples for more details.

Python Script

You can import the package and its models in a Python script for more flexibility. Here is an example of how to classify people wearing sunglasses:

from glasses_detector import GlassesClassifier

# Generates a CSV with each line "<img_name.jpg>,<True|False>"
classifier = GlassesClassifier(size="small", kind="sunglasses")
classifier.process_dir("path/to/dir", "path/to/preds.csv", format="bool")
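For a single image, the same classifier can be used without writing a CSV. The calls below are only a sketch: process_file and predict are assumed here to mirror process_dir, so check the Glasses Detector API for the exact method names and signatures:

from glasses_detector import GlassesClassifier

classifier = GlassesClassifier(size="small", kind="sunglasses")

# Assumed single-file counterpart of process_dir: writes one prediction to a text file
classifier.process_file("path/to/img.jpg", "path/to/pred.txt", format="bool")

# Assumed in-memory variant: returns the prediction instead of writing it to disk
print(classifier.predict("path/to/img.jpg"))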

And here is a more efficient way to process a directory for the detection task (only a single bounding box per image is currently supported):

from glasses_detector import GlassesDetector

# Generates dir_preds with bboxes as .txt for each img
detector = GlassesDetector(kind="eyes", device="cuda")
detector.process_dir("path/to/dir", ext=".txt", batch_size=64)
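To visualize one of the generated predictions, a small helper like the one below can be used. It is only an illustrative sketch that assumes each .txt file contains a single box as four pixel coordinates (x_min y_min x_max y_max); check the detector documentation for the actual output format:

from PIL import Image, ImageDraw

# Hypothetical paths and assumed output format: one "x_min y_min x_max y_max" line per .txt file
with open("path/to/dir_preds/img.txt") as f:
    x_min, y_min, x_max, y_max = map(float, f.read().split())

image = Image.open("path/to/dir/img.jpg")
ImageDraw.Draw(image).rectangle((x_min, y_min, x_max, y_max), outline="red", width=3)
image.save("img_with_bbox.jpg")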

Tip

Again, there are many more options that can be specified, for instance, output_size and pbar. It is also possible to output the results directly or store them in a variable. See Glasses Detector API and Python Script Examples for more details.
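Segmentation follows the same pattern as the examples above. Below is a minimal sketch, assuming GlassesSegmenter shares the process_dir interface and accepts the mask format used by the CLI:

from glasses_detector import GlassesSegmenter

# Assumed to mirror the classifier/detector API: writes one mask image per input image
segmenter = GlassesSegmenter(size="medium", kind="frames", device="cuda")
segmenter.process_dir("path/to/dir", format="mask", ext=".png", batch_size=32)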

Demo

Feel free to play around with some demo image files. For example, after installing through pip, you can run:

git clone https://github.com/mantasu/glasses-detector && cd glasses-detector/data
glasses-detector -i demo -o demo_labels.csv --task classification:eyeglasses

You can also check out the demo notebook, which can also be accessed via Google Colab.

Data

Before downloading the datasets, please install the unrar package, for example, if you're using Ubuntu (if you're using Windows, just install WinRAR):

sudo apt-get install unrar

Also, ensure the scripts are executable:

chmod +x scripts/*

Once you have downloaded all the datasets (or just the ones that interest you), process them:

python scripts/preprocess.py --root data -f -d

Tip

You can also specify only certain tasks, e.g., --tasks classification segmentation would ignore detection datasets. It is also possible to change the image size and the val/test split fractions; use --help to see all the available CLI options.

After processing all the datasets, your data directory should have the following structure:

└── data                    # The data directory (root) under project
    ├── classification
    │   ├── anyglasses      # Datasets with any glasses as positives
    │   ├── eyeglasses      # Datasets with transparent glasses as positives
    │   ├── shadows         # Datasets with visible glasses frames shadows as positives
    │   └── sunglasses      # Datasets with semi-transparent/opaque glasses as positives 
    │
    ├── detection
    │   ├── eyes            # Datasets with bounding boxes for eye area 
    │   ├── solo            # Datasets with bounding boxes for standalone glasses
    │   └── worn            # Datasets with bounding boxes for worn glasses
    │
    └── segmentation
        ├── frames          # Datasets with masks for glasses frames
        ├── full            # Datasets with masks for full glasses (frames + lenses)
        ├── legs            # Datasets with masks for glasses legs (part of frames)
        ├── lenses          # Datasets with masks for glasses lenses
        ├── shadows         # Datasets with masks for eyeglasses frames cast shadows
        └── smart           # Datasets with masks for glasses frames and lenses if opaque

Almost every dataset has train, val, and test sub-directories. For classification datasets, these splits are further divided into <category> and no_<category> sub-sub-directories; for detection, into images and annotations; and for segmentation, into images and masks. By default, all images are 256x256.
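After preprocessing, a quick way to sanity-check the result is to count the images per task, category, and split. The script below is only an illustrative helper (it is not part of the repository) and assumes the layout shown above with .jpg or .png images:

from collections import Counter
from pathlib import Path

root = Path("data")
counts = Counter()

# Walk data/<task>/<category>/<dataset>/<split>/... and count the image files per split
for img in root.rglob("*"):
    if img.suffix.lower() not in {".jpg", ".png"}:
        continue
    parts = img.relative_to(root).parts
    if len(parts) >= 4:
        task, category, _dataset, split = parts[:4]
        counts[(task, category, split)] += 1

for key in sorted(counts):
    print(*key, counts[key])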

Note

Instead of downloading the datasets manually one by one, you can download a Kaggle Dataset that already contains everything.

Download Instructions

Download the following files and place them all inside the cloned project under the data directory, which will be your data --root (note that some datasets require a free Kaggle account):

Classification datasets:

  1. From CMU Face Images download cmu+face+images.zip
  2. From Specs on Faces download original images.rar and metadata.rar
  3. From Sunglasses / No Sunglasses download archive.zip and rename to sunglasses-no-sunglasses.zip
  4. From Glasses and Coverings download archive.zip and rename to glasses-and-coverings.zip
  5. From Face Attributes Grouped download archive.zip and rename to face-attributes-grouped.zip
  6. From Face Attributes Extra download archive.zip and rename to face-attributes-extra.zip
  7. From Glasses No Glasses download archive.zip and rename to glasses-no-glasses.zip
  8. From Indian Facial Database download An Indian facial database highlighting the Spectacle.zip
  9. From Face Attribute 2 download FaceAttribute 2.v2i.multiclass.zip (choose v2 and Multi Label Classification format)
  10. From Glasses Shadows Synthetic download archive.zip and rename to glasses-shadows-synthetic.zip

Detection datasets:

  1. From AI Pass download AI-Pass.v6i.coco.zip (choose v6 and COCO format)
  2. From PEX5 download PEX5.v4i.coco.zip (choose v4 and COCO format)
  3. From Sunglasses Glasses Detect download sunglasses_glasses_detect.v1i.coco.zip (choose v1 and COCO format)
  4. From Glasses Detection download Glasses Detection.v2i.coco.zip (choose v2 and COCO format)
  5. From Glasses Image Dataset download glasses.v1-glasses_2022-04-01-8-12pm.coco.zip (choose v1 and COCO format)
  6. From EX07 download Ex07.v1i.coco.zip (choose v1 and COCO format)
  7. From No Eyeglass download no eyeglass.v3i.coco.zip (choose v3 and COCO format)
  8. From Kacamata-Membaca download Kacamata-Membaca.v1i.coco.zip (choose v1 and COCO format)
  9. From Only Glasses download onlyglasses.v1i.coco.zip (choose v1 and COCO format)

Segmentation datasets:

  1. From CelebA Mask HQ download CelebAMask-HQ.zip and from CelebA Annotations download annotations.zip
  2. From Glasses Segmentation Synthetic Dataset download archive.zip and rename to glasses-segmentation-synthetic.zip
  3. From Face Synthetics Glasses download archive.zip and rename to face-synthetics-glasses.zip
  4. From Eyeglass download eyeglass.v10i.coco-segmentation.zip (choose v10 and COCO Segmentation format)
  5. From Glasses Lenses Segmentation download glasses lenses segmentation.v7-sh-improvments-version.coco.zip (choose v7 and COCO format)
  6. From Glasses Lens download glasses lens.v6i.coco-segmentation.zip (choose v6 and COCO Segmentation format)
  7. From Glasses Segmentation Cropped Faces download glasses segmentation cropped faces.v2-segmentation_models_pytorch-s_1st_version.coco-segmentation.zip (choose v2 and COCO Segmentation format)
  8. From Spects Segmentation download Spects Segementation.v3i.coco-segmentation.zip (choose v3 and COCO Segmentation format)
  9. From KINH download kinh.v1i.coco.zip (choose v1 and COCO format)
  10. From Capstone Mini 2 download CAPSTONE_MINI_2.v1i.coco-segmentation.zip (choose v1 and COCO Segmentation format)
  11. From Sunglasses Color Detection download Sunglasses Color detection roboflow.v2i.coco-segmentation.zip (choose v2 and COCO Segmentation format)
  12. From Sunglasses Color Detection 2 download Sunglasses Color detection 2.v3i.coco-segmentation.zip (choose v3 and COCO Segmentation format)
  13. From Glass Color download Glass-Color.v1i.coco-segmentation.zip (choose v1 and COCO Segmentation format)

The table below shows which datasets are used for which tasks and their categories. Feel free to pick only the ones that interest you.

Task           | Category   | Dataset IDs
---------------|------------|--------------------------------------
Classification | anyglasses | 1, 3, 4, 5, 6, 7, 8, 9, 14, 15, 16
Classification | eyeglasses | 2, 4, 5, 6, 11, 12, 13, 14, 15
Classification | sunglasses | 1, 2, 3, 4, 5, 6, 11, 12, 13, 14, 15
Classification | shadows    | 10
Detection      | eyes       | 14, 15, 16, 17
Detection      | solo       | 18, 19
Detection      | worn       | 11, 12, 13, 14, 15, 16
Segmentation   | frames     | 21, 23
Segmentation   | full       | 20, 27, 28
Segmentation   | legs       | 29, 30, 31
Segmentation   | lenses     | 23, 24, 25, 26, 30, 31, 32
Segmentation   | shadows    | 21
Segmentation   | smart      | 22

Running

To run custom training and testing, it is advised to first familiarize yourself with how PyTorch Lightning works and to briefly check its CLI documentation. In particular, take into account which arguments are accepted by the Trainer class and how to customize your own optimizer and scheduler via the command line. Prerequisites:

  1. Clone the repository
  2. Install the requirements
  3. Download and preprocess the data

Training

You can run simple training as follows (the values shown are the defaults):

python scripts/run.py fit --task classification:anyglasses --size medium 

You can customize options like --batch-size and --num-workers, as well as trainer and checkpoint arguments:

python scripts/run.py fit --batch-size 64 --trainer.max_epochs 300 --checkpoint.dirname ckpt

It is also possible to override the default optimizer and scheduler:

python scripts/run.py fit --optimizer Adam --optimizer.lr 1e-3 --lr_scheduler CosineAnnealingLR

Testing

To run testing, specify the trained model and the path to its checkpoint:

python scripts/run.py test -t classification:anyglasses -s small --ckpt_path path/to/model.ckpt

Alternatively, you can specify a .pth file to pre-load the model with weights:

python scripts/run.py test -t classification:anyglasses -s small -w path/to/weights.pth

If you get the warning "No positive samples in targets, true positive value should be meaningless" (a UserWarning), increase the batch size.

Credits

For references and citation, please see Glasses Detector Credits.