Task Adaptation for Image Classification

Installation

The example scripts support all models in PyTorch-Image-Models (timm). You need to install timm to use them:

pip install timm
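
Any architecture name registered in timm should then be a valid backbone choice. As a quick sanity check that the installation worked (a minimal sketch; the model name `resnet50` is just an example):

import timm

# Instantiate any architecture registered in timm; pretrained=True downloads
# the corresponding ImageNet weights on first use.
model = timm.create_model('resnet50', pretrained=True)
print(model.default_cfg['input_size'])  # e.g. (3, 224, 224)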

Dataset

The following datasets can be downloaded automatically, CUB200 among them (see the example scripts below).

You need to prepare Retinopathy and Resisc45 manually if you want to use them; prepare each following its documentation.
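
Automatic download follows the usual PyTorch dataset convention of a `download` flag; torchvision's own datasets illustrate the pattern. Whether this repository's dataset classes expose exactly this signature is an assumption:

from torchvision import datasets

# Standard PyTorch convention: pass download=True and the dataset is fetched
# to `root` on first use. The auto-downloadable datasets in this repo are
# expected to behave the same way (an assumption, not confirmed API).
train_dataset = datasets.DTD(root='data/dtd', split='train', download=True)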

Supported Methods

Supported methods include (see the Citation section for the corresponding papers; a sketch of one of them follows this list):

  - Learning without Forgetting (LWF, ECCV 2016)
  - Explicit Inductive Bias for Transfer Learning (L2-SP, ICML 2018)
  - Batch Spectral Shrinkage (BSS, NeurIPS 2019)
  - Deep Learning Transfer using Feature Map with Attention (DELTA, ICLR 2019)
  - Stochastic Normalization (StochNorm, NeurIPS 2020)
  - Co-Tuning for Transfer Learning (NeurIPS 2020)
  - Bi-tuning of Pre-trained Representations (Bi-Tuning)
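
As an example of what these regularization-based methods do, here is a minimal re-implementation sketch of the L2-SP penalty from the ICML 2018 paper above: instead of decaying weights toward zero, fine-tuned weights are pulled toward their pre-trained starting point. This is an illustration of the technique, not the code in this repository:

import torch

def l2_sp_penalty(model, pretrained_state, alpha=0.01):
    """L2-SP: penalize the distance of the weights from their pre-trained
    values (the 'starting point') instead of from zero."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in pretrained_state:
            penalty = penalty + torch.sum((param - pretrained_state[name]) ** 2)
    return alpha * penalty

# Usage: snapshot the pre-trained weights once, then add the penalty
# to the task loss at every training step.
# pretrained_state = {k: v.clone().detach() for k, v in model.state_dict().items()}
# loss = criterion(model(x), y) + l2_sp_penalty(model, pretrained_state)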

Experiment and Results

Fine-tune the supervised pre-trained model

The shell files give the scripts to reproduce the supervised pre-trained benchmarks with the specified hyper-parameters. For example, to run vanilla fine-tuning on CUB200, use the following script:

# Fine-tune ResNet50 on CUB200.
# Assume you have put the dataset under the path `data/cub200`,
# or are willing to let the script download it automatically to this path.
CUDA_VISIBLE_DEVICES=0 python baseline.py data/cub200 -d CUB200 -sr 100 --seed 0 --finetune --log logs/baseline/cub200_100
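
Conceptually, vanilla fine-tuning replaces the classifier head of the pre-trained backbone and trains all weights on the target task. A minimal, self-contained sketch with a torchvision ResNet-50 (this mirrors the idea, not the exact contents of `baseline.py`):

import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and swap in a new head
# for the 200 CUB classes. All parameters are then trained jointly.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 200)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step, given a batch (images, labels) from the CUB200 loader:
# logits = model(images)
# loss = criterion(logits, labels)
# optimizer.zero_grad(); loss.backward(); optimizer.step()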

Fine-tune the unsupervised pre-trained model

Take MoCo as an example.

  1. Download MoCo pretrained checkpoints from https://github.com/facebookresearch/moco
  2. Convert the MoCo checkpoints to the standard PyTorch format (a sketch of what this conversion does follows these steps):
mkdir checkpoints
python convert_moco_to_pretrained.py checkpoints/moco_v1_200ep_pretrain.pth.tar checkpoints/moco_v1_200ep_backbone.pth checkpoints/moco_v1_200ep_fc.pth
  3. Start training:
CUDA_VISIBLE_DEVICES=0 python bi_tuning.py data/cub200 -d CUB200 -sr 100 --seed 0 --lr 0.1 -i 2000 --lr-decay-epochs 3 6 9 --epochs 12 \
  --log logs/moco_pretrain_bi_tuning/cub200_100 --pretrained checkpoints/moco_v1_200ep_backbone.pth
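
For reference, MoCo checkpoints store the query encoder under keys prefixed with `module.encoder_q.`, so conversion amounts to stripping that prefix and splitting the backbone from the projection head. A hedged sketch of that idea (the actual `convert_moco_to_pretrained.py` may differ in details):

import torch

# Sketch of MoCo-checkpoint conversion. MoCo v1/v2 checkpoints keep the
# query encoder's weights under keys like 'module.encoder_q.conv1.weight'
# and its projection head under 'module.encoder_q.fc.*'.
ckpt = torch.load('checkpoints/moco_v1_200ep_pretrain.pth.tar', map_location='cpu')

backbone, fc = {}, {}
for key, value in ckpt['state_dict'].items():
    if key.startswith('module.encoder_q.'):
        new_key = key[len('module.encoder_q.'):]
        (fc if new_key.startswith('fc.') else backbone)[new_key] = value

torch.save(backbone, 'checkpoints/moco_v1_200ep_backbone.pth')
torch.save(fc, 'checkpoints/moco_v1_200ep_fc.pth')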

The shell files also give the scripts to reproduce the unsupervised pre-trained benchmarks with the specified hyper-parameters.

Citation

If you use these methods in your research, please consider citing:

@inproceedings{LWF,
  title={Learning without Forgetting},
  author={Li, Zhizhong and Hoiem, Derek},
  booktitle={ECCV},
  year={2016}
}

@inproceedings{L2SP,
  title={Explicit inductive bias for transfer learning with convolutional networks},
  author={Li, Xuhong and Grandvalet, Yves and Davoine, Franck},
  booktitle={ICML},
  year={2018}
}

@inproceedings{BSS,
  title={Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning},
  author={Chen, Xinyang and Wang, Sinan and Fu, Bo and Long, Mingsheng and Wang, Jianmin},
  booktitle={NeurIPS},
  year={2019}
}

@inproceedings{DELTA,
  title={DELTA: Deep learning transfer using feature map with attention for convolutional networks},
  author={Li, Xingjian and Xiong, Haoyi and Wang, Hanchao and Rao, Yuxuan and Liu, Liping and Chen, Zeyu and Huan, Jun},
  booktitle={ICLR},
  year={2019}
}

@inproceedings{StocNorm,
  title={Stochastic Normalization},
  author={Kou, Zhi and You, Kaichao and Long, Mingsheng and Wang, Jianmin},
  booktitle={NeurIPS},
  year={2020}
}

@inproceedings{CoTuning,
  title={Co-Tuning for Transfer Learning},
  author={You, Kaichao and Kou, Zhi and Long, Mingsheng and Wang, Jianmin},
  booktitle={NeurIPS},
  year={2020}
}

@article{BiTuning,
  title={Bi-tuning of Pre-trained Representations},
  author={Zhong, Jincheng and Wang, Ximei and Kou, Zhi and Wang, Jianmin and Long, Mingsheng},
  journal={arXiv preprint arXiv:2011.06182},
  year={2020}
}