ForeverPs/Robust-Classification

Gradient Concealment: Free Lunch for Defending Adversarial Attacks

Rank 2 in the CVPR Robust Classification Challenge


Paper | PPT | Checkpoints | Homepage | Certificate

Official PyTorch Implementation

Sen Pei, Jiaxi Sun, Xin Zhang, Qing Li, Shuo Li
Institute of Automation, Chinese Academy of Sciences


Table of Contents
  1. About the proposed Gradient Concealment Module
  2. About the CVPR Robust Classification Challenge
  3. Reference
  4. Citation
  5. Acknowledgement

About the proposed Gradient Concealment Module

Installation

  • PyTorch (no strict version constraint).
  • Place GradientConcealment() from model/robust_layer.py as the top layer of your model, i.e. apply it first in forward().
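Usage might look like the sketch below. The GradientConcealment internals here are a simplified stand-in (identity forward pass, scrambled backward pass), not the official implementation from model/robust_layer.py; only the wiring (GCM as the top layer in forward()) follows the instructions above.

```python
import torch
import torch.nn as nn


class _ConcealGrad(torch.autograd.Function):
    """Identity in the forward pass; replaces the true input gradient
    with same-norm noise in the backward pass, hiding the attack
    direction from gradient-based adversaries. Simplified stand-in."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        noise = torch.randn_like(grad_output)
        return noise * grad_output.norm() / (noise.norm() + 1e-12)


class GradientConcealment(nn.Module):
    def forward(self, x):
        return _ConcealGrad.apply(x)


class RobustClassifier(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.gcm = GradientConcealment()  # top layer, applied first
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(self.gcm(x))
```

Because the forward pass is unchanged, clean accuracy is unaffected; only attackers that differentiate through the model see the concealed gradient.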

Highlights of Gradient Concealment Module

  • Parameter-free, training-free, plug-and-play.
  • Promising performance in both classification task and adversarial defense.
  • Superior generalization across different model architectures, including CNN-based models and attention-based models.

Some Attack Robustness Results on ImageNet

  • Pre-trained models can be downloaded via the Checkpoints link above.
  • All attack columns report the AR (Attack Robustness) metric.
| Model | Method | Top-1 Acc | FGSM Linf=8/255 | PGD L1=1600 | PGD L2=8.0 | PGD Linf=8/255 | C&W L2=8.0 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet-50 | Vanilla | 77.89 | 31.77 | 0.01 | 0.01 | 0.00 | 0.21 |
| ResNet-50 | GCM | 78.57 | 95.18 | 94.82 | 94.41 | 97.38 | 95.11 |
| WideResNet-50 | Vanilla | 78.21 | 20.88 | 0.36 | 0.61 | 0.50 | 0.21 |
| WideResNet-50 | GCM | 78.08 | 96.06 | 94.46 | 94.51 | 97.69 | 95.66 |
| DenseNet-121 | Vanilla | 74.86 | 16.82 | 0.04 | 0.05 | 0.06 | 0.12 |
| DenseNet-121 | GCM | 74.71 | 94.98 | 94.31 | 94.08 | 97.16 | 95.49 |
| EfficientNet-B4 | Vanilla | 71.52 | 1.23 | 0.36 | 0.28 | 0.20 | 1.88 |
| EfficientNet-B4 | GCM | 71.76 | 94.68 | 89.95 | 90.87 | 97.97 | 93.07 |
| ViT-B/16 | Vanilla | 79.46 | 15.86 | 0.00 | 0.00 | 0.00 | 0.90 |
| ViT-B/16 | GCM | 79.47 | 92.24 | 94.94 | 95.07 | 98.24 | 93.31 |
| Swin-Transformer-S | Vanilla | 82.93 | 16.93 | 0.20 | 0.00 | 0.00 | 0.76 |
| Swin-Transformer-S | GCM | 82.79 | 94.38 | 90.71 | 91.04 | 98.77 | 92.31 |

About the CVPR Robust Classification Challenge

Conclusion

  • The backbone matters: ConvNeXt outperforms SE-ResNet.
  • Randomization is effective for defending against adversarial attacks.
  • Data augmentation is vital for improving classification performance and reducing overfitting.
  • Gradient concealment dramatically improves the AR metric of classifiers in the presence of perturbed images.

Datasets

  • train_phase1/images/ : 22987 images for training
  • train_phase1/label.txt : ground-truth file
  • track1_test1/ : 20000 images for testing
  • train_p2 : 127390 images within 100 categories

Data Augmentation Schemes

data_aug.py currently supports the following operations:

  • PepperSaltNoise
  • ColorPointNoise
  • GaussianNoise
  • Mosaic in black / gray / white / color
  • RGBShuffle / ColorJitter
  • Rotate
  • HorizontalFlip / VerticalFlip
  • RandomCut
  • MotionBlur / GaussianBlur / ConventionalBlur
  • Rain / Snow
  • Extend
  • BlockShuffle
  • LocalShuffle (for learning local spatial features)
  • RandomPadding (for defending against adversarial attacks)


Image Pre-Processing

  • transforms.Resize(256)
  • transforms.RandomResizedCrop(224)
  • Data augmentation schemes

Adversarial Defense Schemes

  • Adversarial training using the fast gradient sign method (FGSM).
  • Resizing and padding the input images to mitigate adversarial effects.
  • Gradient concealment module for hiding the vulnerable direction of the classifier's gradient.
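The FGSM step used for adversarial training can be sketched as below. This is the standard single-step formulation (Goodfellow et al.); how the repo wires the resulting examples into its regularization loss may differ.

```python
import torch
import torch.nn as nn


def fgsm_example(model, loss_fn, x, y, eps=8 / 255):
    """Craft an FGSM adversarial example: one signed-gradient
    ascent step of size eps on the loss, clamped back to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    x_adv = x + eps * x.grad.sign()   # step along the gradient sign
    return x_adv.clamp(0.0, 1.0).detach()
```

For adversarial training, a fraction of each batch is typically replaced with `fgsm_example` outputs before the usual loss is computed.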

Architectures

  • ConvNeXt (tiny) + FC + FGSM regularization + GCM (Gradient Concealment Module) + Randomization
  • ConvNeXt (tiny) + ML Decoder + FGSM regularization + GCM (Gradient Concealment Module) + Randomization

Training Details

  • Training is done from scratch in a two-stage manner; we provide our checkpoints.
  • First stage: train ConvNeXt without ResizedPaddingLayer.
  • Second stage: finetune ConvNeXt with ResizedPaddingLayer.

DDP Training

  • python -m torch.distributed.launch --nproc_per_node=5 train.py --batch_size 64 --n_gpus=5
  • If you have more GPUs, adjust --nproc_per_node and --n_gpus accordingly.

Reference


Citation

@article{Pei2022Grad,
  title={Gradient Concealment: Free Lunch for Defending Adversarial Attacks},
  author={Sen Pei and Jiaxi Sun and Xiaopeng Zhang and Gaofeng Meng},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  year={2022}
}

Acknowledgement

  • This work was done during the author's internship at ByteDance.
  • The author would like to thank Xiaojie Jin and Ye Yuan for providing computing resources and for vital discussions.
