Pre-trained backbones: ResNet-50 | ResNet-101 | DeepLabv2-ResNet-101

Datasets: Pascal JPEGImages | Pascal SegmentationClass | Cityscapes leftImg8bit | Cityscapes gtFine
```
├── ./pretrained
    ├── resnet50.pth
    ├── resnet101.pth
    └── deeplabv2_resnet101_coco_pretrained.pth

├── [Your Pascal Path]
    ├── JPEGImages
    └── SegmentationClass

├── [Your Cityscapes Path]
    ├── leftImg8bit
    └── gtFine
```
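Before training, it can help to confirm this layout is in place. The snippet below is an optional sanity check, not part of the repository; the bracketed dataset roots are placeholders to be replaced with your own paths.

```bash
# Optional sanity check: verify the pre-trained weights and dataset folders exist.
# [Your Pascal Path] and [Your Cityscapes Path] are placeholders -- substitute your own roots.
for f in ./pretrained/resnet50.pth ./pretrained/resnet101.pth ./pretrained/deeplabv2_resnet101_coco_pretrained.pth; do
    [ -f "$f" ] || echo "Missing pre-trained weight: $f"
done

for d in "[Your Pascal Path]/JPEGImages" "[Your Pascal Path]/SegmentationClass" \
         "[Your Cityscapes Path]/leftImg8bit" "[Your Cityscapes Path]/gtFine"; do
    [ -d "$d" ] || echo "Missing dataset folder: $d"
done
```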
```bash
export semi_setting='pascal/1_8/split_0'

CUDA_VISIBLE_DEVICES=0,1 python -W ignore main.py \
  --dataset pascal --data-root [Your Pascal Path] \
  --batch-size 16 --backbone resnet50 --model deeplabv3plus \
  --labeled-id-path dataset/splits/$semi_setting/labeled.txt \
  --unlabeled-id-path dataset/splits/$semi_setting/unlabeled.txt \
  --pseudo-mask-path outdir/pseudo_masks/$semi_setting \
  --save-path outdir/models/$semi_setting
```
This script runs our ST framework. To run ST++, add `--plus --reliable-id-path outdir/reliable_ids/$semi_setting` (the full command is shown below).
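Putting the two pieces together, an ST++ run on the same split looks like this (identical placeholder paths and settings as above, with only the two extra options appended):

```bash
export semi_setting='pascal/1_8/split_0'

CUDA_VISIBLE_DEVICES=0,1 python -W ignore main.py \
  --dataset pascal --data-root [Your Pascal Path] \
  --batch-size 16 --backbone resnet50 --model deeplabv3plus \
  --labeled-id-path dataset/splits/$semi_setting/labeled.txt \
  --unlabeled-id-path dataset/splits/$semi_setting/unlabeled.txt \
  --pseudo-mask-path outdir/pseudo_masks/$semi_setting \
  --save-path outdir/models/$semi_setting \
  --plus --reliable-id-path outdir/reliable_ids/$semi_setting
```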
The DeepLabv2 MS COCO pre-trained model is borrowed and converted from AdvSemiSeg. The image partitions are borrowed from Context-Aware-Consistency and PseudoSeg. Part of the training hyper-parameters and network structures are adapted from PyTorch-Encoding. The strong data augmentations are borrowed from MoCo v2 and PseudoSeg.
- AdvSemiSeg: https://github.com/hfslyc/AdvSemiSeg
- Context-Aware-Consistency: https://github.com/dvlab-research/Context-Aware-Consistency
- PseudoSeg: https://github.com/googleinterns/wss
- PyTorch-Encoding: https://github.com/zhanghang1989/PyTorch-Encoding
- MoCo: https://github.com/facebookresearch/moco
- OpenSelfSup: https://github.com/open-mmlab/OpenSelfSup
Many thanks for their great work!