This repository includes the PyTorch implementation for the following paper:
SalFBNet: Learning Pseudo-Saliency Distribution via Feedback Convolutional Networks, IMAVIS 2022.
Guanqun Ding, Nevrez Imamoglu, Ali Caglayan, Masahiro Murakawa, Ryosuke Nakamura
(Paper)
(arXiv)
You can set up the environment manually with the following commands:
```bash
git clone https://github.com/gqding/SalFBNet.git
conda create -n salfbnet python=3.8
conda activate salfbnet
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
pip install scikit-learn scipy tensorboard tqdm
pip install torchSummaryX
```
Alternatively, you can create the environment from the yml file with:
```bash
conda env create -f environment.yml
```
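After creating the environment, a quick check like the following (not part of the repository) can confirm that PyTorch and the CUDA 11.3 build are visible:

```python
# Sanity check (not from the original repo): verify the installed packages.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
# Should print True when the cudatoolkit=11.3 build and a compatible GPU are present.
print("CUDA available:", torch.cuda.is_available())
```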
Our released SalFBNet models
We released our pretrained SalFBNet models on Google Drive. The shared models were first trained on our Pseudo-Saliency Dataset and then fine-tuned on SALICON and MIT1003 for evaluation on the MIT300 benchmark.
Our SalFBNet Pseudo Saliency Dataset
We released our PseudoSaliency dataset on this ABCI Datasets page;
We also show how to use our dataset for model training on this Usage page.
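For a rough idea of how a training loop might consume the data, here is a minimal PyTorch sketch. The `Images/<subset>/images` layout follows the test command shown below; the `Maps/<subset>/maps` folder name for the pseudo labels is an assumption, so refer to the Usage page for the actual structure and the training/validation lists.

```python
# Minimal sketch of loading Pseudo-Saliency image/label pairs with PyTorch.
# The "Maps/<subset>/maps" folder name for pseudo ground truth is an assumption;
# see the Usage page for the official layout.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PseudoSaliencyPairs(Dataset):
    def __init__(self, root, subset="ECSSD", size=(256, 256)):
        self.img_dir = os.path.join(root, "Images", subset, "images")
        self.map_dir = os.path.join(root, "Maps", subset, "maps")  # assumed name
        self.names = sorted(os.listdir(self.img_dir))
        self.to_tensor = transforms.Compose([transforms.Resize(size),
                                             transforms.ToTensor()])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.img_dir, name)).convert("RGB")
        base = os.path.splitext(name)[0]
        smap = Image.open(os.path.join(self.map_dir, base + ".png")).convert("L")
        return self.to_tensor(image), self.to_tensor(smap)
```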
Our testing saliency results on public datasets
You can download our testing saliency results from this Google Drive.
After downloading the pretrained models, you can run the test script with:
```bash
sh run_test.sh
```
Alternatively, you can modify the script to test different image folders and models (SalFBNet_Res18 or SalFBNet_Res18Fixed).
```bash
python main_test.py --model=pretrained_models/FBNet_Res18Fixed_best_model.pth \
                    --save_fold=./results_Res18Fixed/ \
                    --backbone=Res18Fixed \
                    --test_path=Datasets/PseudoSaliency/Images/ECSSD/images/
```
You can find results under the 'results_*' folder.
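If you want to poke at a downloaded checkpoint outside of `main_test.py`, a generic inspection like the one below works. Whether the `.pth` file stores a plain state_dict or a fully serialized model depends on the repository, so this sketch only prints what is inside.

```python
# Sketch: inspect a downloaded checkpoint before wiring it into main_test.py.
import torch

ckpt = torch.load("pretrained_models/FBNet_Res18Fixed_best_model.pth",
                  map_location="cpu")
if isinstance(ckpt, dict):
    # Dict-style checkpoint: list a few entries and their shapes/types.
    for key, value in list(ckpt.items())[:5]:
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
        print(key, shape)
else:
    # Otherwise it is likely a full serialized nn.Module.
    print(type(ckpt))
```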
Dataset | #Images | #Training | #Val. | #Testing | Size | URL | Paper |
---|---|---|---|---|---|---|---|
SALICON | 20,000 | 10,000 | 5,000 | 5,000 | ~4GB | download link | paper |
MIT300 | 300 | - | - | 300 | ~44.4MB | download link | paper |
MIT1003 | 1003 | 900* | 103* | - | ~178.7MB | download link | paper |
PASCAL-S | 850 | - | - | 850 | ~108.3MB | download link | paper |
DUT-OMRON | 5,168 | - | - | 5,168 | ~151.8MB | download link | paper |
TORONTO | 120 | - | - | 120 | ~12.3MB | download link | paper |
Pseudo-Saliency (Ours) | 176,880 | 150,000 | 26,880 | - | ~24.2GB | download link | paper |
- *Training and validation sets are randomly split in this work (see the Training list and Val list).
- We released our Pseudo-Saliency dataset on this ABCI Datasets page.
Please check the leaderboard of SALICON2017 for more details.
Our model is listed under the user name "GQDing3".
Please check the leaderboard of MIT300 for more details.
Our SalFBNet model ranked second best on the sAUC, CC, and SIM metrics (screenshot from December 10, 2021).
We use the metric implementation from MIT Saliency Benchmark for performance evaluation.
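For reference, the two distribution-based metrics reported above can be illustrated with a few lines of NumPy. This is only a sketch of the standard CC and SIM definitions, not the benchmark's own code, which should be used for any official comparison.

```python
# Illustrative NumPy versions of two reported metrics (CC and SIM);
# the actual evaluation uses the MIT Saliency Benchmark implementation.
import numpy as np

def cc(pred, gt):
    """Pearson's linear correlation coefficient between two saliency maps."""
    pred = (pred - pred.mean()) / (pred.std() + 1e-12)
    gt = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((pred * gt).mean())

def sim(pred, gt):
    """Similarity: sum of element-wise minima after normalizing each map to sum to 1."""
    pred = pred / (pred.sum() + 1e-12)
    gt = gt / (gt.sum() + 1e-12)
    return float(np.minimum(pred, gt).sum())
```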
Please cite the following papers if you use our data or code in your research.
```bibtex
@article{ding2022salfbnet,
  title={SalFBNet: Learning pseudo-saliency distribution via feedback convolutional networks},
  author={Ding, Guanqun and {\.I}mamo{\u{g}}lu, Nevrez and Caglayan, Ali and Murakawa, Masahiro and Nakamura, Ryosuke},
  journal={Image and Vision Computing},
  pages={104395},
  year={2022},
  publisher={Elsevier}
}

@inproceedings{ding2021fbnet,
  title={FBNet: FeedBack-Recursive CNN for Saliency Detection},
  author={Ding, Guanqun and {\.I}mamo{\u{g}}lu, Nevrez and Caglayan, Ali and Murakawa, Masahiro and Nakamura, Ryosuke},
  booktitle={2021 17th International Conference on Machine Vision and Applications (MVA)},
  pages={1--5},
  year={2021},
  organization={IEEE}
}
```