Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation
Authors: Yan Jin, Mengke Li, Yang Lu*, Yiu-ming Cheung, Hanzi Wang
This is the repository of the CVPR 2023 paper "Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation." We find that deep neural networks exhibit different preferences towards the long-tailed distribution at different depths. SHIKE is therefore designed as a Mixture of Experts (MoE) method that fuses features from different depths and enables knowledge transfer among experts, effectively boosting performance on long-tailed visual recognition.
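To illustrate the general idea of experts attached to features of different depths, here is a minimal, hypothetical sketch; it is not the actual SHIKE architecture, and the names `DepthFusedExpert` and `MultiExpertHead` are assumptions for illustration only (the paper's depth-wise fusion and knowledge-transfer modules are more involved).

```python
import torch
import torch.nn as nn

class DepthFusedExpert(nn.Module):
    """One expert: fuses a shallow intermediate feature with the deepest feature."""
    def __init__(self, shallow_dim, deep_dim, num_classes):
        super().__init__()
        # Project the shallow feature to the deep feature's dimension before fusion.
        self.project = nn.Linear(shallow_dim, deep_dim)
        self.classifier = nn.Linear(deep_dim, num_classes)

    def forward(self, shallow_feat, deep_feat):
        fused = deep_feat + self.project(shallow_feat)  # simple additive fusion
        return self.classifier(fused)

class MultiExpertHead(nn.Module):
    """Mixture of experts, each attached to a different intermediate depth."""
    def __init__(self, shallow_dims, deep_dim, num_classes):
        super().__init__()
        self.experts = nn.ModuleList(
            DepthFusedExpert(d, deep_dim, num_classes) for d in shallow_dims
        )

    def forward(self, shallow_feats, deep_feat):
        # Each expert sees the deepest feature plus its own shallow feature;
        # logits are averaged here as a stand-in for the final prediction.
        logits = [e(f, deep_feat) for e, f in zip(self.experts, shallow_feats)]
        return torch.stack(logits).mean(dim=0)

# Toy usage with random "intermediate features" from three depths.
head = MultiExpertHead(shallow_dims=[64, 128, 256], deep_dim=512, num_classes=100)
feats = [torch.randn(8, d) for d in (64, 128, 256)]
out = head(feats, torch.randn(8, 512))
print(out.shape)  # torch.Size([8, 100])
```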
Requirements:
python 3.7.7 or above
torch 1.11.0 or above
Use the requirements file in this repo to create a virtual environment. Set the seed to 0 (line 49 in cifarTrain.py) and you may reproduce the reported results.
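For reference, a typical PyTorch seeding snippet looks like the sketch below; the `set_seed` helper is hypothetical and the actual seeding code in cifarTrain.py may differ.

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0):
    # Seed Python, NumPy, and PyTorch (CPU and all GPUs) for reproducibility.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(0)  # seed value suggested above
```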
Stay tuned for it~
Data augmentation in SHIKE mainly follows BalancedMetaSoftmax and PaCo.
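As a rough reference, a commonly used CIFAR training augmentation in long-tailed codebases is sketched below; the exact pipeline (and normalization statistics, which here are the usual CIFAR-10 values) should be taken from this repo and the BalancedMetaSoftmax/PaCo repositories.

```python
from torchvision import transforms

# Common CIFAR training augmentation for long-tailed recognition
# (illustrative only; SHIKE's exact pipeline may differ).
cifar_train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),
                         (0.2023, 0.1994, 0.2010)),
])
```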