No. | Model Name | Title | Links | Pub. | Organization | Release Time |
---|---|---|---|---|---|---|
1 | VTP | Vision Transformer Pruning | paper | KDD 2021 workshop | Westlake University | 14 Aug 2021 |
2 | IA-RED2 | IA-RED2: Interpretability-Aware Redundancy Reduction for Vision Transformers | paper code | NeurIPS 2021 | MIT | 23 Jun 2021 |
3 | DynamicViT | DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | paper code | NeurIPS 2021 | Tsinghua University | |
4 | Evo-ViT | Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer | paper code | arXiv | Chinese Academy of Sciences | 6 Dec 2021 |
5 | - | Patch Slimming for Efficient Vision Transformers | paper | arXiv | Peking University | 5 Jun 2021 |
6 | - | Chasing Sparsity in Vision Transformers: An End-to-End Exploration | paper code | NeurIPS 2021 | University of Texas at Austin | 22 Oct 2021 |
7 | DeiT | Training data-efficient image transformers & distillation through attention | paper | ICML 2021 | Facebook AI | 15 Jan 2021 |
8 | - | Post-Training Quantization for Vision Transformer | paper | NeurIPS 2021 | Peking University | 27 Jun 2021 |
9 | - | Multi-Dimensional Model Compression of Vision Transformer | paper | arXiv | Princeton University | 31 Dec 2021 |
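
Several of the entries above (DynamicViT, Evo-ViT, IA-RED2, Patch Slimming) revolve around the same core mechanism: score each patch token and drop the uninformative ones between transformer blocks. Below is a minimal sketch of that shared idea, assuming token importance is approximated by the [CLS] attention column and a fixed keep ratio; the function name `prune_tokens` and its parameters are illustrative, not taken from any of these papers.

```python
# A hypothetical sketch of score-based token pruning, the common idea behind
# the dynamic-token entries in the table above. It is NOT any paper's exact
# method: importance is approximated here by the [CLS] attention weights,
# and the keep ratio is a fixed hyperparameter.
import torch

def prune_tokens(tokens: torch.Tensor, cls_attn: torch.Tensor, keep_ratio: float = 0.7):
    """Keep the highest-scoring patch tokens, always retaining [CLS] at index 0.

    tokens:   (B, N, D) -- [CLS] token followed by N-1 patch tokens
    cls_attn: (B, N-1)  -- attention weights from [CLS] to each patch token
    """
    B, N, D = tokens.shape
    num_keep = max(1, int((N - 1) * keep_ratio))
    # Indices of the top-scoring patch tokens (offset by 1 to skip [CLS]).
    topk = cls_attn.topk(num_keep, dim=1).indices + 1                # (B, num_keep)
    keep = torch.zeros(B, 1, dtype=torch.long, device=tokens.device)  # [CLS] index
    idx = torch.cat([keep, topk], dim=1)                              # (B, num_keep + 1)
    # Gather the surviving tokens for the next transformer block.
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, D))

# Example: batch of 2 sequences with 1 [CLS] + 196 patch tokens of width 384.
x = torch.randn(2, 197, 384)
scores = torch.rand(2, 196)
print(prune_tokens(x, scores).shape)  # torch.Size([2, 138, 384])
```

The papers differ mainly in how the scores are obtained and what happens to pruned tokens: DynamicViT learns a lightweight prediction module end-to-end, while Evo-ViT keeps a slow-updated summary of low-scoring tokens instead of discarding them outright.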