
[CVPR-2023 Workshop@NFVLR] Official PyTorch implementation of Learning CLIP Guided Visual-Text Fusion Transformer for Video-based Pedestrian Attribute Recognition


VTF_PAR

Learning CLIP Guided Visual-Text Fusion Transformer for Video-based Pedestrian Attribute Recognition, Jun Zhu†, Jiandong Jin†, Zihan Yang, Xiaohao Wu, Xiao Wang († denotes equal contribution), CVPR-2023 Workshop@NFVLR (New Frontiers in Visual Language Reasoning: Compositionality, Prompts and Causality), pp. 2625-2628, 2023.
[arXiv] [CVF] [Workshop]

News: An extension of VTFPAR can be found at: [VTFPAR++]

Spatio-Temporal Side Tuning Pre-trained Foundation Models for Video-based Pedestrian Attribute Recognition, Xiao Wang, Qian Zhu, Jiandong Jin, Jun Zhu, Futian Wang, Bo Jiang, Yaowei Wang, Yonghong Tian, arXiv preprint, 2024. [Paper]

Abstract

Existing pedestrian attribute recognition (PAR) algorithms are mainly developed based on static images. However, their performance is not reliable for images with challenging factors, such as heavy occlusion, motion blur, etc. In this work, we propose to understand human attributes using video frames, which make full use of temporal information. Specifically, we formulate video-based PAR as a vision-language fusion problem and adopt the pre-trained vision-language model CLIP to extract the feature embeddings of the given video frames. To better utilize the semantic information, we take the attribute list as another input and transform the attribute words/phrases into corresponding sentences via split, expand, and prompt operations. Then, the text encoder of CLIP is utilized for language embedding. The averaged visual tokens and text tokens are concatenated and fed into a fusion Transformer for multi-modal interactive learning. The enhanced tokens are finally fed into a classification head for pedestrian attribute prediction. Extensive experiments on a large-scale video-based PAR dataset fully validate the effectiveness of our proposed framework.
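The pipeline described above can be sketched roughly as follows. This is a minimal illustration only, not the repository's actual code: the prompt template, module names, embedding dimension, and layer counts are all assumptions, and random tensors stand in for real CLIP image/text encoder outputs.

```python
import torch
import torch.nn as nn

def expand_attributes(attributes):
    """Turn attribute words/phrases into prompt sentences (the paper's
    split/expand/prompt step). The exact template here is an assumption."""
    return [f"A photo of a pedestrian with attribute {a.replace('_', ' ')}."
            for a in attributes]

class FusionPARHead(nn.Module):
    """Concatenate temporally averaged visual tokens with per-attribute text
    tokens, run a fusion Transformer, and classify each attribute."""
    def __init__(self, embed_dim=512, num_layers=2, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers)
        self.classifier = nn.Linear(embed_dim, 1)  # one logit per attribute token

    def forward(self, frame_embeds, text_embeds):
        # frame_embeds: (B, T, D) CLIP image features for T video frames
        # text_embeds:  (A, D)    CLIP text features, one per attribute prompt
        visual = frame_embeds.mean(dim=1, keepdim=True)        # temporal average -> (B, 1, D)
        text = text_embeds.unsqueeze(0).expand(frame_embeds.size(0), -1, -1)  # (B, A, D)
        tokens = torch.cat([visual, text], dim=1)              # (B, 1+A, D)
        fused = self.fusion(tokens)                            # multi-modal interaction
        return self.classifier(fused[:, 1:, :]).squeeze(-1)    # (B, A) attribute logits

# Example with random features standing in for CLIP outputs:
prompts = expand_attributes(["long_hair", "backpack", "skirt", "boots"])
head = FusionPARHead(embed_dim=512)
frames = torch.randn(2, 6, 512)   # 2 clips, 6 frames each
texts = torch.randn(4, 512)       # 4 attribute prompt embeddings
logits = head(frames, texts)      # shape (2, 4)
```

In practice the frame and prompt embeddings would come from CLIP's image and text encoders, and the logits would be trained with a multi-label loss such as binary cross-entropy.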

Requirements

We use a single NVIDIA RTX 3090 (24 GB) GPU for training and evaluation.

Basic Environment

Python 3.9.16
PyTorch 1.12.1
torchvision 0.13.1

Installation

pip install -r requirements.txt

Datasets and Pre-trained Models

Download from BaiduYun:

  • MARS Dataset:
Link: https://pan.baidu.com/s/16Krv3AAlBhB9JPa1EKDbLw  Extraction code: zi08
  • Pre-trained Models (VTF-Pretrain.pth):
Link: https://pan.baidu.com/s/150t_zCW35YQHViKxsRIVzQ  Extraction code: glbd

Download from DropBox:

https://www.dropbox.com/scl/fo/h70nbcuj4gsmi4txhq1i0/h?rlkey=rwn1gbqbjpak6d7zhp46o3rnb&dl=0

Training and Testing

Use the following commands to train and evaluate a model on the MARS dataset:

Training

python ./dataset/preprocess/mars.py
python train.py MARS

Testing

python eval.py MARS

📃 BibTeX:

If you find this work useful for your research, please cite the following papers:

@inproceedings{zhu2023learning,
  title={Learning CLIP Guided Visual-Text Fusion Transformer for Video-based Pedestrian Attribute Recognition},
  author={Zhu, Jun and Jin, Jiandong and Yang, Zihan and Wu, Xiaohao and Wang, Xiao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  pages={2625--2628},
  year={2023}
}

If you have any questions about this work, please submit an issue or contact us via email: [email protected] or [email protected]. Thanks for your attention!
