Paper link: Mind the Prompt: A Novel Benchmark for Prompt-based Class-Agnostic Counting
- Overview
- Repository Structure
- Installation
- Running the Benchmark
- Example
- Running Statistics
- Available Models
## Overview

*Mind the Prompt: A Novel Benchmark for Prompt-based Class-Agnostic Counting* introduces PrACo (Prompt-Aware Counting Benchmark), a new benchmark designed to evaluate whether class-agnostic, prompt-guided counting models count the objects actually specified in the textual prompt. In the paper, we show that state-of-the-art models often fail to understand the provided prompt and instead default to the most prevalent object class in the image. This repository includes the scripts and instructions needed to run the benchmark and evaluate the models described in the paper.
## Repository Structure

The repository is organized as follows:

- `benchmark/`: Scripts for evaluating the models on the PrACo dataset.
- `models/`: Model implementations for CounTX, CLIP-Count, TFPOC, VLCounter, DAVE, ZSC, and PseCo.
- `main.py`: Main script to run the benchmark for a selected model.
- `main_statistics.py`: Script for computing and compiling benchmark statistics across different models.
- `statistics.ipynb`: Notebook to reproduce the statistics reported in the paper.
- `qualitative.ipynb`: Notebook to reproduce the qualitative results of the paper.
- `requirements.txt`: Dependencies required for the benchmark scripts and model evaluation.
## Installation

To set up the environment for running the benchmark, execute the following commands:

```bash
conda create --name praco python=3.10
conda activate praco
pip install -r requirements.txt
```
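As a quick sanity check that the environment resolved correctly (assuming PyTorch is among the pinned requirements, as these models are PyTorch-based):

```bash
python -c "import torch; print(torch.__version__)"
```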
This project uses the FSC-147 dataset for object counting. Download it from the following links:

- FSC-147 Dataset Download
- Image Descriptions FSC-147-D

Put the dataset (zipped) and the FSC-147-D file into the `data` folder, then extract the dataset:

```bash
unzip FSC147_384_V2.zip
```
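For reference, a minimal end-to-end sketch of the dataset setup, assuming both downloads sit in the project root and the FSC-147-D file is named `FSC-147-D.json` (a hypothetical name; use whatever filename the release provides):

```bash
mkdir -p data                  # create the data folder in the project root
mv FSC147_384_V2.zip data/     # the zipped FSC-147 dataset
mv FSC-147-D.json data/        # hypothetical filename for the FSC-147-D descriptions
cd data && unzip FSC147_384_V2.zip
```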
The model weights used in the paper can be downloaded from the respective authors' repositories and must be placed in the `pretrained_models/` folder, which should be created in the project root.
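For example, from the project root:

```bash
mkdir -p pretrained_models
```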
Download links are provided below:
- CounTX Weights: Download Link
- CLIP-Count Weights: Download Link
- VLCounter Weights: Download Link
- DAVE Weights: Download Link
  - Download `verification.pth`
  - Download and extract `DAVE_0_shot.pth` from `models.zip`
- TFPOC Weights: Download Link
- ZSC Weights: Download Link. Please note that we re-trained the model from scratch since the authors did not provide their weights.
- PseCo Weights: Download Link
  - Download `clip_text_prompt.pth`
  - Download `MLP_small_box_w1_zeroshot.tar` and `point_decoder_vith.pth` from the `checkpoints` folder
- CLIP Weights: Download Link. Put them into the `models/VLCounter/pretrain` folder.
- ZSC regressor weights: Download Link. Put `regressor.pth` into the `models/ZSC/pretrain` folder.
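After following the steps above, the checkpoints should end up laid out roughly as follows (a sketch; the CounTX, CLIP-Count, VLCounter, TFPOC, and CLIP checkpoint filenames depend on the respective releases):

```
pretrained_models/
├── verification.pth                 # DAVE
├── DAVE_0_shot.pth                  # DAVE, extracted from models.zip
├── clip_text_prompt.pth             # PseCo
├── MLP_small_box_w1_zeroshot.tar    # PseCo
├── point_decoder_vith.pth           # PseCo
└── ...                              # CounTX, CLIP-Count, VLCounter, TFPOC checkpoints
models/
├── VLCounter/pretrain/              # CLIP weights
└── ZSC/pretrain/regressor.pth       # ZSC regressor
```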
## Running the Benchmark

To evaluate a model on the PrACo benchmark, use the following command:

```bash
python main.py --model <MODEL_NAME> --data_dir <DATA_DIR> --img_directory <IMG_DIR> --split <SPLIT_NAME>
```
## Example

For example, to evaluate CounTX on the test split:

```bash
python main.py --model CounTX --data_dir ./data --img_directory ./data/images_384_VarV2 --split test
```
## Running Statistics

To generate statistics and final metrics for the benchmark:

```bash
python main_statistics.py --data_dir <DATA_DIR> --split <SPLIT_NAME> --models <MODEL_NAME>
```
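For example, mirroring the run above (a sketch; whether `--models` accepts several names at once depends on the script):

```bash
python main_statistics.py --data_dir ./data --split test --models CounTX
```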
## Available Models

The benchmark currently supports the following models (pass the name via `--model`):

- CounTX
- CLIP-Count
- ClipSAM
- VLCounter
- DAVE
- ZSC
- PseCo