PrACo: Prompt-Aware Counting Benchmark

Paper link: Mind the Prompt: A Novel Benchmark for Prompt-based Class-Agnostic Counting

Table of Contents

  1. Overview
  2. Repository Structure
  3. Installation
  4. Running the Benchmark
  5. Example
  6. Running Statistics
  7. Available Models

Overview

Mind the Prompt: A Novel Benchmark for Prompt-based Class-Agnostic Counting introduces PrACo (Prompt-Aware Counting Benchmark), a new benchmark designed to evaluate whether class-agnostic, prompt-guided counting models actually count the object class specified by the prompt. In the paper, we demonstrated that state-of-the-art models often fail to correctly interpret the provided textual prompt, defaulting to the most prevalent object class instead. This repository includes the scripts and instructions needed to run the benchmark and evaluate the models described in the paper.

Repository Structure

The repository is organized as follows:

  • benchmark/: Scripts for evaluating the models on the PrACo dataset.

  • models/: Contains the model implementations for:

    • CounTX
    • CLIP-Count
    • TFPOC
    • VLCounter
    • DAVE
    • ZSC
    • PseCo
  • main.py: Main script to run the benchmark for a selected model.

  • main_statistics.py: Script for computing and compiling benchmark statistics across different models.

  • statistics.ipynb: Notebook to reproduce the statistics reported in the paper.

  • qualitative.ipynb: Notebook to reproduce the qualitative results reported in the paper.

  • requirements.txt: Dependencies required for the benchmark scripts and model evaluation.

Installation

1. Create a Conda Environment

To set up the environment for running the benchmark, execute the following commands:

conda create --name praco python=3.10
conda activate praco
pip install -r requirements.txt
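
The commands above assume you are working from the repository root. If you have not cloned the repository yet, a minimal sketch (assuming the standard GitHub URL for this repository) is:

git clone https://github.com/ciampluca/PrACo.git
cd PrACo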

2. Download the FSC-147 Dataset

This project uses the FSC-147 dataset for object counting. Download it from the following links:

  • FSC-147 Dataset Download
  • Image Descriptions FSC-147-D
  • Put the dataset (zipped) and the FSC-147-D file into the data folder
  • Extract the dataset:
unzip FSC147_384_V2.zip
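
After extraction, the data folder should roughly match the layout below (a sketch based on the paths used later in this README; the exact filename of the FSC-147-D descriptions file depends on the file you downloaded):

data/
  images_384_VarV2/   # images referenced via --img_directory in the examples below
  ...                 # remaining files extracted from FSC147_384_V2.zip
  FSC-147-D           # image descriptions file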

3. Download Pre-Trained Weights

The model weights used in the paper can be downloaded from the respective authors' repositories and must be placed in a pretrained_models/ folder created in the project root. Download links are provided below; a sketch of the expected folder layout follows the list.

  • CounTX Weights: Download Link
  • CLIP-Count Weights: Download Link
  • VLCounter Weights: Download Link
  • DAVE Weights: Download Link
    • Download verification.pth
    • Download and extract DAVE_0_shot.pth from models.zip
  • TFPOC Weights: Download Link
  • ZSC Weights: Download Link. Please note that we re-trained the model from scratch since the authors did not provide their trained model.
  • PseCo Weights: Download Link
    • Download clip_text_prompt.pth
    • Download MLP_small_box_w1_zeroshot.tar and point_decoder_vith.pth from checkpoints folder
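
A sketch of the expected pretrained_models/ layout, using the filenames listed above (the CounTX, CLIP-Count, VLCounter, TFPOC, and ZSC checkpoints keep the names used in the respective download links; check the per-model loading code under models/ if a checkpoint is not found):

pretrained_models/
  verification.pth                # DAVE
  DAVE_0_shot.pth                 # DAVE, extracted from models.zip
  clip_text_prompt.pth            # PseCo
  MLP_small_box_w1_zeroshot.tar   # PseCo
  point_decoder_vith.pth          # PseCo
  ...                             # CounTX, CLIP-Count, VLCounter, TFPOC, and ZSC checkpoints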

4. Download Model-specific Files

  • CLIP weights: Download Link - Put it into the models/VLCounter/pretrain folder
  • ZSC regressor weights: Download Link - Put regressor.pth into the models/ZSC/pretrain folder (a short placement sketch follows)
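
A short placement sketch for these model-specific files (the CLIP weights filename is a placeholder, since it depends on the file you downloaded):

mkdir -p models/VLCounter/pretrain models/ZSC/pretrain
mv <clip_weights_file> models/VLCounter/pretrain/   # CLIP weights for VLCounter
mv regressor.pth models/ZSC/pretrain/               # ZSC regressor weights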

Running the Benchmark

To evaluate a model on the PrACo benchmark, use the following command:

python main.py --model <MODEL_NAME> --data_dir <DATA_DIR> --img_directory <IMG_DIR> --split <SPLIT_NAME>

Example

For example, to evaluate CounTX on the FSC-147 test split:

python main.py --model CounTX --data_dir ./data --img_directory ./data/images_384_VarV2 --split test
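
To evaluate several models in one go, a simple shell loop can be used (a sketch; the exact identifiers accepted by --model may differ from the display names listed under Available Models, so check main.py if a name is rejected):

for MODEL in CounTX CLIP-Count VLCounter DAVE ZSC PseCo; do
    python main.py --model "$MODEL" --data_dir ./data --img_directory ./data/images_384_VarV2 --split test
done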

Running Statistics

To generate statistics and final metrics for the benchmark:

python main_statistics.py --data_dir <DATA_DIR> --split <SPLIT_NAME> --models <MODEL_NAME>
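
For instance, using the same data layout as in the example above (this sketch passes a single model; whether --models also accepts a list of names depends on main_statistics.py):

python main_statistics.py --data_dir ./data --split test --models CounTX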

Available Models

  • CounTX
  • CLIP-Count
  • ClipSAM
  • VLCounter
  • DAVE
  • ZSC
  • PseCo
