This is the code for the paper HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning, accepted at ECCV 2024 [Project Page].
- [2024/08/05] 🚀 PYPI package is released.
- [2024/07/29] 🔥 HYDRA is open sourced in GitHub.
We realize that `gpt-3.5-turbo-0613` is deprecated and `gpt-3.5` will be replaced by `gpt-4o-mini`. We will release another version of HYDRA.
As of July 2024, `gpt-4o-mini` should be used in place of `gpt-3.5-turbo`, as it is cheaper, more capable, multimodal, and just as fast (see the OpenAI API page).
We also notice that OpenAI has updated its embedding model, as shown in this link. Due to the uncertainty of future embedding model updates from OpenAI, we suggest training a new version of the RL controller yourself and updating the RL models.
- GPT-4o-mini replacement
- LLaMA3.1 (ollama) replacement
- Gradio Demo
- GPT-4o Version
- HYDRA with RL
- Python >= 3.10
- conda
Please follow the instructions below to install the required packages and set up the environment.
git clone https://github.com/ControlNet/HYDRA
Option 1: Using pixi (recommended):
pixi run install
pixi shell
Option 2: Building from source:
bash -i build_env.sh
If you meet errors, please consider going through the `build_env.sh` file and installing the packages manually.
Edit the `.env` file, or set them in the CLI, to configure the environment variables.
OPENAI_API_KEY=your-api-key
OLLAMA_HOST=http://ollama.server:11434
# do not change this TORCH_HOME variable
TORCH_HOME=./pretrained_models
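If you prefer the CLI over `.env`, a minimal sketch of exporting the same variables in a POSIX shell (the key and host values below are placeholders, not real credentials):

```bash
# Placeholder values: substitute your own OpenAI key and Ollama host.
export OPENAI_API_KEY=your-api-key
export OLLAMA_HOST=http://ollama.server:11434
# Keep TORCH_HOME pointing at ./pretrained_models, as noted above.
export TORCH_HOME=./pretrained_models
```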
Run the script below to download the pretrained models to the `./pretrained_models` directory.
python -m hydra_vl4ai.download_model --base_config <EXP-CONFIG-DIR> --model_config <MODEL-CONFIG-PATH>
For example,
python -m hydra_vl4ai.download_model --base_config ./config/okvqa.yaml --model_config ./config/model_config_1gpu.yaml
A worker is required to run the inference.
python -m hydra_vl4ai.executor --base_config <EXP-CONFIG-DIR> --model_config <MODEL-CONFIG-PATH>
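For example, assuming the same OK-VQA and single-GPU configs used in the download step above:

```bash
# Start a worker with the OK-VQA experiment config and the 1-GPU model config.
python -m hydra_vl4ai.executor --base_config ./config/okvqa.yaml --model_config ./config/model_config_1gpu.yaml
```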
python demo_cli.py \
--image <IMAGE_PATH> \
--prompt <PROMPT> \
--base_config <YOUR-CONFIG-DIR> \
--model_config <MODEL-PATH>
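A hypothetical invocation, assuming the OK-VQA configs from above; the image path and prompt are illustrative only:

```bash
# Illustrative values: substitute your own image and question.
python demo_cli.py \
    --image ./example.jpg \
    --prompt "What is the man holding?" \
    --base_config ./config/okvqa.yaml \
    --model_config ./config/model_config_1gpu.yaml
```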
python demo_gradio.py \
--base_config <YOUR-CONFIG-DIR> \
--model_config <MODEL-PATH>
python main.py \
--data_root <YOUR-DATA-ROOT> \
--base_config <YOUR-CONFIG-DIR> \
--model_config <MODEL-PATH>
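For instance, a benchmark run on OK-VQA might look like the following sketch; the data root is a placeholder for wherever you keep the dataset locally:

```bash
# ./data/okvqa is illustrative: point --data_root at your local copy of the dataset.
python main.py \
    --data_root ./data/okvqa \
    --base_config ./config/okvqa.yaml \
    --model_config ./config/model_config_1gpu.yaml
```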
Then the inference results are saved in the `./result` directory for evaluation.
python evaluate.py <RESULT_JSON_PATH> <DATASET_NAME>
For example,
python evaluate.py result/result_okvqa.jsonl okvqa
@inproceedings{ke2024hydra,
title={HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning},
author={Ke, Fucai and Cai, Zhixi and Jahangard, Simindokht and Wang, Weiqing and Haghighi, Pari Delir and Rezatofighi, Hamid},
booktitle={European Conference on Computer Vision},
year={2024},
organization={Springer},
doi={10.1007/978-3-031-72661-3_8},
isbn={978-3-031-72661-3},
pages={132--149},
}
Some code and prompts are based on cvlab-columbia/viper.