If you build on this work in your research, please cite:
@inproceedings{sigir20,
author = {Kai Luo and Hojin Yang and Ga Wu and Scott Sanner},
title = {Deep Critiquing for VAE-based Recommender Systems},
booktitle = {Proceedings of the 43rd International {ACM} SIGIR Conference on Research and Development in Information Retrieval {(SIGIR-20)}},
address = {Xi'an, China},
year = {2020}
}
- Critiquable and Explainable Variational Autoencoder (CE-VAE)
- Amazon CDs&Vinyl
- Beer Advocate
We do not have the rights to release the datasets. Please request permission from Professor Julian McAuley.
Please refer to the preprocess folder for the steps to preprocess the raw datasets.
The keyphrases we used are not necessarily the best. If you are interested in how we extracted those keyphrases, please refer to the preprocess folder. If you are interested in which keyphrases we extracted, please refer to the data folder.
python main.py --model CE-VAE --data_dir data/beer/ --epoch 300 --rank 100 --beta 0.001 --lambda_l2 0.0001 --lambda_keyphrase 0.01 --lambda_latent 0.01 --lambda_rating 1.0 --learning_rate 0.0001 --corruption 0.4 --topk 10 --disable_validation
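The flags above set the weights of the individual loss terms. As a minimal sketch only, assuming each flag simply scales the corresponding per-batch loss term (an inference from the flag names, not the repository's actual CE-VAE implementation), the combined objective could look like this:

```python
# Hypothetical sketch only: a plain weighted sum showing how the loss-weight flags
# above could combine per-batch loss terms into one training objective. The term
# names and the exact composition are assumptions inferred from the flag names,
# not the repository's actual implementation.
def weighted_objective(rating_loss, keyphrase_loss, latent_loss, kl_divergence, l2_penalty,
                       lambda_rating=1.0, lambda_keyphrase=0.01, lambda_latent=0.01,
                       beta=0.001, lambda_l2=0.0001):
    """Combine already-computed scalar loss terms with the example flag values above."""
    return (lambda_rating * rating_loss
            + lambda_keyphrase * keyphrase_loss
            + lambda_latent * latent_loss
            + beta * kl_divergence
            + lambda_l2 * l2_penalty)

# Example usage with made-up scalar values:
print(weighted_objective(rating_loss=0.8, keyphrase_loss=2.3, latent_loss=0.5,
                         kl_divergence=4.1, l2_penalty=12.0))
```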
Please check out the cluster_bash and local_bash folders for the full command details. The commands below are only examples.
python tune_parameters.py --data_dir data/beer/ --save_path beer_rating_tuning/ce_vae_tuning_part1.csv --parameters config/beer/ce-vae-tune-rating/ce-vae-part1.yml
python reproduce_general_results.py --data_dir data/beer/ --tuning_result_path beer_rating_tuning --save_path beer_rating_final/beer_final_result1.csv
python tune_parameters.py --data_dir data/beer/ --save_path beer_explanation_tuning/ce_vae_tuning_part1.csv --parameters config/beer/ce-vae-tune-keyphrase/ce-vae-part1.yml --tune_explanation
python reproduce_general_results.py --data_dir data/beer/ --tuning_result_path beer_explanation_tuning --save_path beer_explanation_final/beer_final_explanation_result1.csv --final_explanation
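The tuning configs are split into several part files. A hypothetical convenience wrapper (not part of the repository) that runs the rating-tuning parts for the beer dataset sequentially could look like the sketch below; the number of parts is an assumption, so adjust it to match the files actually present in config/beer/ce-vae-tune-rating/. The same pattern applies to the keyphrase-tuning configs with --tune_explanation.

```python
# Hypothetical convenience wrapper (not part of the repository): run each split
# rating-tuning config for the beer dataset sequentially by shelling out to
# tune_parameters.py. The number of parts (4) is an assumption.
import subprocess

for part in range(1, 5):
    subprocess.run([
        "python", "tune_parameters.py",
        "--data_dir", "data/beer/",
        "--save_path", f"beer_rating_tuning/ce_vae_tuning_part{part}.csv",
        "--parameters", f"config/beer/ce-vae-tune-rating/ce-vae-part{part}.yml",
    ], check=True)
```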
For each dataset, find a hyperparameter set from the tuning results that performs well for both rating and keyphrase prediction, and save it to tables/critiquing_hyperparameters/beer/hyper_parameters.csv and tables/critiquing_hyperparameters/CDsVinyl/hyper_parameters.csv; one possible way to do this selection is sketched below.
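As a rough, hypothetical illustration only: the snippet below merges the rating and explanation tuning results for the beer dataset and keeps the candidate with the best average metric. The hyperparameter and metric column names are assumptions about the tuning CSV schema, not the actual output format of tune_parameters.py.

```python
# Hypothetical sketch only: pick a hyperparameter set that does well on both the
# rating and keyphrase tuning results. Column names below are assumptions about
# the CSV schema produced by tune_parameters.py.
import glob
import pandas as pd

rating = pd.concat(pd.read_csv(f) for f in glob.glob("beer_rating_tuning/*.csv"))
explanation = pd.concat(pd.read_csv(f) for f in glob.glob("beer_explanation_tuning/*.csv"))

hyper_cols = ["rank", "beta", "lambda_l2", "lambda_keyphrase",
              "lambda_latent", "lambda_rating", "learning_rate", "corruption"]
merged = rating.merge(explanation, on=hyper_cols, suffixes=("_rating", "_keyphrase"))

# Rank candidates by the average of one (assumed) rating metric and one (assumed)
# keyphrase metric, then save the single best hyperparameter set.
merged["score"] = merged[["NDCG_rating", "NDCG_keyphrase"]].mean(axis=1)
best = merged.sort_values("score", ascending=False).head(1)
best[hyper_cols].to_csv("tables/critiquing_hyperparameters/beer/hyper_parameters.csv", index=False)
```

Then run the following command.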
python reproduce_critiquing.py --data_dir data/beer/ --model_saved_path beer --load_path explanation/beer/hyper_parameters.csv --num_users_sampled 1000 --save_path beer_fmap/beer_Critiquing
For the baselines we used, please refer to Noise Contrastive Estimation Projected Linear Recommender (NCE-PLRec).