
Emo-StarGAN

This repository contains the source code of the paper Emo-StarGAN: A Semi-Supervised Any-to-Many Non-Parallel Emotion-Preserving Voice Conversion, accepted at Interspeech 2023. An overview of the method and the results can be found here.

[Figure: Concept of our method. For details we refer to our paper at .....]

Highlights:

Samples

Samples can be found here.

Demo

The demo can be found at Demo/EmoStarGAN Demo.ipynb.

Pre-requisites:

  1. Python >= 3.9
  2. Install the Python dependencies listed in requirements.txt
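A quick environment check before installing the dependencies (a minimal sketch; the version requirement is the one stated above):

```python
import sys

# Emo-StarGAN requires Python >= 3.9.
if sys.version_info < (3, 9):
    raise RuntimeError(
        f"Python 3.9+ required, found {sys.version_info.major}.{sys.version_info.minor}"
    )
print("Python version OK")
```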

Training:

Before Training

  1. Before starting the training, please specify the number of target speakers in num_speaker_domains, as well as other details such as the training and validation data, in the config file.
  2. Download the VCTK and ESD datasets. The VCTK dataset requires preprocessing, which can be carried out using Preprocess/getdata.py. The dataset paths need to be adjusted in the training list train_list.txt and the validation list val_list.txt in Data/.
  3. Download and copy the emotion embedding weights to the folder Utils/emotion_encoder.
  4. Download and copy the vocoder weights to the folder Utils/Vocoder.
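Putting the steps above together, a fragment of Configs/speaker_domain_config.yml might look like the sketch below. Only num_speaker_domains is named in this README; the other keys and values are illustrative assumptions, so check them against the actual config file:

```yaml
# Illustrative sketch only -- key names other than num_speaker_domains are assumptions.
num_speaker_domains: 20          # must match the number of target speakers
train_data: Data/train_list.txt  # training list (step 2)
val_data: Data/val_list.txt      # validation list (step 2)
# Weights copied in steps 3-4 live under:
#   Utils/emotion_encoder  -> emotion embedding weights
#   Utils/Vocoder          -> vocoder weights
```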

Train

python train.py --config_path ./Configs/speaker_domain_config.yml

Model Weights

The Emo-StarGAN model weights can be downloaded from here.

Common Errors

When a speaker index in train_list.txt or val_list.txt is greater than or equal to the number of speakers (the hyperparameter num_speaker_domains in speaker_domain_config.yml), the following error is encountered:

[train]:   0%| | 0/66 [00:00<?, ?it/s]../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [0,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.

Also note that speaker indices start at 0 (not 1!) in the training and validation lists.
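One way to catch this before CUDA raises the opaque device-side assert is to validate the lists offline. The sketch below assumes the StarGANv2-VC-style "path|speaker_index" line format; adjust the parsing if the lists here differ:

```python
# Sanity check (sketch): every speaker index in a list must lie in
# [0, num_speaker_domains). Assumes "path|speaker_index" lines.
def check_speaker_indices(list_lines, num_speaker_domains):
    bad = []
    for lineno, line in enumerate(list_lines, start=1):
        line = line.strip()
        if not line:
            continue
        path, idx = line.rsplit("|", 1)
        if not 0 <= int(idx) < num_speaker_domains:
            bad.append((lineno, path, int(idx)))
    return bad

sample = ["wav/p225_001.wav|0", "wav/p226_001.wav|1", "wav/p227_001.wav|2"]
# With num_speaker_domains=2, the third line is out of range:
print(check_speaker_indices(sample, num_speaker_domains=2))
# -> [(3, 'wav/p227_001.wav', 2)]
```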

References and Acknowledgements
