pytorch-wavenet

This is an implementation of the WaveNet architecture, as described in the original paper (WaveNet: A Generative Model for Raw Audio, van den Oord et al., 2016). It has been updated to work with PyTorch 1.6.
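
At its core, WaveNet models raw audio autoregressively with a stack of dilated causal convolutions, gated activation units, and residual/skip connections, predicting a distribution over 256 mu-law classes for every sample. The following is a minimal sketch of that idea only; it is not the module defined in this repository, and all class and parameter names are made up for illustration.

```python
# Minimal sketch of a WaveNet-style dilated causal convolution stack.
# Illustration only; not the code used in this repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.dilation = dilation
        # Gated activation unit: one convolution for the filter, one for the gate.
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)
        self.residual_conv = nn.Conv1d(channels, channels, kernel_size=1)
        self.skip_conv = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):
        # Left-pad so the convolution is causal (no access to future samples).
        padded = F.pad(x, (self.dilation, 0))
        out = torch.tanh(self.filter_conv(padded)) * torch.sigmoid(self.gate_conv(padded))
        skip = self.skip_conv(out)
        return x + self.residual_conv(out), skip


class TinyWaveNet(nn.Module):
    def __init__(self, channels=32, num_classes=256, dilations=(1, 2, 4, 8, 16, 32)):
        super().__init__()
        self.input_conv = nn.Conv1d(num_classes, channels, kernel_size=1)
        self.blocks = nn.ModuleList(ResidualBlock(channels, d) for d in dilations)
        self.output_conv = nn.Conv1d(channels, num_classes, kernel_size=1)

    def forward(self, x):
        # x: one-hot encoded audio, shape (batch, num_classes, time)
        x = self.input_conv(x)
        skips = 0
        for block in self.blocks:
            x, skip = block(x)
            skips = skips + skip
        # Logits over the mu-law classes for every time step.
        return self.output_conv(F.relu(skips))
```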

Features

  • Automatic creation of a dataset (training and validation/test sets) from all sound files (.wav, .aiff, .mp3) in a directory (see the sketch after this list)
  • Efficient multithreaded data loading
  • Logging to TensorBoard (Training loss, validation loss, validation accuracy, parameter and gradient histograms, generated samples)
  • Fast generation based on the fast WaveNet generation algorithm, which caches intermediate activations instead of recomputing them
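
As a hypothetical sketch of what the automatic dataset creation amounts to: scan a directory for supported sound files, load them with librosa, mu-law quantize the audio into 256 classes, and serve fixed-length input/target pairs through a multi-worker DataLoader (the multithreaded data loading mentioned above). The class name, parameters, and helper below are invented for illustration; the repository's own dataset class may differ.

```python
# Hypothetical dataset-from-directory sketch; names and defaults are placeholders.
import os

import librosa
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


def mu_law_encode(audio, classes=256):
    # Map waveform values in [-1, 1] to integer class labels in [0, classes - 1].
    mu = classes - 1
    compressed = np.sign(audio) * np.log1p(mu * np.abs(audio)) / np.log1p(mu)
    return ((compressed + 1) / 2 * mu + 0.5).astype(np.int64)


class AudioDirectoryDataset(Dataset):  # hypothetical name
    def __init__(self, directory, item_length=4096, sampling_rate=16000, classes=256):
        self.item_length = item_length
        extensions = ('.wav', '.aiff', '.mp3')
        files = [os.path.join(directory, f) for f in sorted(os.listdir(directory))
                 if f.lower().endswith(extensions)]
        # Load, quantize and concatenate all files into one long class sequence.
        pieces = []
        for path in files:
            audio, _ = librosa.load(path, sr=sampling_rate, mono=True)
            pieces.append(mu_law_encode(audio, classes))
        self.data = np.concatenate(pieces)

    def __len__(self):
        return (len(self.data) - 1) // self.item_length

    def __getitem__(self, idx):
        start = idx * self.item_length
        chunk = self.data[start:start + self.item_length + 1]
        # Input is the sequence; target is the same sequence shifted by one sample.
        return torch.from_numpy(chunk[:-1]), torch.from_numpy(chunk[1:])


# Multi-worker ("multithreaded") data loading:
# dataset = AudioDirectoryDataset('path/to/sound_files')
# loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=4)
```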

Requirements

  • Python 3
  • PyTorch 1.6
  • NumPy
  • librosa
  • Jupyter
  • TensorFlow (for TensorBoard logging)

Demo

For an introduction to using this model, take a look at the WaveNet demo notebook. Audio clips generated by a simple trained model can be found in the generated samples directory.
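
As a rough illustration of how generation works conceptually (assuming the TinyWaveNet sketch from above; the model name, checkpoint path, and receptive field value are placeholders), the naive loop below re-runs the whole network for every new sample. The fast generation listed under Features avoids this by caching intermediate activations, which is what makes it much quicker. The resulting class indices would still need to be mu-law decoded back into a waveform.

```python
# Naive autoregressive sampling sketch; placeholder names throughout.
import torch
import torch.nn.functional as F


@torch.no_grad()
def generate(model, num_samples=16000, num_classes=256, receptive_field=64):
    model.eval()
    # Start from a buffer of "silence" (the middle mu-law class).
    samples = [num_classes // 2] * receptive_field
    for _ in range(num_samples):
        context = torch.tensor(samples[-receptive_field:]).unsqueeze(0)   # (1, T)
        x = F.one_hot(context, num_classes).permute(0, 2, 1).float()      # (1, C, T)
        logits = model(x)[:, :, -1]                                       # last time step only
        probs = F.softmax(logits, dim=1)
        samples.append(torch.multinomial(probs, 1).item())
    return samples[receptive_field:]


# model = TinyWaveNet()
# model.load_state_dict(torch.load('snapshots/some_model.pt'))  # placeholder path
# audio_classes = generate(model, num_samples=16000)
```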