This repository contains an alternative, independent implementation of the KeywordMiner framework, originally discussed in the paper SiDi KWS: A Large-Scale Multilingual Dataset for Keyword Spotting. Its goal is to let the keyword-spotting community export large sets of audio files, each containing a single spoken keyword, based on the forced alignment of transcribed input speech recordings, as described in that paper.
Given a speech audio file and its transcript, KeywordMiner runs MFA to force-align the transcript text to the recorded speech, which yields the start and end times of each spoken word. Based on that information, KeywordMiner segments each keyword into an individual labeled audio file. The following sections present more details about this implementation of KeywordMiner.
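For illustration, here is a minimal sketch of the segmentation step. This is not this project's actual code: the soundfile package and the (word, start, end) tuple layout are assumptions, and MFA actually emits its alignments as TextGrid files.

```python
# Illustrative sketch: cut each aligned word into its own labeled .wav file.
# Assumes the third-party `soundfile` package; the (word, start_s, end_s)
# layout is a simplification of MFA's actual TextGrid output.
import soundfile as sf

def segment_keywords(audio_path, alignments, out_dir):
    """alignments: iterable of (word, start_s, end_s) from forced alignment."""
    audio, sample_rate = sf.read(audio_path)
    for index, (word, start_s, end_s) in enumerate(alignments):
        start = int(start_s * sample_rate)  # seconds -> sample index
        end = int(end_s * sample_rate)
        sf.write(f"{out_dir}/{word}_{index}.wav", audio[start:end], sample_rate)

# Example: two aligned words from a hypothetical recording.
segment_keywords("speech.wav", [("hello", 0.42, 0.81), ("world", 0.85, 1.30)], "outputs")
```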
- Datasets: each audio file to be segmented must be accompanied by a text file containing the transcript of the speech recording.
- Supported languages: any of the languages supported by the MFA models, as shown on this list.
- Audio file format: any audio format is supported, since the input files are converted to .wav (see the conversion sketch after this list).

Note: The input datasets must be compatible with one of the structure formats supported by the implemented TranscribedDataset interface (i.e., LibriSpeech or Mozilla Common Voice).
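The conversion to .wav could be done as in the following sketch, which is illustrative only: the use of the pydub package (which requires a local ffmpeg install for formats such as .mp3) is an assumption, not necessarily how this project converts its inputs.

```python
# Illustrative sketch: convert an arbitrary input audio file to .wav
# before alignment. Assumes the `pydub` package and ffmpeg.
from pydub import AudioSegment

def to_wav(input_path: str, output_path: str) -> None:
    AudioSegment.from_file(input_path).export(output_path, format="wav")

to_wav("clip.mp3", "clip.wav")
```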
The following are the steps to segment spoken words from LibriSpeech's dev-clean corpus via KeywordMiner:

- Create a virtual environment with MFA via conda:
conda create -n keyword-miner montreal-forced-aligner
- Activate the new virtual environment:
conda activate keyword-miner
- Install the requirements:
pip install -r requirements.txt --upgrade
- Download the pretrained acoustic model:
mfa model download acoustic english_us_arpa
- Download its respective dictionary:
mfa model download dictionary english_us_arpa
- Set input_dir_path in inputs/configs/local_librispeech.conf to the local path of LibriSpeech's corpus (a sketch of this file follows these steps).
- Run the main script:
python main.py
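For reference, the configuration file could look like the following minimal sketch. Only the input_dir_path key is documented above; the path value is a placeholder, and any other keys the real file contains are not shown here.

```
# inputs/configs/local_librispeech.conf (hypothetical sketch)
input_dir_path = /path/to/LibriSpeech/dev-clean
```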
To run KeywordMiner with input corpora in other languages, please consult MFA's models page and check the available pre-trained acoustic models and dictionaries; an example follows.
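For instance, assuming the French models are published under these names on MFA's models page (verify the exact names there before downloading):

```
mfa model download acoustic french_mfa
mfa model download dictionary french_mfa
```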
This project's folders are structured as follows:
root/
├── datasets/
├── inputs/
│   ├── configs/
│   └── lexicons/
└── source/
Those folders have the following properties:
- datasets/: holds the implementation responsible for converting the transcript files into the structure required by the aligner.
- inputs/configs/: stores the configuration files with the arguments used by this project.
- inputs/lexicons/: contains the pronunciation dictionaries for each language.
- source/: holds this project's main source code.
Note: This project also creates a local directory named outputs, which is listed in .gitignore. That directory's subfolders hold this project's outputs (e.g., the segmented audio files).
Currently, this project considers three main entities: TranscribedDataset, Aligner, and Segmenter. Each is briefly described below:
- TranscribedDataset: an interface over the different dataset structures this project supports (LibriSpeech, Mozilla Common Voice).
- Aligner: determines the timestamps of individual words in the recordings according to their transcripts.
- Segmenter: uses the alignments to segment the recordings into keyword audio files.
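A minimal sketch of how these entities might relate is shown below. It is illustrative only: the method names and signatures are assumptions, not this project's actual API.

```python
# Illustrative sketch of the three entities' responsibilities (hypothetical API).
from abc import ABC, abstractmethod

class TranscribedDataset(ABC):
    """Adapts a corpus (e.g., LibriSpeech, Mozilla Common Voice) to the aligner's layout."""
    @abstractmethod
    def prepare(self, input_dir: str) -> str:
        """Return a directory of paired audio/transcript files ready for alignment."""

class Aligner:
    """Wraps MFA to produce per-word timestamps for each recording."""
    def align(self, prepared_dir: str) -> dict:
        # Maps each recording to a list of (word, start_s, end_s) tuples.
        raise NotImplementedError

class Segmenter:
    """Cuts each aligned word into an individual labeled audio file."""
    def segment(self, alignments: dict, out_dir: str) -> None:
        raise NotImplementedError
```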
This repository is available under the MIT license. If you publish research based on results obtained with this repository, please consider citing the authors of the original paper:
@inproceedings{meneses22_interspeech,
author={Michel Cardoso Meneses and Rafael Bérgamo Holanda and Luis Vasconcelos Peres and Gabriela Dantas Rocha},
title={SiDi KWS: A Large-Scale Multilingual Dataset for Keyword Spotting},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={4616--4620},
doi={10.21437/Interspeech.2022-394}
}
If you have any questions about this project, please open an issue ticket on this repository or contact me via e-mail: [email protected]