Add GigaSpeech 2 recipe #1365
Conversation
Thanks!! The recipe looks good to me, although I have one suggestion. If you could re-use the streaming manifest writing mechanism from the GigaSpeech 1 recipe, it would allow users to prepare this dataset with minimal memory usage. As-is, it takes a lot of CPU memory to hold the entire manifest in memory before writing it to disk. See:
lhotse/lhotse/recipes/gigaspeech.py, lines 96 to 129 at commit da4d70d:
with RecordingSet.open_writer(
    output_dir / f"gigaspeech_recordings_{part}.jsonl.gz"
) as rec_writer, SupervisionSet.open_writer(
    output_dir / f"gigaspeech_supervisions_{part}.jsonl.gz"
) as sup_writer, CutSet.open_writer(
    output_dir / f"gigaspeech_cuts_{part}.jsonl.gz"
) as cut_writer:
    for recording, segments in tqdm(
        parallel_map(
            parse_utterance,
            gigaspeech.audios("{" + part + "}"),
            repeat(gigaspeech.gigaspeech_dataset_dir),
            num_jobs=num_jobs,
        ),
        desc="Processing GigaSpeech JSON entries",
    ):
        # Fix and validate the recording + supervisions
        recordings, segments = fix_manifests(
            recordings=RecordingSet.from_recordings([recording]),
            supervisions=SupervisionSet.from_segments(segments),
        )
        validate_recordings_and_supervisions(
            recordings=recordings, supervisions=segments
        )
        # Create the cut since most users will need it anyway.
        # There will be exactly one cut since there's exactly one recording.
        cuts = CutSet.from_manifests(
            recordings=recordings, supervisions=segments
        )
        # Write the manifests
        rec_writer.write(recordings[0])
        for s in segments:
            sup_writer.write(s)
        cut_writer.write(cuts[0])
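For what it's worth, a minimal sketch of the same streaming pattern applied to GigaSpeech 2 is below. Everything dataset-specific is assumed for illustration only: the per-language .wav layout, the "id<TAB>text" transcript file, and the parse_utterance_gs2 helper are hypothetical placeholders, not the code in this PR. The point is just that each recording is fixed, validated, and flushed to disk immediately, so memory stays flat.

from pathlib import Path
from typing import Dict, List, Tuple

from tqdm import tqdm

from lhotse import CutSet, Recording, RecordingSet, SupervisionSegment, SupervisionSet
from lhotse.qa import fix_manifests, validate_recordings_and_supervisions


def parse_utterance_gs2(
    audio_path: Path, transcripts: Dict[str, str], lang: str
) -> Tuple[Recording, List[SupervisionSegment]]:
    # Hypothetical helper: one whole-file supervision per recording.
    recording = Recording.from_file(audio_path, recording_id=audio_path.stem)
    segment = SupervisionSegment(
        id=recording.id,
        recording_id=recording.id,
        start=0.0,
        duration=recording.duration,
        channel=0,
        language=lang,
        text=transcripts.get(recording.id, ""),
    )
    return recording, [segment]


def prepare_gigaspeech2_streaming(corpus_dir: Path, output_dir: Path, lang: str) -> None:
    # Assumed layout: corpus_dir/<lang>/**/*.wav plus a <lang>.tsv of "id<TAB>text" lines.
    transcripts = dict(
        line.strip().split("\t", maxsplit=1)
        for line in (corpus_dir / f"{lang}.tsv").open()
        if "\t" in line
    )
    with RecordingSet.open_writer(
        output_dir / f"gigaspeech2_recordings_{lang}.jsonl.gz"
    ) as rec_writer, SupervisionSet.open_writer(
        output_dir / f"gigaspeech2_supervisions_{lang}.jsonl.gz"
    ) as sup_writer, CutSet.open_writer(
        output_dir / f"gigaspeech2_cuts_{lang}.jsonl.gz"
    ) as cut_writer:
        for audio_path in tqdm(sorted((corpus_dir / lang).rglob("*.wav")), desc=lang):
            recording, segments = parse_utterance_gs2(audio_path, transcripts, lang)
            # Fix + validate each recording individually, then write it out
            # right away instead of accumulating the whole manifest in memory.
            recordings, supervisions = fix_manifests(
                recordings=RecordingSet.from_recordings([recording]),
                supervisions=SupervisionSet.from_segments(segments),
            )
            validate_recordings_and_supervisions(
                recordings=recordings, supervisions=supervisions
            )
            cuts = CutSet.from_manifests(recordings=recordings, supervisions=supervisions)
            rec_writer.write(recordings[0])
            for s in supervisions:
                sup_writer.write(s)
            cut_writer.write(cuts[0])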
Sure, I will implement this later.
If you don't have the time to implement the adjustment, we can merge this as-is instead.
Sorry, I accidentally pushed to master earlier, and renaming the branch yesterday caused the PR to close automatically. I'll open a new PR with the streaming version later today. |
This PR adds a recipe for GigaSpeech 2.
GigaSpeech 2 raw comprises about 30,000 hours of automatically transcribed speech across Thai, Indonesian, and Vietnamese. GigaSpeech 2 refined consists of 10,000 hours of Thai and 6,000 hours each of Indonesian and Vietnamese. The GigaSpeech 2 test sets reflect more realistic speech recognition scenarios and therefore mirror the real-world performance of an ASR system on these low-resource languages.
For more details, please visit:
Dataset: https://huggingface.co/datasets/speechcolab/gigaspeech2
Preprint paper: https://arxiv.org/pdf/2406.11546
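For anyone who wants to try the recipe once merged, a rough usage sketch follows. The entry-point name prepare_gigaspeech2 and its arguments are assumed from the usual lhotse recipe conventions rather than taken verbatim from this PR, so the actual signature may differ.

# Hypothetical usage, following common lhotse recipe conventions;
# the exact function name and arguments may differ in the merged recipe.
from lhotse.recipes import prepare_gigaspeech2  # assumed entry point

manifests = prepare_gigaspeech2(
    corpus_dir="/data/gigaspeech2",   # data downloaded from the HF dataset page above
    output_dir="data/manifests",
    languages=["th", "id", "vi"],     # assumed language codes for Thai, Indonesian, Vietnamese
    num_jobs=8,
)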