I used `./models/generate-coreml-model.sh -h5 Belle-whisper-large-v3-zh BELLE-2/Belle-whisper-large-v3-zh` to convert the PyTorch model to Core ML, and it ran successfully.
I also used `python3 models/convert-pt-to-ggml.py models/hf-Belle-whisper-large-v3-turbo-zh.pt whisper models/temp` to convert the PyTorch model to a ggml `.bin` file.
The generated models can be seen in the picture above.
I then built this project with `cmake -DWHISPER_COREML=1 ..` and `make -j`.
Finally, I ran `./build/bin/main -m models/ggml-Belle-whisper-large-v3-turbo-zh.bin -f /Users/outisli/Downloads/test.WAV`.
The error output is shown below:
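One way to rule out a truncated or malformed conversion is to parse the header of the generated `.bin` file and compare the hyperparameters against what `whisper_model_load` later prints. A minimal sketch, assuming the layout written by `convert-pt-to-ggml.py` (the `ggml` magic int followed by eleven little-endian int32 hyperparameters); the field list here is an assumption based on that script, not something from the original report:

```python
import struct

# Assumed field order, matching what convert-pt-to-ggml.py writes
# after the magic: 11 int32 hyperparameters.
HPARAM_NAMES = [
    "n_vocab", "n_audio_ctx", "n_audio_state", "n_audio_head",
    "n_audio_layer", "n_text_ctx", "n_text_state", "n_text_head",
    "n_text_layer", "n_mels", "ftype",
]

def parse_ggml_header(data: bytes) -> dict:
    """Parse the leading header of a whisper.cpp ggml model file."""
    magic, = struct.unpack_from("<i", data, 0)
    if magic != 0x67676D6C:  # the bytes b"lmgg" read as a little-endian int ("ggml")
        raise ValueError(f"bad magic: {magic:#x}")
    values = struct.unpack_from("<11i", data, 4)
    return dict(zip(HPARAM_NAMES, values))

# Usage (hypothetical path):
#   with open("models/ggml-Belle-whisper-large-v3-turbo-zh.bin", "rb") as f:
#       print(parse_ggml_header(f.read(48)))
```

If the parsed values match the loader log (e.g. `n_text_layer = 4`, `n_mels = 128`), the header at least survived conversion intact, which narrows the problem down to the tensor data or the runtime.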
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-Belle-whisper-large-v3-turbo-zh.bin'
whisper_init_with_params_no_state: use gpu = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw = 0
whisper_init_with_params_no_state: devices = 3
whisper_init_with_params_no_state: backends = 3
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 4
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
whisper_default_buffer_type: using device Metal (Apple M4 Pro)
whisper_model_load: Metal total size = 1623.92 MB
[1] 45706 segmentation fault ./build/bin/main -m models/ggml-Belle-whisper-large-v3-turbo-zh.bin -f