segmentation fault when using CoreML #2615

Open
OutisLi opened this issue Dec 9, 2024 · 0 comments
OutisLi commented Dec 9, 2024

I used `./models/generate-coreml-model.sh -h5 Belle-whisper-large-v3-zh BELLE-2/Belle-whisper-large-v3-zh` to convert the PyTorch model to Core ML, and it ran successfully.
I also used `python3 models/convert-pt-to-ggml.py models/hf-Belle-whisper-large-v3-turbo-zh.pt whisper models/temp` to convert the PyTorch model to ggml (`.bin`) format.
[screenshot: the generated model files]
I then built the project with `cmake -DWHISPER_COREML=1 ..` and `make -j`.
Finally, I ran `./build/bin/main -m models/ggml-Belle-whisper-large-v3-turbo-zh.bin -f /Users/outisli/Downloads/test.WAV`.
The error is shown below:
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-Belle-whisper-large-v3-turbo-zh.bin'
whisper_init_with_params_no_state: use gpu = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw = 0
whisper_init_with_params_no_state: devices = 3
whisper_init_with_params_no_state: backends = 3
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 4
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
whisper_default_buffer_type: using device Metal (Apple M4 Pro)
whisper_model_load: Metal total size = 1623.92 MB
[1] 45706 segmentation fault ./build/bin/main -m models/ggml-Belle-whisper-large-v3-turbo-zh.bin -f
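For reference, when built with `WHISPER_COREML=1`, whisper.cpp derives the Core ML encoder path from the ggml model filename (I believe by replacing the `.bin` suffix with `-encoder.mlmodelc`). A quick shell sketch to check which `.mlmodelc` the run above would look for (a sketch, assuming that path convention):

```shell
# Derive the Core ML encoder path from the ggml model path
# (assumption: ".bin" is replaced by "-encoder.mlmodelc").
ggml_model="models/ggml-Belle-whisper-large-v3-turbo-zh.bin"
coreml_model="${ggml_model%.bin}-encoder.mlmodelc"
echo "$coreml_model"
```

Worth checking whether that `.mlmodelc` actually exists next to the `.bin`: the `generate-coreml-model.sh` invocation above used the non-turbo name `Belle-whisper-large-v3-zh`, while the ggml file is named `...turbo-zh`, so the filenames may not line up.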
