Mac M1 #7
8 comments · 3 replies
-
@zfarrell13 Hello, thank you for your question... Are you talking about inference or training? If training, it is not really practical, except maybe fine-tuning. If you are talking about inference, it would be OK, I guess, but it would be really slow. I do not plan to make a CPU-only version as of now, but it is actually not hard to do... What you can try is to replace .cuda() with .cpu() in all of the code, and it should technically work. You will need to change that in the colabs and also in the transformer module. I am sorry if my answer is disappointing, but I hope it is still somehow helpful. Alex.
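For what it's worth, a minimal sketch of what that swap looks like in a typical PyTorch script; the model here is a generic stand-in, not the actual repo code:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the project's transformer; the real model
# class lives in the repo's transformer module.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)

# Before (GPU-only):            After (CPU):
#   model = model.cuda()   -->    model = model.cpu()
#   x = x.cuda()           -->    x = x.cpu()
model = model.cpu()
x = torch.randn(1, 16, 64).cpu()

with torch.no_grad():
    out = model(x)
print(out.shape)  # torch.Size([1, 16, 64])
```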
-
@zfarrell13 Thank you for appreciating my work :) It means a lot to me :) I gave it some thought, and making a CPU version may be a good idea and a nice addition to this project, so let me know if you are able to do it, or if you run into problems doing it, and I will try to help you out! Alex
-
@zfarrell13 I had some time today, so I figured I'd help you out with what you need. Basically, for ALL of my projects, all you need to do to make them run on CPU are the following changes (sketched below).
And this is it!!! No changes are needed to the modules or anything like that. Please see the attachment for Los Angeles Music Composer (Composer Colab). It works great, but slowly, on my end! :) Hopefully it will help you :) Alex.
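The attachment itself is not reproduced in this thread, but going by the earlier reply, the changes presumably boil down to something like the sketch below; the checkpoint filename and call sites are assumptions, not the actual notebook code:

```python
import torch

# 1) Everywhere the colab calls .cuda(), call .cpu() instead:
#      model.cuda()  ->  model.cpu()
#      batch.cuda()  ->  batch.cpu()
#
# 2) If a checkpoint was saved from a GPU run, map it onto the CPU when
#    loading (standard PyTorch idiom; 'checkpoint.pth' is a placeholder):
#      state = torch.load('checkpoint.pth', map_location=torch.device('cpu'))
#
# 3) Optionally, define the device once and reuse it everywhere:
device = torch.device('cpu')
x = torch.zeros(4, device=device)
print(x.device)  # cpu
```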
-
I've also been experimenting with basic-pitch from Spotify. It's very hard to get high-quality MP3-to-MIDI conversions. I figure for inference/improvisation, it's important to have a good starting point. Of course, I could make clean MIDI files on my own via my DAW and start there, but it would be valuable for me to be able to take a clean bassline MIDI directly from the stem of a song and get a good improvisation. The workflow would be very quick for jamming.
-
@zfarrell13 You are welcome :) Definitely feel free to reach out to me for help with any of my stuff. I am always happy to help. :)

Since it seems that you want to train on CPU, you might want to consider using something faster than the Local Windowed Attention Transformer implementation. I highly suggest you use the basic transformer that I used for my other projects. For an actual implementation, please see my Allegro Music Transformer. It is my best model/implementation overall. I have no idea why Los Angeles Music Composer is so popular, tbh :) https://github.com/asigalov61/Allegro-Music-Transformer

And for audio-to-MIDI transcription, please do not use basic-pitch (it sucks). Instead, try Google's stuff: https://github.com/magenta/mt3/blob/main/mt3/colab/music_transcription_with_transformers.ipynb

Hope this helps. Alex
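Alex does not name the exact package here, but purely as a hypothetical sketch, a "basic" autoregressive transformer built with lucidrains' x-transformers package might look like this; the vocabulary size and hyperparameters are illustrative, not taken from Allegro Music Transformer:

```python
import torch
from x_transformers import TransformerWrapper, Decoder, AutoregressiveWrapper

# Illustrative hyperparameters only; not the actual Allegro settings.
model = AutoregressiveWrapper(
    TransformerWrapper(
        num_tokens=2304,                  # assumed MIDI-token vocab size
        max_seq_len=1024,
        attn_layers=Decoder(dim=512, depth=8, heads=8),
    )
).cpu()

# One toy training step on random tokens, just to show the call shape.
seq = torch.randint(0, 2304, (1, 1025))   # seq_len + 1 for teacher forcing
loss = model(seq)
loss.backward()
print(loss.item())
```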
-
@zfarrell13 You are welcome :)
-
Going to start a new discussion over in that repo, as I will be doing more research there per your suggestion.
-
You might also look into using MPS to take advantage of the built-in M1 GPU. @zfarrell13 |
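For reference, device selection for Apple's MPS backend in PyTorch (available since PyTorch 1.12 on Apple Silicon) looks like this; whether all of the repo's ops are MPS-supported is untested here:

```python
import torch

# Prefer the M1 GPU via the MPS backend when it is available.
if torch.backends.mps.is_available():
    device = torch.device('mps')
else:
    device = torch.device('cpu')

x = torch.randn(8, 8, device=device)
print(x.device)  # mps (on Apple Silicon) or cpu
```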
-
Hello, I noticed CUDA was a requirement here, as I am trying to work with this repo on an M1. Is there any plan to make this available for CPU?