
Running on macos / mps #35

Open
Chris-toe-pher opened this issue Oct 25, 2023 · 5 comments

@Chris-toe-pher

Segment Anything can be run on MPS with good acceleration vs. CPU. This demo fails to run when loading the DEVA model: "Torch not compiled with CUDA enabled". Is it possible to support MPS?

Traceback (most recent call last):
  File "/Users/chris/Documents/AI/segmentation/Tracking-Anything-with-DEVA/demo/demo_automatic.py", line 34, in <module>
    deva_model, cfg, args = get_model_and_config(parser)
  File "/Users/chris/Documents/AI/segmentation/Tracking-Anything-with-DEVA/deva/inference/eval_args.py", line 65, in get_model_and_config
    network = DEVA(config).cuda().eval()
  File "/Users/chris/miniconda3/envs/TA-DEVA/lib/python3.9/site-packages/torch/nn/modules/module.py", line 905, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/Users/chris/miniconda3/envs/TA-DEVA/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/chris/miniconda3/envs/TA-DEVA/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/chris/miniconda3/envs/TA-DEVA/lib/python3.9/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/Users/chris/miniconda3/envs/TA-DEVA/lib/python3.9/site-packages/torch/nn/modules/module.py", line 905, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "/Users/chris/miniconda3/envs/TA-DEVA/lib/python3.9/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

@hkchengrex
Owner

I don't have a Mac device and cannot test it, but I suppose you can replace all the .cuda() and cuda.amp calls with their MPS equivalents.
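
A minimal sketch of the kind of substitution being suggested, assuming the device placement in eval_args.py is the main blocker (the autocast note is untested on MPS):

import torch

# Pick MPS when it is available, otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# In deva/inference/eval_args.py, instead of
#   network = DEVA(config).cuda().eval()
# the device-agnostic form would be
#   network = DEVA(config).to(device).eval()

# torch.cuda.amp.autocast() has a generic counterpart,
#   torch.autocast(device_type=device.type, dtype=torch.float16)
# but autocast support on MPS depends on the PyTorch version, so running
# without --amp may be the safer starting point.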

@Chris-toe-pher
Author

If you don't have a Mac, that is a good reason 👍.
I tried replacing the CUDA-specific things. There are also some datatypes that aren't supported on MPS, like int64, and some functions that fall back to CPU. I got as far as it running, but with an error from MPSGraphUtilities.mm scrolling by. After 10 minutes, the progress bar showed 2 images had been completed, then it quit with an error and no output images. It looks like it is using up more than the 64 GB of unified memory; I'm not sure I can fix it further without understanding each part that's being used.
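
For reference, the CPU fallback for ops without an MPS kernel can be enabled by setting PYTORCH_ENABLE_MPS_FALLBACK before torch is imported (this is generic PyTorch behaviour, not something specific to this repo, and it may not address the MPSGraphUtilities.mm error):

import os

# Must be set before torch is imported; ops without an MPS kernel then run
# on the CPU instead of raising NotImplementedError (slower, but it avoids
# hard failures).
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch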

@hkchengrex
Owner

If you try the example video, i.e.,

python demo/demo_with_text.py --chunk_size 4 \
--img_path ./example/vipseg/images/12_1mWNahzcsAc \
--amp --temporal_setting semionline \
--size 480 \
--output ./example/output --prompt person.hat.horse

it is quite unlikely that it uses more than a few GB of memory.

If it processed two frames without any output images, it sounds like the error occurred during consensus. You can test using the online mode (consensus disabled).

@Chris-toe-pher
Author

I was running the other one, demo_automatic.py.
demo_with_text has a different error coming from GroundingDINO; I'll have a look into that tomorrow.

@hkchengrex
Owner

Ref: hkchengrex/Cutie#14
