
VRAM memory leak for Refact.AI 1.6B #332

Closed
tawek opened this issue Mar 7, 2024 · 10 comments

Comments

@tawek

tawek commented Mar 7, 2024

Windows 11, fully updated.
WSL2 updated.
Docker Desktop for Windows, latest version; GPU works in Docker. nvidia-smi reports the GPU fine.
NVIDIA CUDA 12.2 Toolkit.
Newest NVIDIA drivers.
RTX 3080, 10 GB VRAM.
AMD R5800X3D, 32 GB RAM.
No other GPU software running.

At first everything looks good: the model loads and serves requests, but after some time memory utilization grows to 10 GB, GPU load then stays at 100% for prolonged periods, and the model times out. Restarting the Docker container is the only fix. In fact, it reaches 10 GB of VRAM use pretty quickly. This is with the 1.6B Refact.ai model; a monitoring sketch follows below.

Docker runs 'thenlper/gte-base' as well. When I delete it to free a little VRAM, responsiveness comes back for just a couple more queries.

JetBrains IDEA Refact.AI plugin.
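
To make the growth pattern visible over time, a minimal polling sketch, assuming nvidia-smi is on PATH inside the container (the one-minute interval is arbitrary):

```python
# Minimal VRAM polling loop; assumes nvidia-smi is available inside the
# container. Logs used GPU memory once a minute so growth is visible.
import subprocess
import time

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    print(time.strftime("%H:%M:%S"), out.stdout.strip(), "MiB")
    time.sleep(60)
```

Redirecting this to a file gives a timestamped trace showing whether usage climbs steadily or jumps with specific requests.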

@olegklimov
Contributor

Thanks for reporting. I don't think we do anything that can cause memory leaks. Hmm, maybe it's the torch version or CUDA version, or something like that 🤔
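
One way to separate a true leak from PyTorch's caching allocator holding on to freed blocks, assuming the serving process embeds PyTorch (a sketch, not Refact's actual code):

```python
# Quick check to distinguish a true leak from PyTorch allocator caching.
import torch

allocated = torch.cuda.memory_allocated() / 2**20  # bytes held by live tensors
reserved = torch.cuda.memory_reserved() / 2**20    # bytes held by the allocator cache
print(f"allocated: {allocated:.0f} MiB, reserved: {reserved:.0f} MiB")

# If reserved grows while allocated stays flat, the "leak" is cached blocks;
# empty_cache() returns unused cached blocks to the driver.
torch.cuda.empty_cache()
```

Note that nvidia-smi reports the reserved number, so allocator caching alone can make usage look like a leak from outside the process.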

@d3v2a

d3v2a commented Apr 8, 2024

Same with deepseek-coder/1.3b/base (finetune): it starts at ~3 GB and after one hour is up to 7 GB.
When I change models, the memory is freed and the model loads at 3 GB again.

OS: Linux Mint
CUDA: 12.3
Driver version: 545.29.06
Docker: 26.0.0
GPU: NVIDIA GeForce RTX 4060 Ti, 16 GB
AMD Ryzen 7 5800X, 64 GB RAM

@olegklimov
Contributor

I'll try to reproduce

@olegklimov
Contributor

I left 1.6b (regular backend) running for a day; memory settled at 6.19 GB. I additionally sent 750 completion requests today and it's still 6.19 GB. I'm not saying there's no leak, only that I tried and I don't see one in my setup 🤔

Not sure what to do...

@olegklimov
Contributor

Called for help from @mitya52

@mitya52
Member

mitya52 commented Apr 11, 2024

@d3v2a that looks like normal behavior. On start the model allocates 3 GB, but when you start using it with large contexts (on large files, for example) it allocates additional memory for them. I see no memory leak in your case. A rough estimate of that context-dependent growth is sketched below.
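
For intuition about how context length drives that growth, a back-of-envelope KV-cache estimate; all architecture numbers here are assumptions for illustration, not the actual Refact-1.6B configuration:

```python
# Back-of-envelope KV-cache size as a function of context length.
# All architecture numbers are ASSUMED for illustration; they are not
# the actual Refact-1.6B configuration.
N_LAYERS = 24      # assumed transformer layers
N_KV_HEADS = 32    # assumed key/value heads
HEAD_DIM = 64      # assumed per-head dimension
DTYPE_BYTES = 2    # fp16

def kv_cache_bytes(seq_len: int) -> int:
    # 2x for keys and values, cached per layer for every token in context
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * seq_len * DTYPE_BYTES

for tokens in (512, 2048, 8192):
    print(f"{tokens:>5} tokens -> {kv_cache_bytes(tokens) / 2**20:.0f} MiB")
```

Concurrent requests each carry their own cache, so sustained load multiplies this further, which is consistent with usage climbing well past the initial 3 GB.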

@olegklimov
Contributor

Hmm, now I see 11.9 GB on my setup 🤔

@d3v2a

d3v2a commented May 20, 2024

The problem no longer seems to be present in the latest version:
refact 1.6.1
refact-lsp 0.8.0

@olegklimov
Contributor

Cool!

@tawek
Author

tawek commented May 24, 2024

I've updated to the latest image, sha256:f1968874, and it works OK. Usage stabilized around 9.6 GB on the 10 GB VRAM GPU and there seem to be no issues.

@tawek tawek closed this as completed May 24, 2024