CUDA memory leak for Flux.Optimizer #148

Open
RomeoV opened this issue May 30, 2023 · 1 comment

@RomeoV
Contributor

RomeoV commented May 30, 2023

(This issue has been moved here from FluxML/Flux.jl#2261)

I have a somewhat complicated training setup and have recently started encountering CUDA-out-of-memory issues which only show up after a number of epochs.

I have managed to construct a minimum working example here:

using Flux
using FastAI
using MLUtils
using FastAI.FluxTraining

function main()
    model = Chain(Dense(32*32*3 => 2048), Dense(2048 => 6), Dense(6 => 32*32*3))

    make_data_sample_test(i) = (rand(Float32, 32*32*3),
                                rand(Float32, 32*32*3))
    data = mapobs(make_data_sample_test, 1:1024)
    dl = DataLoader(data; batchsize=32, collate=true)
    dl_val = DataLoader(data; batchsize=32, collate=true)  # same data reused for validation

    loss = Flux.Losses.logitbinarycrossentropy
    opt = Flux.Adam(3e-4)
    learner = FastAI.Learner(model, loss;
                             optimizer=opt,
                             data=(dl, dl_val),
                             callbacks=[FluxTraining.ToGPU()])  # moves the model to the GPU each epoch

    for _ in 1:5
        FluxTraining.epoch!(learner, FluxTraining.TrainingPhase())
        @show length(opt.state)  # the legacy optimizer's IdDict keeps growing
    end
end

After about 50 epochs (~1 minute on my laptop), I get an error that CUDA cannot allocate any more memory.
This seems to be because the optimizer's state variable accumulates GPU arrays over time.
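
To see where the memory goes, here is a rough sketch of one way to inspect the legacy optimizer's state between epochs (opt is the optimizer from the MWE above; the CuArray check assumes CUDA.jl is loaded):

using CUDA

# Both of these keep growing across epochs when the leak is present.
n_gpu_keys = count(k -> k isa CuArray, collect(keys(opt.state)))
@show n_gpu_keys
CUDA.memory_status()   # prints used / reserved GPU memory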

The issue can be fixed by replacing opt = Flux.Adam() with opt = Optimisers.Adam(). However, I think we should fix the problem for the Flux optimizer, since it seems to be "officially" supported.
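
For reference, a minimal sketch of that workaround, assuming FluxTraining accepts an Optimisers.jl rule through the same optimizer keyword (only the opt line changes relative to the MWE):

using Optimisers

# Optimisers.jl keeps the state outside the rule and rebuilds it from whatever
# params are current, so stale GPU arrays are not retained across epochs.
opt = Optimisers.Adam(3e-4)
learner = FastAI.Learner(model, loss;
                         optimizer=opt,
                         data=(dl, dl_val),
                         callbacks=[FluxTraining.ToGPU()])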

@DrChainsaw has suggested in the other issue that the problem is that the ToDevice callback is not applied to the optimizer parameters. However, I haven't looked into the specifics or into how one would fix that. Any insights?

@ToucheSir
Member

I think this is the sequence of events which causes the leak:

  1. Once per epoch, the model is moved from CPU to GPU. This means the identity of the GPU model parameters will vary between epochs.
  2. Subsequently, the optimizer state is initialized from scratch based on the GPU model params, but only when using Optimisers.jl (because state is held externally to the optimization rules themselves). When using legacy Flux optimizers, the optimizer retains the now-obsolete state from the last epoch unchanged.
  3. When it comes time to update the parameters, the state IdDict that legacy optimizers use is expanded instead of updated as intended, because the object identity of the params has changed (see the sketch after this list).
  4. Rinse and repeat over multiple epochs.
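
To make step 3 concrete, here is a small CPU-only sketch (no FastAI/FluxTraining involved; deepcopy stands in for the per-epoch gpu(model) call, and the tiny Dense layer and loss are just placeholders) showing the IdDict growing because the parameter arrays change identity each "epoch":

using Flux

opt = Flux.Adam(3e-4)
base_model = Dense(4 => 4)

for epoch in 1:3
    m = deepcopy(base_model)                # new arrays, new object identities, like gpu(model)
    ps = Flux.params(m)
    gs = gradient(() -> sum(m(ones(Float32, 4))), ps)
    Flux.Optimise.update!(opt, ps, gs)      # keys opt.state by the new arrays
    @show length(opt.state)                 # grows by 2 (weight + bias) every iteration
end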

There are a couple of ways we could address this, but I think it first raises a bigger question: why are we resetting the optimizer state at the beginning of each epoch in the first place? @lorenzoh do you remember the context for this decision?
