This repository has been archived by the owner on May 7, 2022. It is now read-only.

Rethink DB consumes all memory in Docker container #38

Open
jpmcb opened this issue Oct 1, 2018 · 5 comments

@jpmcb

jpmcb commented Oct 1, 2018

With a Docker container given 700 MB of memory, after about 14 hours RethinkDB consumes all available memory and crashes the container.

Monitoring shows that after about 2 hours of the container running, memory usage grows linearly until eventually all memory is used and the container crashes. Could this be a memory leak?

This passage from the documentation,
"If there is less than 1224 MB of memory available on the system, a minimum cache size limit of 100 MB is used."
leads me to believe that the cache should not consume all 700 MB of memory, so there may be a leak.

Background:
Running the basic rethinkdb image from Docker Hub
RethinkDB metadata is < 100 MB
A Node.js app primarily interacts with this RethinkDB instance

Any thoughts on this would be greatly helpful! Thank you!
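One thing worth checking (a suggestion not stated in this thread): instead of relying on the auto-sizing rule quoted above, the page-cache ceiling can be pinned explicitly. With the official Docker Hub image, RethinkDB's `--cache-size` flag (in megabytes) can be passed by overriding the container command; the 200 MB value below is only an illustrative choice:

```shell
# Run the official rethinkdb image with an explicit 200 MB page-cache cap
# and a 700 MB container memory limit (both values are illustrative).
docker run -d --name rethinkdb --memory=700m \
  rethinkdb rethinkdb --cache-size 200 --bind all
```

Note that `--cache-size` only bounds the page cache; memory used for connections, changefeeds, and in-flight queries is not covered by it, which could explain usage growing past the configured limit.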

@medeirosfalante

There really is a memory leak. A fix is already planned for a new version.

@thelinuxlich

Are you using the write flush feature?

@apollux

apollux commented Dec 12, 2018

I am observing the same behavior. I have the cache-size limit configured at 1 GB and see memory usage grow way over that limit, until all available memory is used.

Is there any information available on the memory leak? Is it known what triggers it or how to quickly reproduce it? I see mention of a fix; has any work been done on this already?

@omesser

omesser commented Nov 23, 2019

Hey folks, is anyone working on this? Any ETA?

@Khalilbz

Finally, a solution to my problem of infinitely increasing memory. I found that I was running:

...run(con, {durability: 'hard'});

on every update, replace, and delete, then I changed it to:

...run(con);

and memory (RAM) usage stopped increasing.

The strategy I followed is to keep my code to standard operations only.
By standard I mean: create/read/update/delete/listen for changes/filter.
I believe this set of operations is heavily tested in a mature project like RethinkDB, so I'm safe as long as I stick to it.
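The per-query options object in the snippet above is RethinkDB's documented `run()` mechanism; a hedged sketch of a helper (the `runOptions` name and the critical/non-critical split are my own illustration, not from this thread) that reserves hard durability, which forces a flush to disk before the write is acknowledged, for writes that must survive a crash:

```javascript
// Hypothetical helper: build per-query run() options so that only
// critical writes pay the cost of a hard (flush-before-acknowledge)
// commit. Non-critical writes use soft durability, which acknowledges
// before flushing and avoids the per-write flush pressure described above.
function runOptions(critical) {
  return { durability: critical ? 'hard' : 'soft' };
}

// Usage sketch (assumes `r` is the rethinkdb driver and `conn` an open
// connection, as in a typical Node.js app):
// r.table('events').insert(doc).run(conn, runOptions(false));
// r.table('payments').insert(doc).run(conn, runOptions(true));
```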
