RethinkDB consumes all memory in Docker container #38
Comments
There really is a memory leak; a fix is already planned for the new version.
Are you using the write flush feature?
I am observing the same behavior. I have the cache-size limit configured at 1 GB and see memory usage grow well past that limit until all available memory is used. Is there any information available on the memory leak? Is it known what triggers it, or how to reproduce it quickly? I see mention of a fix; has any work been done on this already?
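For anyone wanting to confirm that a configured limit (such as the 1 GB above) actually took effect, RethinkDB's `server_status` system table reports the cache limit each server applied at startup. A minimal sketch using the official `rethinkdb` JavaScript driver, assuming a server on the default host and port:

```ts
import * as r from 'rethinkdb';

async function printCacheLimits(): Promise<void> {
  const conn = await r.connect({ host: 'localhost', port: 28015 });

  // server_status is a built-in system table; process.cache_size_mb
  // reports the cache limit each server is actually running with.
  const cursor = await r.db('rethinkdb').table('server_status').run(conn);
  const servers = await cursor.toArray();

  for (const s of servers) {
    console.log(`${s.name}: cache_size_mb = ${s.process.cache_size_mb}`);
  }

  await conn.close();
}

printCacheLimits().catch(console.error);
```

Note that the cache limit only bounds the page cache; total process memory also includes per-connection and query buffers, so some overhead above the limit is normal, but unbounded growth is not.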
Hey folks, is anyone working on this? Any ETA?
FINALLY, a solution for my problem of infinitely increasing memory. I found that I was doing:

at every update, replace, and delete; then I changed it to:

and memory (RAM) stopped increasing.
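The code snippets from this comment were not preserved when the thread was archived. Given the write-flush question above, one hypothetical shape for such a change is dropping a soft-durability option from every write, since soft writes are acknowledged before they are flushed to disk. This is a sketch of that pattern, not the commenter's actual code; the table and document names are invented:

```ts
import * as r from 'rethinkdb';

async function updateItem(conn: r.Connection): Promise<void> {
  // Before (hypothetical): soft durability acknowledges the write
  // before it is committed to disk, so unflushed data can build up
  // under a sustained write load.
  await r.table('items').get('some-id')
    .update({ seen: true }, { durability: 'soft' })
    .run(conn);

  // After (hypothetical): the default hard durability acknowledges
  // the write only once it has been committed to disk.
  await r.table('items').get('some-id')
    .update({ seen: true })
    .run(conn);
}
```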
With a Docker container given 700 MB of memory, RethinkDB consumes all available memory after about 14 hours and crashes the container.
Monitoring suggests that, after about 2 hours of the container running, memory usage grows linearly until eventually all memory is used and the container crashes. Could this be a memory leak?
After looking at this documentation,
"If there is less than 1224 MB of memory available on the system, a minimum cache size limit of 100 MB is used."
I believe the cache alone should not consume all 700 MB of memory: with only 700 MB available, well under the 1224 MB threshold, the default cache limit should be just 100 MB, so there is a potential leak.
Background:
- Running the basic `rethinkdb` image from Docker Hub
- RethinkDB metadata is < 100 MB
- A Node JS app primarily interacts with this RethinkDB instance (one app-side check is sketched below)
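Since a Node app sits in front of this server, one app-side cause worth ruling out is changefeed cursors that are opened but never closed: the server buffers pending changes for each open feed, so a leaked or lagging cursor can grow server memory over time. A minimal sketch with the official `rethinkdb` JavaScript driver; the `events` table name is invented for illustration:

```ts
import * as r from 'rethinkdb';

async function watchBriefly(): Promise<void> {
  const conn = await r.connect({ host: 'localhost', port: 28015 });

  // Open a changefeed; the server queues changes for this cursor
  // until the client consumes them.
  const cursor = await r.table('events').changes().run(conn);

  // Consume a handful of changes, then stop.
  for (let seen = 0; seen < 5; seen++) {
    const change = await cursor.next();
    console.log('change:', change);
  }

  // Closing the cursor tells the server to stop buffering changes
  // for this feed; leaking it keeps the feed (and its queue) alive.
  await cursor.close();
  await conn.close();
}

watchBriefly().catch(console.error);
```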
Any thoughts on this would be greatly appreciated! Thank you!