`hasura metadata apply` uses a massive amount of memory, ignores kubernetes resource limits #10601
Hi, @raweber42 👋 I've reached out to our SRE team to see what guidance they can give on this behavior.
The CPU and memory limits are enforced for a Kubernetes container in different ways. For CPU, the kernel can actually limit the clock cycles allocated to a process, ensuring that a process does not get more than what is allocated to it as the CPU limit. Memory cannot be throttled the same way: if memory did not grow as required by a process, that process would simply hang. So the kernel instead ensures that the process won't use more memory than specified by raising an Out Of Memory (OOM) event when the memory usage exceeds the limit, which results in an OOMKill of the process. In short, the CPU limit throttles CPU usage, while the memory limit OOMKills the process if it exceeds the specified limit. This is the same behavior whether the process runs as a Kubernetes container with memory limits or as a systemd process with memory limits.
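For illustration, the same pair of limits expressed both ways; the values here are examples, not taken from this issue:

```yaml
# Kubernetes container spec (fragment): exceeding limits.memory raises the
# OOM event described above, while limits.cpu only throttles.
resources:
  limits:
    memory: "2Gi"
    cpu: "1"
```

```ini
# Rough systemd equivalent: MemoryMax triggers the same OOMKill behavior,
# CPUQuota throttles.
[Service]
MemoryMax=2G
CPUQuota=100%
```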
Regarding the high memory usage of […]
Thanks @nizar-m for getting back! I get why the container is being killed by Kubernetes, thank you. The question is: […]

Sadly, I cannot share our metadata just like this, but I can give you some more insights regarding what it contains. Our metadata includes: […]

Here is an example of a […]

Additionally we use: […]
I would be very glad to get some insights on how Hasura handles such an, admittedly big, amount of metadata. And if you could give me a recommendation on how to find the right memory limit, I would be very grateful! (I tried trial and error but, as described above, the memory usage does not seem to be very predictable. When not setting a limit, I can see that the memory usage is around 1.5Gi. But even when setting it to 2Gi, the container gets OOM-killed 😅)
The memory usage could be 1.5Gi according to the collected metrics. But during metadata apply, it might be exceeding the 2Gi limit for a brief period of time, resulting in an OOMKill. Hasura builds in-memory structures for quickly serving the queries, with size roughly proportional to […]

I get that the metadata apply is taking a lot of memory. During the design of […]
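Metrics scrapers sample at intervals and can miss a brief spike, so one way to check the true peak is to read the cgroup high-water mark right after an apply. A sketch, assuming cgroup v2 and hypothetical pod/container names:

```sh
# Run the apply inside the CLI container, then print the cgroup's peak memory.
# /sys/fs/cgroup/memory.peak exists on cgroup v2 (kernel 5.19+); the pod and
# container names are illustrative assumptions.
kubectl exec <pod-name> -c hasura-cli -- sh -c \
  'hasura metadata apply && cat /sys/fs/cgroup/memory.peak'
```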
Version Information
Server Version:
CLI Version (for CLI related issue): hasura/graphql-engine:v2.44.0
Environment
Self-hosted
What is the current behaviour?
When running `hasura metadata apply` in a Kubernetes deployment, it gets killed because it is hitting my specified memory `limit`. This is running in a separate container next to the actual Hasura container. I even went up with the limit to as much as `3Gi`, but the container seems to ignore this limit, trying to get all memory possible (I guess it sees the memory limit of the Kubernetes node, not of the container inside the pod it lives in). When not specifying a `limit` whatsoever, it works. I can see that the memory usage spikes to `1.5Gi` max while running `hasura metadata apply`, so specifying a limit of `2Gi` should be sufficient. But what (I think) happens is that our cluster spins up a new node such that there is even more memory available, and the `apply` command just tries to grab it all, ignoring the specified `limit` of the Kubernetes container.

So this does not work:
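Roughly this shape of spec, with the limit set; container name, image, and values are illustrative, not the exact manifest:

```yaml
# Illustrative sketch only; names and values are assumptions
containers:
  - name: hasura-cli
    image: hasura/graphql-engine:v2.44.0.cli-migrations-v3
    resources:
      limits:
        memory: "3Gi"   # apply still gets OOMKilled despite the headroom
```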
But this does:
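Roughly the same spec with the memory limit removed, again illustrative:

```yaml
containers:
  - name: hasura-cli
    image: hasura/graphql-engine:v2.44.0.cli-migrations-v3
    resources:
      requests:
        memory: "512Mi"  # scheduling hint only; with no limit there is no OOMKill
```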
For the record: We don't have a crazy amount of metadata. Couple of remote schemas and ~12 DBs with permissions.
What is the expected behaviour?
`metadata apply` stays within the `limit` specified for memory in the deployment manifest. And it probably should also not use such a high amount of memory in the first place.

How to reproduce the issue?
1. Deploy a container in Kubernetes (with `hasura-cli` installed)
2. Set a low memory limit (e.g. `256Mi`)
3. Run the `hasura metadata apply` command with a reasonable amount of metadata to apply
4. Increase the memory limit (e.g. to `1Gi`)
5. Run the `hasura metadata apply` command again (a sketch of these steps follows below)
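A minimal sketch of the steps above with `kubectl`; the manifest file, deployment name, endpoint, and secret are assumptions for illustration:

```sh
# Step 2: apply a manifest whose hasura-cli container has a low memory limit (e.g. 256Mi)
kubectl apply -f hasura-cli.yaml   # hypothetical manifest file

# Step 3: run the apply from inside that container
kubectl exec deploy/hasura-cli -- hasura metadata apply \
  --endpoint http://hasura:8080 --admin-secret "$HASURA_ADMIN_SECRET"

# The container is OOMKilled; exit code 137 indicates the kernel's OOM kill
kubectl get pods -l app=hasura-cli

# Steps 4-5: raise the limit to 1Gi in the manifest and repeat
```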
Screenshots or Screencast
Please provide any traces or logs that could help here.
Any possible solutions/workarounds you're aware of?
Don't set a memory `limit` at all.

Keywords
memory, apply, metadata