[BUG] agent Dockerfile started crashing #23048
Comments
I believe we are seeing similar issues using the Datadog Agent as a sidecar on Fargate. The Datadog Agent exits with a 137 status code. We are also using the Ruby dd-trace library; hopefully that's helpful in narrowing down the issue.
I'm having this issue as well. I'm pinning the container to a specific version.
Edit: never mind, we just saw another crash, so the suggested fix did not help.
Our experience is that pinning the agent version keeps the ECS task from failing, but we can still see the issue between the containers in the logs.
Not able to use the latest image. Currently using a pinned version.
We have the same problem with the "latest" version running in an AWS ECS Fargate sidecar container. We needed to pin to 7.50.3 to stop it crashing our applications.
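For anyone automating task definitions, here is a minimal sketch of the pinning workaround described above, using boto3. The family name, role ARN, and secret ARN are placeholders, not values from this issue, and marking the sidecar non-essential is an extra assumption so an agent crash cannot take down the whole task:

```python
# Sketch: register an ECS task definition with the Datadog Agent
# sidecar pinned to a fixed tag instead of the mutable :latest.
# 7.50.3 is the version commenters above report as stable.
import boto3

ecs = boto3.client("ecs")

datadog_sidecar = {
    "name": "datadog-agent",
    "image": "public.ecr.aws/datadog/agent:7.50.3",  # pinned, not :latest
    "essential": False,  # assumption: let the task survive an agent crash
    "environment": [
        {"name": "ECS_FARGATE", "value": "true"},
    ],
    "secrets": [
        # Placeholder ARN; store the API key in Secrets Manager.
        {"name": "DD_API_KEY",
         "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:dd-api-key"},
    ],
}

ecs.register_task_definition(
    family="my-app",  # placeholder family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[datadog_sidecar],  # plus your application container(s)
)
```

Pinning trades automatic updates for reproducible deploys: with a mutable tag like :latest, a newly pushed image can change behavior between two otherwise identical deploys, which matches what this thread describes.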
Hi @modosc @JoshuaSchlichting @LukaszBancarz @praveensudharsan @bjclark13, thanks for creating and commenting on this issue. However, it seems that several different issues are happening here.
@clamoriniere I've got support case 1566759 open currently.
Support case 1582012 for me.
It seems connected with the update of s6-overlay: in /etc/s6/init/init-stage1 there is an additional block that fails.
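One way to check the s6-overlay theory is to extract that init script from two image tags and diff them. A sketch, assuming Docker is installed locally; the path is the one quoted in the comment above, and the tags are the ones mentioned in this thread:

```python
# Sketch: dump /etc/s6/init/init-stage1 from two published agent tags
# so the scripts can be diffed for the "additional block" noted above.
import subprocess

for tag in ("7.50.3", "latest"):  # tags mentioned in this thread
    script = subprocess.run(
        ["docker", "run", "--rm", "--entrypoint", "cat",
         f"public.ecr.aws/datadog/agent:{tag}",
         "/etc/s6/init/init-stage1"],
        capture_output=True, text=True, check=True,
    ).stdout
    with open(f"init-stage1.{tag}.sh", "w") as f:
        f.write(script)

# Then compare, e.g.: diff init-stage1.7.50.3.sh init-stage1.latest.sh
```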
Agent Environment
We're using `public.ecr.aws/datadog/agent:latest` in a sidecar container and deploying to ECS. We also have the following configuration set up via environment variables:
Describe what happened:
This has worked fine for at least six months. Since ~2024-02-18 we've started seeing these failures when attempting to deploy:
This causes our deploys to fail. Re-running usually resolves it, although today the failures have happened more and more frequently.
Here's a full log entry:
Describe what you expected:
This shouldn't happen.
Steps to reproduce the issue:
See above; we cannot reliably trigger this.
Additional environment details (Operating System, Cloud provider, etc):
AWS
Linux
We're also using the Lambda log forwarder.
Is there more debugging we can enable on the `dd` side to understand what's going on? Was a change to this Docker image pushed out?
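On the debugging question: one documented knob is the agent's log level, settable through the same container-definition environment variables described above. A minimal sketch; `DD_LOG_LEVEL` is a standard Datadog Agent setting:

```python
# Sketch: extra environment entries for the Datadog Agent sidecar's
# container definition to get more verbose agent-side logs.
datadog_agent_environment = [
    {"name": "ECS_FARGATE", "value": "true"},    # standard for Fargate sidecars
    {"name": "DD_LOG_LEVEL", "value": "debug"},  # default is "info"
]
```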