
Extension not working in AWS ECS environment #15

Open
juliolugo96 opened this issue Feb 1, 2023 · 3 comments

juliolugo96 commented Feb 1, 2023

Hi, @br41nslug!

Your extension is awesome and it's working perfectly fine for me locally (before and after dockerization). However, right after I deploy it to an AWS Fargate-managed ECS cluster, the task launches, but after a few minutes the container maxes out its CPU and basically gets stuck.

Here's the environment configuration:

ACCESS_TOKEN_TTL=60m
[email protected]
ADMIN_PASSWORD=password123
ASSETS_CACHE_TTL=30m
CACHE_ENABLED=false
CACHE_STORE=memory
CORS_ALLOWED_HEADERS=Content-Type,Authorization
CORS_CREDENTIALS=true
CORS_ENABLED=true
CORS_EXPOSED_HEADERS=Content-Range
CORS_MAX_AGE=18000
CORS_METHODS=GET,POST,PATCH,DELETE
CORS_ORIGIN=true
DB_CLIENT=pg
DB_DATABASE=MY-DB
DB_HOST=MY-DB-HOST
DB_PASSWORD=MY-PASSWORD!
DB_PORT=5432
DB_USER=MY-DB-USER
[email protected]
EMAIL_SENDMAIL_NEW_LINE=unix
EMAIL_SENDMAIL_PATH=/usr/sbin/sendmail
EMAIL_TRANSPORT=sendmail
EXTENSIONS_AUTO_RELOAD=false
EXTENSIONS_PATH=./extensions
HOST=0.0.0.0
KEY=SOME-RANDOM-KEY
PORT=8056
PUBLIC_URL=MY-URL
REFRESH_TOKEN_COOKIE_DOMAIN=MY-URL
REFRESH_TOKEN_COOKIE_NAME=directus_refresh_token
REFRESH_TOKEN_COOKIE_SAME_SITE=none
REFRESH_TOKEN_COOKIE_SECURE=true
REFRESH_TOKEN_TTL=7d
SECRET=SOME-SECRET
STORAGE_AMAZON_BUCKET=MY-BUCKET
STORAGE_AMAZON_DRIVER=s3
STORAGE_AMAZON_KEY=SOME-KEY
STORAGE_AMAZON_REGION=eu-west-2
STORAGE_AMAZON_ROOT=
STORAGE_AMAZON_SECRET=SOME-SECRET
STORAGE_LOCATIONS=amazon

And here's the Dockerfile:

# NOTE: Testing Only. DO NOT use this in production

ARG NODE_VERSION=16-alpine

FROM node:${NODE_VERSION}

WORKDIR /directus

COPY . .

RUN rm -rf node_modules

RUN apk add --update python3 make g++ \
    && rm -rf /var/cache/apk/*

RUN npm install

WORKDIR /directus/api

CMD ["sh", "-c", "npm run start"]
EXPOSE 8055/tcp
EXPOSE 8056/tcp
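
For reference, a minimal sketch of building and running this image locally, assuming an .env file that holds the variables listed above (the image tag and file name are placeholders):

docker build -t directus-websocket-test .
docker run --rm -p 8055:8055 -p 8056:8056 --env-file .env directus-websocket-test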

I tried increasing the task size to 2 vCPUs and 8 GB of memory. Still unresponsive.

Removing the package, with the same configuration shown above, made it work again, even after reducing memory and vCPUs to the minimum.

Do you have any idea why it's failing? Is there any specific configuration that's necessary in order to run this extension inside a serverless environment like AWS ECS Fargate? Thanks in advance, pal

br41nslug (Owner) commented Feb 1, 2023

I do not personally use AWS, but in any hosting setup you need to configure/allow websocket proxying, similar to http/https. In a regular NGINX or Apache setup those proxy configurations are straightforward, but for AWS you'll have to consult their docs: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api.html
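
For comparison, a minimal sketch of an NGINX location block that allows websocket upgrades looks something like this; the /websocket path and the upstream address/port are placeholders and should point at wherever Directus and the extension are actually listening:

location /websocket {
    # Forward to the Directus container (placeholder address/port)
    proxy_pass http://127.0.0.1:8055;
    # Required for the websocket upgrade handshake
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}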

Sorry, must have been sleepy and responded hastily this morning 😇 Can't say I'm sure why the extension would cause the CPU to spin up like that 🤔 Do you have any logs that could shed light on the situation?

juliolugo96 (Author) commented:

Hi, man! No worries 😄. AWS CloudWatch didn't show much in the logs, just that my container was dropped:

 GET /admin/login 200 4ms
 GET /admin/login 200 4ms
 GET /admin/login 200 4ms
 GET /admin/login 200 4ms
 GET /admin/login 200 4ms

# No more health checks for 6 minutes, then the container is dropped

npm ERR! path /directus
npm ERR! command failed
npm ERR! signal SIGINT
npm ERR! command sh -c -- npx directus start

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2023-02-01T21_04_18_424Z-debug-0.log

Reviewing the target groups, it seems to work OK for a couple of minutes, then it basically drops everything. When I checked the CloudWatch dashboard, I found CPU and memory peaks.
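
For reference, this is roughly how the container logs can be pulled from CloudWatch with the AWS CLI (v2); the log group name is a placeholder for whatever the ECS task definition is configured with:

aws logs tail /ecs/directus-task --follow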

br41nslug (Owner) commented:

Could you try setting LOG_LEVEL=trace for the container? Perhaps that will log a bit more in depth.
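
For example, adding this line to the environment configuration shown above should be enough, assuming the container picks it up like the other settings:

LOG_LEVEL=trace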
