Secure Jsonnet Pool context deadline exceeded #3923
Comments
So this 1s seems hardcoded: https://github.com/ory/x/pull/755/files#diff-e8a7ac45033a2903ea00a56a5e7957e57ea7b60a3e9e7d4f23666f375fe2eec5R185 Now why the machine takes 1s to process it is kind of the question. Looking at my logs from the previous version, it doesn't take 1s to process the whole thing:
I think it would be nice to have the option to disable the pool and revert to the older in-process VM. @alnr
It's weird: before, it was still spawning a new process per execution, so in theory a pool should only improve performance, and it rules out the idea that the platform is somehow unable to spawn processes.
I think option 3 is likely, since this was happening in staging without any load (only one user trying to register). I am not sure how to get logs from this worker process, though.
Not sure the process pool is actually active in the OSS release. Are you seeing persistent `context deadline exceeded` errors?
We've been running the process pool in production for a while without any regressions but with 100x better performance. |
Are you running your process with …?
I reverted, but I will take a look. I am sure it is active, since the "spawn a process every time" option doesn't have a timeout, AFAIK.
Interesting. It was very easy to trigger for us, but it's a very small machine since it's a staging environment. What I am not sure about is why it would work with a spawned thread but hang for 1s with the pool. Unless before it was running in-process?
No, but it's a 0.25 CPU machine on …
Preflight checklist
Ory Network Project
No response
Describe the bug
We are trying to upgrade from commit https://github.com/ory/kratos/tree/0c5ea9bf735a67ef35011ba41d7f98afc6f8e118 to v1.1.
We get this error:
The timeout is weird because it fires after a bit more than one second, and our jsonnet is very simple:
It didn't fail locally on macOS, but it failed on our cloud provider, which suggests something weird is going on with the new pool/VM system introduced in caido/kratos@master...ory:kratos:master#diff-ab967ab1a2f3a1b769106eeb7bfe892ef0e81d1d27811fa15be08e6749feee1fR58-R77
The cloud machine is small, but I would not expect the jsonnet to take 1s to resolve. So maybe the timeout is warranted, but somehow the pool is stuck.
Reproducing the bug
function(ctx) { identity: ctx.identity }
Relevant log output
Relevant configuration
No response
Version
v1.1
On which operating system are you observing this issue?
Linux
In which environment are you deploying?
Docker
Additional Context
No response