Max execution time of request on http server #5183
I used both Swoole and OpenSwoole, and can confirm that both have the same behavior regarding max_execution_time. +1 on this request; I believe this should be handled in Swoole.

---
The problem is that you may have to clean up connections and other resources. If you simply stop executing code at the next context switch after the timeout fires, you end up with memory leaks and half-finished code paths. In other words, the $killRequest idea is good, but your code still needs to handle the cleanup.

---
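To make the cleanup point concrete, here is a minimal sketch of the same cooperative-cancellation pattern, in Python rather than PHP purely for illustration (`handle_request` and all timings are invented): a timer flips a flag, the worker checks the flag between steps, and a `finally` block guarantees cleanup on both the timeout path and the success path.

```python
import threading
import time

def handle_request(timeout_s=0.25, step_s=0.1, steps=5):
    """Cooperative cancellation: a timer flips a flag, the worker checks
    the flag between steps, and cleanup runs in `finally` on every path."""
    kill_request = threading.Event()      # plays the role of $killRequest
    timer = threading.Timer(timeout_s, kill_request.set)
    timer.start()
    done = 0
    try:
        for _ in range(steps):
            if kill_request.is_set():     # checked at each "context switch"
                return ("timeout", done)
            time.sleep(step_s)            # simulated work for one step
            done += 1
        return ("ok", done)
    finally:
        timer.cancel()                    # harmless if the timer already fired
        # real code would also close connections, release locks, etc. here
```

The point of the `finally` is that the cleanup runs whether the request finishes normally or is abandoned at a flag check, which is exactly what the plain `if ($killRequest) return;` version does not give you.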
Execute your code in a coroutine:

```php
$http->on(
    Constant::EVENT_REQUEST,
    function (Request $request, Response $response) {
        echo 'Http request started' . "\n";
        $killRequest = false;
        $timerId = Timer::after(5000, function () use ($response, &$killRequest) {
            $response->status(408);
            $response->end('Timeout');
            echo 'Timeout has been sent' . "\n";
            $killRequest = true;
        });
        go(function () use (&$killRequest, $timerId) {
            echo 'Step 1' . "\n";
            sleep(2); // code that takes 2s
            if ($killRequest) return;
            echo 'Step 2' . "\n";
            sleep(2); // code that takes 2s
            if ($killRequest) return;
            // ...
            Timer::clear($timerId);
        });
        echo 'We reached the end' . "\n";
        $response->status(200);
        $response->end('The end');
    }
);
```

---
You can't really do that: you might have 10,001 more levels/coroutines/context switches in your request function, and you would have to keep track of the flag and check it in every one of them.

---
If you are really worried about CPU and memory usage, kill the HTTP request worker process and have it re-create itself automatically:

```php
$timerId = Timer::after(5000, function () use ($response, &$killRequest) {
    $response->status(408);
    $response->end('Timeout');
    echo 'Timeout has been sent' . "\n";
    exec('kill -9 ' . getmypid());
    //$killRequest = true;
});
```

And if there is too much delay, the application design itself is probably flawed:

```php
pcntl_async_signals(TRUE);
try {
    pcntl_alarm(1);
    pcntl_signal(SIGALRM, function () { throw new Exception; });
    your_worried_function();
} catch (Exception $e) {
    echo "catch\n";
    return;
}
```

---
I know this has been asked a few times already:
#4594
#3078
How can we implement a real max_execution_time that will free the worker?
One proposed solution was this:
So indeed the client will receive a 408 timeout.
But the worker is still working and doing things, right? (things that will never reach the end user)
So another implementation could be:
The print will be
OK, that's fine, but that's just impossible to do, because we are inside classes doing stuff.
I'm concerned about CPU usage, RAM usage, and especially about the workers.
From my test with only 1 worker: if I don't use the database proxy classes, the worker can only handle one request at a time, meaning that while the first request is processing, the second is waiting.
I did the same test with the database proxy classes, and the second request doesn't wait.
But I'm still concerned that, if somehow all the workers are blocked, the server can't receive any new requests.
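My understanding of the one-worker observation above can be sketched with Python's asyncio standing in for Swoole's event loop (the handler names are invented for illustration): a handler that makes a blocking call stalls every other request on the same loop, while one that yields control does not.

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.2)              # blocking call: the whole event loop stalls
    return "slow"

async def cooperative_handler():
    await asyncio.sleep(0.2)     # yields to the loop: other requests proceed
    return "slow"

async def fast_handler():
    return "fast"

async def time_fast_request(slow_handler):
    # Schedule a slow request first, then a fast one, and measure how long
    # the fast request has to wait before it completes.
    t0 = time.monotonic()
    slow_task = asyncio.ensure_future(slow_handler())
    fast_task = asyncio.ensure_future(fast_handler())
    await fast_task
    waited = time.monotonic() - t0
    await slow_task
    return waited

blocked = asyncio.run(time_fast_request(blocking_handler))   # fast waits ~0.2 s
free = asyncio.run(time_fast_request(cooperative_handler))   # fast barely waits
```

This mirrors what the database proxy classes appear to change: they turn blocking calls into ones that yield back to the scheduler, so the second request no longer waits behind the first.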
Is this an option that could be implemented in Swoole internals? What do you think?
(OpenSwoole implemented it, but I'm not sure it really works: openswoole/ext-openswoole#136)