LibCore+RS: Make RequestServer (spin) faster #24265
Conversation
This will invoke the function F only if the provided Condition is met, otherwise the execution of the function F will be further deferred.
if (it == cache.end()) {
auto [it, end] = cache.with_locked([&](auto& cache) {
    struct Result {
        decltype(cache.begin()) it;
surely we have a typedef for ::Iterator :thinkies:
fewer words this way?
});

do {
    if (!m_work_queue.is_valid())
what's an "invalid" work queue 🤔
deleted on the other side, it's...weird.
@@ -62,6 +194,14 @@ Messages::RequestServer::ConnectNewClientResponse ConnectionFromClient::connect_
    return IPC::File::adopt_fd(socket_fds[1]);
}

void ConnectionFromClient::enqueue(Work work)
{
    Core::deferred_invoke_if([this, work = move(work)] {
so wait, here we just shuffle the jobs beyond the 256th around in vectors in the event loop until they can be pushed into the work queue? How is that better than allowing the work queue to expand, or storing them on a secondary queue in this object?
e.g. a check to say:
enqueue: if the secondary queue has items, or we fail to enqueue to the work queue, push to the secondary queue with it locked.
worker_main: once the work queue is empty, try to move items from the secondary queue to the work queue, with the secondary queue locked.
we don't have a threadsafe expanding queue, and I don't want to invent one.
I'd rather let the queue grow as well, but no infra
re locked secondary queue, no point in using SSPCQ then, might as well just have a single locked queue.
Previously, sharing a Timer between threads (or just handing its ownership to another thread) led to a crash, as timers are thread-specific. This commit makes it so we can handle timer mutation (i.e. just deletion) in a thread-safe and lock-free manner.
Previously RS handled all the requests in an event loop, leading to issues with connections being started in the middle of other connections being started (and potentially blowing up the stack), and ultimately to requests being delayed by other requests. This commit reworks the way we handle these (specifically starting connections): we first serialise the requests, then perform them concurrently across multiple threads, which yields a significant improvement in loading performance and reliability.
Hello! One or more of the commit messages in this PR do not match the SerenityOS code submission policy, please check the
I don't like where this is going.
See individual commits; perf measurements coming soon ™️