[fs_connector][feat]: Add multithreaded worker pool (thread_pool) #178
Conversation
… fs connector Signed-off-by: Kfir Toledo <[email protected]>
da7dc1e to deae225
orozery left a comment
Nice work!
General comment:
I see very long lines throughout, which are hard to read and don't look great :)
I think we need to add some kind of style linter and enforce it across our whole codebase.
```cpp
std::unique_lock<std::mutex> lock(queue_mutex);

// Reject new tasks if the pool is shutting down
if (stop) throw std::runtime_error("enqueue on stopped ThreadPool");
```
Is there a way to reject without throwing an exception?
Done
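For reference, one way to reject without throwing is to return an empty `std::optional` from the enqueue call. The sketch below is an illustration of that idea, not the PR's actual implementation; the names `ThreadPool` and `try_enqueue` and the `void()` task signature are assumptions for the example:

```cpp
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical sketch: an enqueue that signals rejection through its
// return value instead of throwing when the pool is shutting down.
class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        {
            std::unique_lock<std::mutex> lock(queue_mutex);
            stop = true;
        }
        cv.notify_all();
        for (auto& w : workers) w.join();
    }

    // Returns std::nullopt instead of throwing if the pool is stopped.
    std::optional<std::future<void>> try_enqueue(std::function<void()> task) {
        auto pt = std::make_shared<std::packaged_task<void()>>(std::move(task));
        auto fut = pt->get_future();
        {
            std::unique_lock<std::mutex> lock(queue_mutex);
            if (stop) return std::nullopt;  // reject without an exception
            tasks.emplace([pt] { (*pt)(); });
        }
        cv.notify_one();
        return fut;
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(queue_mutex);
                cv.wait(lock, [this] { return stop || !tasks.empty(); });
                if (stop && tasks.empty()) return;  // drain, then exit
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();
        }
    }

    std::vector<std::thread> workers;
    std::queue<std::function<void()>> tasks;
    std::mutex queue_mutex;
    std::condition_variable cv;
    bool stop = false;
};
```

The caller then checks the optional instead of wrapping the call in a try/catch, which keeps shutdown-time rejection on the normal control path.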
```cpp
    DEBUG_PRINT("IO thread " << i << " attached to preallocated pinned buffer "
                << (t_pinned_buffer.size / (1024 * 1024)) << " MB");
} else {
    std::cerr << "[WARN] IO thread " << i << " has no preallocated pinned buffer\n";
```
Is a warning enough? Shouldn't we fail?
Also, if any thread crashes, shouldn't the entire pool terminate?
It shouldn’t fail, because a missing preallocated buffer only affects performance, not correctness. Other threads may still have buffers, and this thread will allocate its own buffer on first use. I updated the warning comment to reflect that behavior.
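A minimal sketch of that fallback, assuming illustrative names (`PinnedBuffer`, `get_io_buffer`) and a plain heap allocation standing in for a real pinned (page-locked) one; this is not the PR's code, just the lazy-allocation pattern described above:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical sketch: a worker prefers its preallocated slot, and if that
// slot is missing it lazily allocates a private buffer on first use, so
// correctness is preserved and only the first I/O on that thread pays the
// allocation cost.
struct PinnedBuffer {
    std::vector<char> data;  // stand-in for a real pinned allocation
    bool valid() const { return !data.empty(); }
};

PinnedBuffer& get_io_buffer(PinnedBuffer* preallocated, std::size_t fallback_bytes) {
    thread_local PinnedBuffer own;  // one lazily created buffer per thread
    if (preallocated && preallocated->valid())
        return *preallocated;       // fast path: use the preallocated slot
    if (!own.valid()) {
        std::fprintf(stderr,
                     "[WARN] no preallocated pinned buffer; allocating on first use\n");
        own.data.resize(fallback_bytes);  // lazy fallback allocation
    }
    return own;
}
```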
```cpp
// Round-robin CPUs within the NUMA node
int cpu_id = local_cpus[i % local_cpus.size()];
```
Can this cause performance issues if threads are not evenly split across CPUs?
@dannyharnik
This code spreads threads across the NUMA-local CPUs using round-robin assignment (and falls back to all CPUs if none are found). It should keep load balanced, but I can add a TODO to evaluate the impact more deeply and consider relaxing the NUMA restriction if needed.
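A minimal sketch of that policy, with a hypothetical helper name (`pick_cpu`) and the CPU lists passed in explicitly rather than read from the NUMA topology as the PR does:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: spread threads round-robin over the NUMA-local
// CPUs, falling back to all CPUs when no local CPUs are reported.
int pick_cpu(std::size_t thread_idx,
             const std::vector<int>& local_cpus,
             const std::vector<int>& all_cpus) {
    const std::vector<int>& pool = local_cpus.empty() ? all_cpus : local_cpus;
    // Thread i gets the (i mod N)-th CPU of the pool, so the split stays
    // even, up to a difference of one thread per CPU when the thread count
    // is not a multiple of the pool size.
    return pool[thread_idx % pool.size()];
}
```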
I added a linter in this PR: "268dcf1"
Signed-off-by: Kfir Toledo <[email protected]>
This PR introduces the multithreaded worker pool (thread_pool) used by the filesystem offloading connector.
Included in this PR: