diff --git a/README.md b/README.md
index fe1ed1c..5ce1b91 100755
--- a/README.md
+++ b/README.md
@@ -57,19 +57,7 @@ For Slurm, please see the [SlurmClusterManager.jl](https://github.com/JuliaParal
 
 ### Using `LocalAffinityManager` (for pinning local workers to specific cores)
 
-- Linux only feature.
-- Requires the Linux `taskset` command to be installed.
-- Usage : `addprocs(LocalAffinityManager(;np=CPU_CORES, mode::AffinityMode=BALANCED, affinities=[]); kwargs...)`.
-
-where
-
-- `np` is the number of workers to be started.
-- `affinities`, if specified, is a list of CPU IDs. As many workers as entries in `affinities` are launched. Each worker is pinned
-to the specified CPU ID.
-- `mode` (used only when `affinities` is not specified, can be either `COMPACT` or `BALANCED`) - `COMPACT` results in the requested number
-of workers pinned to cores in increasing order, For example, worker1 => CPU0, worker2 => CPU1 and so on. `BALANCED` tries to spread
-the workers. Useful when we have multiple CPU sockets, with each socket having multiple cores. A `BALANCED` mode results in workers
-spread across CPU sockets. Default is `BALANCED`.
+See [`docs/local_affinity.md`](docs/local_affinity.md).
 
 ### Using `ElasticManager` (dynamically adding workers to a cluster)
 
diff --git a/docs/local_affinity.md b/docs/local_affinity.md
new file mode 100644
index 0000000..6c6bf07
--- /dev/null
+++ b/docs/local_affinity.md
@@ -0,0 +1,23 @@
+# Using `LocalAffinityManager` (for pinning local workers to specific cores)
+
+- Linux-only feature.
+- Requires the Linux `taskset` command to be installed.
+- Usage: `addprocs(LocalAffinityManager(;np=CPU_CORES, mode::AffinityMode=BALANCED, affinities=[]); kwargs...)`.
+
+where
+
+- `np` is the number of workers to be started.
+- `affinities`, if specified, is a list of CPU IDs. As many workers as there are entries in `affinities` are launched, and each worker is pinned to the specified CPU ID.
+- `mode` (used only when `affinities` is not specified; either `COMPACT` or `BALANCED`): `COMPACT` pins the requested number of workers to cores in increasing order, for example worker 1 => CPU 0, worker 2 => CPU 1, and so on. `BALANCED` tries to spread the workers across CPU sockets, which is useful on machines with multiple sockets, each having multiple cores. The default is `BALANCED`. See the usage sketch below.
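+
+A minimal usage sketch (the worker count and CPU IDs below are illustrative, not defaults):
+
+```julia
+using Distributed, ClusterManagers
+
+# Pin 4 workers to cores in increasing order (worker 1 => CPU 0, worker 2 => CPU 1, ...).
+addprocs(LocalAffinityManager(; np=4, mode=COMPACT))
+
+# Or pin one worker each to CPUs 0 and 2 explicitly.
+addprocs(LocalAffinityManager(; affinities=[0, 2]))
+```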