NUMA memory replication for NNUE weights #5285
Conversation
Some quite extensive testing here, showing a very big impact on the relevant systems I can access. I use a 1-minute search from the startpos as a reference and test in 3 configurations: one in which the default system NUMA policies for memory allocation are used (i.e. if you execute as a user from the command line), and two in which we execute SF under NUMA control for some added understanding. First is a dual-socket (8 NUMA domain) system, 256 threads, measured average nps (18 runs, 40GB hash, 60s search):
So a 2.70x speedup for the average user (default setup master vs default setup numa), and more or less equivalent to what an experienced user with numactl would achieve. The gain over explicit numactl is small (2%, might be noise). Second is a quad-socket (4 NUMA domain), 288-thread system, measured average nps (10 runs, 40GB hash, 60s search):
Here the speedup is 1.86x for the average user, and 1.13x compared to a user that explicitly uses numactl. So, these are impressive results. They might not translate to all systems, though; most likely the system needs to be memory-bandwidth limited (i.e. many cores, multiple sockets).
So, some work on the PR is still needed.
Force-pushed from 6bd321f to 91a511d.
Force-pushed from a841c27 to c6ca84d.
If I understand correctly, this duplicates the instances of the global weights for each NUMA node. Does it also make sure the accumulator cache refreshTable (stored in the Worker class) is allocated on the same NUMA node as the corresponding thread accessing that refreshTable? The reason I ask is that it looks like the TCEC slowdown started after the Finny caches were added.
Prior to this PR, we had quite some discussion on Discord. The slowdown kicks in when the traffic between the NUMA domains becomes unsustainable, which depends strongly on the hardware. We did some measurements before this patch on a dual-socket Epyc, with a 'proxy' NUMA SF: namely, running a single multithreaded SF on all cores (measuring the nodes searched in 100s), and running multiple SF instances (at proportionally reduced threads and hash), each bound to a subset of cores (using taskset), and summing their independently searched node counts (replicated and distributed in the table below). As you can see, this slowdown has been ongoing for a while; basically all net arch changes increased the memory traffic. The accumulator caches did as well, and maybe that tipped the balance on the TCEC hardware, but it is not the sole cause.
see also https://discord.com/channels/435943710472011776/813919248455827515/1240909395433619497
Some more discussion on Discord, plus some VTune measurements showing an obvious improvement. (sqrmax) Good improvement in NUMA BW usage.
Fixed some CI issues and made some code style more consistent.
On hardware where this patch is really beneficial: passed 60+1 @ 256t 16000MB hash.
Normal case, where the patch should not kick in, passes non-regression: passed SMP STC, passed STC. However, some issues compiling (profile-building) the code showed up. The latter need to be understood before merging.
Great job, congrats Sopel!
Final touches made. Android excluded. NumaReplicatedBase no longer requires storing a unique ID. Made the condition for … Leaving this commit separate for now for easier review. Will squash everything once there's approval.
Looks good to me. Please fix the include and squash, with a commit message ready for merging that includes the testing results. I'll check a bit later today whether Android and the corner case we identified above are working as advertised, but otherwise this seems ready for a merge.
Force-pushed from 245fb86 to e837721.
Regarding this: I can't see the actual error, so there's no way to really fix it, whatever it is. IWYU wants me to include a non-standard header; what to do?
Force-pushed from a027ef9 to f525045.
does including …?
it is included
Force-pushed from a9799d6 to 842d5ca.
If it keeps complaining about something silly, keep the include with …
on discord: …
Force-pushed from d4f8fa2 to 0047103.
Sorry for making a bit of a mess; I didn't realize I was pushing to the PR branch for testing.
The bug was caused by empty NUMA nodes. Instead of handling them (I tried it, it gets messy, and I still couldn't hunt down some issues), I decided it would be better to completely remove empty NUMA nodes, as we don't guarantee the indices match anyway. I can't replicate the issue any longer. It used to std::exit due to inconsistent state whenever taskset was used to run on a specific node, causing node 0 to appear empty.
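The fix described above can be sketched as follows. This is a hypothetical illustration (the type and function names are not from the actual patch): a topology is a list of processor-index sets, one per node; after intersecting each node with the process affinity mask (e.g. as restricted by taskset), any node that ends up empty is dropped entirely rather than kept as a placeholder, so later code never sees an empty node.

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Hypothetical representation: one set of processor indices per NUMA node.
using CpuSet   = std::set<std::size_t>;
using Topology = std::vector<CpuSet>;

// Keep only the processors visible under the affinity mask; discard nodes
// that become empty. Note the surviving node indices shift, which is fine
// because index correspondence with the OS node numbering is not guaranteed.
Topology remove_empty_nodes(const Topology& nodes, const CpuSet& affinity) {
    Topology result;
    for (const CpuSet& node : nodes) {
        CpuSet visible;
        for (std::size_t cpu : node)
            if (affinity.count(cpu))
                visible.insert(cpu);
        if (!visible.empty())
            result.push_back(visible);
    }
    return result;
}
```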
Force-pushed from 0047103 to 5325384.
Force-pushed from e43a34d to a1086b7.
Force-pushed from a1086b7 to 59ae284.
Allow for NUMA memory replication for NNUE weights. Bind threads to ensure execution on a specific NUMA node.

This patch introduces NUMA memory replication, currently only utilized for the NNUE weights. Along with it comes all machinery required to identify NUMA nodes and bind threads to specific processors/nodes. It also comes with small changes to Thread and ThreadPool to allow easier execution of custom functions on the designated thread. The old thread binding (WinProcGroup) machinery is removed because it's incompatible with this patch. Small changes to unrelated parts of the code were made to ensure correctness, such as some classes being made unmovable and raw pointers being replaced with unique_ptr, etc.

Windows 7 and Windows 10 are partially supported. Windows 11 is fully supported. Linux is fully supported, with explicit exclusion of Android. No additional dependencies.

-----------------

A new UCI option `NumaPolicy` is introduced. It can take the following values:

```
system     - gathers NUMA node information from the system (lscpu or the
             windows api); binds each thread to a single NUMA node
none       - assumes there is 1 NUMA node, never binds threads
auto       - the default value; depends on the number of set threads and
             NUMA nodes, will only enable binding on multinode systems and
             when the number of threads reaches a threshold (dependent on
             node size and count)
[[custom]] - ':'-separated numa nodes
             ','-separated cpu indices
             supports "first-last" range syntax for cpu indices,
             for example '0-15,32-47:16-31,48-63'
```

Setting `NumaPolicy` forces recreation of the threads in the ThreadPool, which in turn forces the recreation of the TT.

The threads are distributed among NUMA nodes in a round-robin fashion based on fill percentage (i.e. it will strive to fill all NUMA nodes evenly). Threads are bound to NUMA nodes, not specific processors, because that's our only requirement and the OS can schedule them better.
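The fill-percentage round-robin distribution described above can be sketched as below. This is an illustrative reimplementation, not the code from the patch: each new thread is assigned to the node whose assigned-threads-to-node-size ratio is currently lowest, which evens out fill even when nodes differ in size.

```cpp
#include <cstddef>
#include <vector>

// Assign `thread_count` threads to NUMA nodes of the given sizes so that
// fill percentage (assigned / size) stays as even as possible.
// Returns, for each thread, the index of the node it was assigned to.
std::vector<std::size_t> distribute_threads(std::size_t thread_count,
                                            const std::vector<std::size_t>& node_sizes) {
    std::vector<std::size_t> assigned(node_sizes.size(), 0);
    std::vector<std::size_t> node_of_thread;
    for (std::size_t t = 0; t < thread_count; ++t) {
        std::size_t best = 0;
        for (std::size_t n = 1; n < node_sizes.size(); ++n)
            // assigned[n]/node_sizes[n] < assigned[best]/node_sizes[best],
            // compared via cross-multiplication to stay in integers
            if (assigned[n] * node_sizes[best] < assigned[best] * node_sizes[n])
                best = n;
        assigned[best]++;
        node_of_thread.push_back(best);
    }
    return node_of_thread;
}
```

For example, with two equal 8-processor nodes, four threads alternate between the nodes; with nodes of sizes 8 and 4, the larger node receives proportionally more threads.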
Special care is taken that maximum memory usage on systems that do not require memory replication stays as before, that is, unnecessary copies are avoided.

On Linux the process's processor affinity is respected. This means that if you, for example, use taskset to restrict Stockfish to a single NUMA node, then the `system` and `auto` settings will only see a single NUMA node (more precisely, the processors included in the current affinity mask) and act accordingly.

-----------------

We can't ensure that a memory allocation takes place on a given NUMA node without using libnuma on Linux, or appropriate custom allocators on Windows (https://learn.microsoft.com/en-us/windows/win32/memory/allocating-memory-from-a-numa-node), so to avoid complications the current implementation relies on the first-touch policy. Due to this we also rely on the memory allocator to give us a new chunk of untouched memory from the system. This appears to work reliably on Linux, but results may vary.

MacOS is not supported, because AFAIK it's not affected, and an implementation would be problematic anyway.

Windows is supported since Windows 7 (https://learn.microsoft.com/en-us/windows/win32/api/processtopologyapi/nf-processtopologyapi-setthreadgroupaffinity). Until Windows 11/Server 2022, NUMA nodes are split such that they cannot span processor groups, because before Windows 11/Server 2022 it's not possible to set thread affinity spanning processor groups. The splitting is done manually in some cases (required after Windows 10 Build 20348). Since Windows 11/Server 2022 we can set affinities spanning processor groups, so this splitting is not done and the behaviour is pretty much like on Linux.

Linux is supported **without** a libnuma requirement; `lscpu` is expected.
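The first-touch reliance described above can be sketched as follows. This is a hedged illustration, not the patch's actual code: under Linux's default first-touch policy, a page is physically placed on the NUMA node of the first thread that writes it, so replicating the weights per node amounts to having a thread bound to the target node perform the first write into freshly allocated, untouched memory. The `Weights` type and `replicate_on_node` name are hypothetical, and the binding step is only indicated in a comment because it is platform-specific.

```cpp
#include <memory>
#include <thread>
#include <vector>

// Stand-in for the replicated NNUE weights (hypothetical type).
struct Weights {
    std::vector<float> data;
};

// Produce a per-node copy of `source`. The copying thread performs the
// first touch of the new allocation, so under the first-touch policy the
// pages land on whichever node that thread is bound to.
std::unique_ptr<Weights> replicate_on_node(const Weights& source /*, NodeId node */) {
    std::unique_ptr<Weights> copy;
    std::thread t([&] {
        // Real code would first bind this thread to the target node
        // (e.g. via sched_setaffinity on Linux or SetThreadGroupAffinity
        // on Windows) so that the first touch below uses local memory.
        copy = std::make_unique<Weights>();
        copy->data = source.data;  // first touch of the fresh allocation
    });
    t.join();
    return copy;
}
```

Note that this only works if the allocator hands back untouched pages from the system, which is exactly the caveat mentioned above.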
-----------------

Passed 60+1 @ 256t 16000MB hash:
https://tests.stockfishchess.org/tests/view/6654e443a86388d5e27db0d8
```
LLR: 2.95 (-2.94,2.94) <0.00,10.00>
Total: 278 W: 110 L: 29 D: 139
Ptnml(0-2): 0, 1, 56, 82, 0
```

Passed SMP STC:
https://tests.stockfishchess.org/tests/view/6654fc74a86388d5e27db1cd
```
LLR: 2.95 (-2.94,2.94) <-1.75,0.25>
Total: 67152 W: 17354 L: 17177 D: 32621
Ptnml(0-2): 64, 7428, 18408, 7619, 57
```

Passed STC:
https://tests.stockfishchess.org/tests/view/6654fb27a86388d5e27db15c
```
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 131648 W: 34155 L: 34045 D: 63448
Ptnml(0-2): 426, 13878, 37096, 14008, 416
```

fixes official-stockfish#5253
closes official-stockfish#5285

No functional change
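The `[[custom]]` NumaPolicy syntax described earlier (':'-separated nodes, ','-separated cpu entries, "first-last" ranges) can be parsed as sketched below. This is an illustration under those stated rules, not the parser actually used in the patch:

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Parse a custom NumaPolicy string such as "0-15,32-47:16-31,48-63"
// into a list of nodes, each a list of cpu indices.
std::vector<std::vector<std::size_t>> parse_numa_policy(const std::string& s) {
    std::vector<std::vector<std::size_t>> nodes;
    std::istringstream nodeStream(s);
    std::string nodePart;
    while (std::getline(nodeStream, nodePart, ':')) {  // ':' separates nodes
        std::vector<std::size_t> cpus;
        std::istringstream cpuStream(nodePart);
        std::string entry;
        while (std::getline(cpuStream, entry, ',')) {  // ',' separates entries
            std::size_t dash = entry.find('-');
            if (dash == std::string::npos)
                cpus.push_back(std::stoul(entry));     // single cpu index
            else {
                // inclusive "first-last" range
                std::size_t first = std::stoul(entry.substr(0, dash));
                std::size_t last  = std::stoul(entry.substr(dash + 1));
                for (std::size_t c = first; c <= last; ++c)
                    cpus.push_back(c);
            }
        }
        nodes.push_back(cpus);
    }
    return nodes;
}
```

With the example from the option description, '0-15,32-47:16-31,48-63' yields two nodes of 32 cpu indices each.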
Official release version of Stockfish 17. Bench: 1484730

---

Stockfish 17

Today we have the pleasure to announce a new major release of Stockfish. As always, you can freely download it at https://stockfishchess.org/download and use it in the GUI of your choice. Don’t forget to join our Discord server[1] to get in touch with the community of developers and users of the project!

*Quality of chess play*

In tests against Stockfish 16, this release brings an Elo gain of up to 46 points[2] and wins up to 4.5 times more game pairs[3] than it loses. In practice, high-quality moves are now found in less time, with a user upgrading from Stockfish 14 being able to analyze games at least 6 times[4] faster with Stockfish 17 while maintaining roughly the same quality. During this development period, Stockfish won its 9th consecutive first place in the main league of the Top Chess Engine Championship (TCEC)[5], and the 24th consecutive first place in the main events (bullet, blitz, and rapid) of the Computer Chess Championship (CCC)[6].

*Update highlights*

*Improved engine lines*

This release introduces principal variations (PVs) that are more informative for mate and decisive tablebase (TB) scores. In both cases, the PV will contain all moves up to checkmate. For mate scores, the PV shown is the best variation known to the engine at that point, while for tablebase wins, it follows, based on the TB, a sequence of moves that preserves the game outcome to checkmate.

*NUMA performance optimization*

For high-end computers with multiple CPUs (typically a dual-socket architecture with 100+ cores), this release automatically improves performance with a `NumaPolicy` setting that optimizes non-uniform memory access (NUMA). Although typical consumer hardware will not benefit, speedups of up to 2.8x[7] have been measured.

*Shoutouts*

*ChessDB*

During the past 1.5 years, hundreds of cores have been continuously running Stockfish to grow a database of analyzed positions. This chess cloud database[8] now contains well over 45 billion positions, providing excellent coverage of all openings and commonly played lines. This database is already integrated into GUIs such as En Croissant[9] and Nibbler[10], which access it through the public API.

*Leela Chess Zero*

Generally considered to be the strongest GPU engine, it continues to provide open data which is essential for training our NNUE networks. They released version 0.31.1[11] of their engine a few weeks ago, check it out!

*Website redesign*

Our website has undergone a redesign in recent months, most notably our home page[12], now featuring a darker color scheme and a more modern aesthetic, while still maintaining its core identity. We hope you'll like it as much as we do!

*Thank you*

The Stockfish project builds on a thriving community of enthusiasts (thanks everybody!) who contribute their expertise, time, and resources to build a free and open-source chess engine that is robust, widely available, and very strong. We would like to express our gratitude for the 11k stars[13] that light up our GitHub project! Thank you for your support and encouragement – your recognition means a lot to us.

We invite our chess fans to join the Fishtest testing framework[14] to contribute the compute resources needed for development. Programmers can contribute to the project either directly to Stockfish[15] (C++), to Fishtest[16] (HTML, CSS, JavaScript, and Python), to our trainer nnue-pytorch[17] (C++ and Python), or to our website[18] (HTML, CSS/SCSS, and JavaScript).
The Stockfish team

[1] https://discord.gg/GWDRS3kU6R
[2] https://tests.stockfishchess.org/tests/view/66d738ba9de3e7f9b33d159a
[3] https://tests.stockfishchess.org/tests/view/66d738f39de3e7f9b33d15a0
[4] https://github.com/official-stockfish/Stockfish/wiki/Useful-data#equivalent-time-odds-and-normalized-game-pair-elo
[5] https://en.wikipedia.org/wiki/Stockfish_(chess)#Top_Chess_Engine_Championship
[6] https://en.wikipedia.org/wiki/Stockfish_(chess)#Chess.com_Computer_Chess_Championship
[7] official-stockfish#5285
[8] https://chessdb.cn/queryc_en/
[9] https://encroissant.org/
[10] https://github.com/rooklift/nibbler
[11] https://github.com/LeelaChessZero/lc0/releases/tag/v0.31.1
[12] https://stockfishchess.org/
[13] https://github.com/official-stockfish/Stockfish/stargazers
[14] https://github.com/official-stockfish/fishtest/wiki/Running-the-worker
[15] https://github.com/official-stockfish/Stockfish
[16] https://github.com/official-stockfish/fishtest
[17] https://github.com/official-stockfish/nnue-pytorch
[18] https://github.com/official-stockfish/stockfish-web
Thanks to @Disservin for making this codebase actually nice to work on. I did not swear once.