Update NNUE architecture to SFNNv6 with larger L1 size of 1536 #4593
Closed
linrock wants to merge 1 commit into official-stockfish:master from linrock:L1-1536-nn-8d69132723e2.nnue
Conversation
Created by training a new net from scratch with L1 size increased from 1024 to 1536. Thanks to Vizvezdenec for the idea of exploring larger net sizes after recent training data improvements.

A new net was first trained with lambda 1.0 and constant LR 8.75e-4. Then a strong net from a later epoch in the training run was chosen for retraining with start-lambda 1.0 and initial LR 4.375e-4 decaying with gamma 0.995. Retraining was performed a total of 3 times, for this 4-step process:

1. 400 epochs, lambda 1.0 on filtered T77+T79 v6 deduplicated data
2. 800 epochs, end-lambda 0.75 on T60T70wIsRightFarseerT60T74T75T76.binpack
3. 800 epochs, end-lambda 0.75 and early-fen-skipping 28 on the master dataset
4. 800 epochs, end-lambda 0.7 and early-fen-skipping 28 on the master dataset

In the training sequence that reached the new nn-8d69132723e2.nnue net, the epochs used for the 3x retraining runs were:

1. epoch 379 trained on T77T79-filter-v6-dd.min.binpack
2. epoch 679 trained on T60T70wIsRightFarseerT60T74T75T76.binpack
3. epoch 799 trained on the master dataset

For training from scratch:

```
python3 easy_train.py \
  --experiment-name new-L1-1536-T77T79-filter-v6dd \
  --training-dataset /data/T77T79-filter-v6-dd.min.binpack \
  --max_epoch 400 \
  --lambda 1.0 \
  --start-from-engine-test-net False \
  --engine-test-branch linrock/Stockfish/L1-1536 \
  --nnue-pytorch-branch linrock/Stockfish/misc-fixes-L1-1536 \
  --tui False \
  --gpus "0," \
  --seed $RANDOM
```

Retraining commands were similar to each other. For the 3rd retraining run:

```
python3 easy_train.py \
  --experiment-name L1-1536-T77T79-v6dd-Re1-LeelaFarseer-Re2-masterDataset-Re3-sameData \
  --training-dataset /data/leela96-dfrc99-v2-T60novdecT80juntonovjanfebT79aprmayT78jantosepT77dec-v6dd.binpack \
  --early-fen-skipping 28 \
  --max_epoch 800 \
  --start-lambda 1.0 \
  --end-lambda 0.7 \
  --lr 4.375e-4 \
  --gamma 0.995 \
  --start-from-engine-test-net False \
  --start-from-model /data/L1-1536-T77T79-v6dd-Re1-LeelaFarseer-Re2-masterDataset-nn-epoch799.nnue \
  --engine-test-branch linrock/Stockfish/L1-1536 \
  --nnue-pytorch-branch linrock/nnue-pytorch/misc-fixes-L1-1536 \
  --tui False \
  --gpus "0," \
  --seed $RANDOM
```

The T77+T79 data used is a subset of the master dataset available at:
https://robotmoon.com/nnue-training-data/

T60T70wIsRightFarseerT60T74T75T76.binpack is available at:
https://drive.google.com/drive/folders/1S9-ZiQa_3ApmjBtl2e8SyHxj4zG4V8gG

Local elo at 25k nodes per move vs. nn-e1fb1ade4432.nnue (L1 size 1024):
nn-epoch759.nnue : 26.9 +/- 1.6

Failed STC
https://tests.stockfishchess.org/tests/view/64742485d29264e4cfa75f97
LLR: -2.94 (-2.94,2.94) <0.00,2.00>
Total: 13728 W: 3588 L: 3829 D: 6311
Ptnml(0-2): 71, 1661, 3610, 1482, 40

Failing LTC
https://tests.stockfishchess.org/tests/view/64752d7c4a36543c4c9f3618
LLR: -1.91 (-2.94,2.94) <0.50,2.50>
Total: 35424 W: 9522 L: 9603 D: 16299
Ptnml(0-2): 24, 3579, 10585, 3502, 22

Passed VLTC 180+1.8
https://tests.stockfishchess.org/tests/view/64752df04a36543c4c9f3638
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 47616 W: 13174 L: 12863 D: 21579
Ptnml(0-2): 13, 4261, 14952, 4566, 16

Passed VLTC SMP 60+0.6 th 8
https://tests.stockfishchess.org/tests/view/647446ced29264e4cfa761e5
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 19942 W: 5694 L: 5451 D: 8797
Ptnml(0-2): 6, 1504, 6707, 1749, 5

bench 2222567
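As a minimal sketch of the retraining schedule described above, the hypothetical helper below traces how the hyperparameters of the 3rd retraining run evolve per epoch. It assumes the LR is multiplied by gamma once per epoch and that lambda is interpolated linearly from start-lambda to end-lambda; the exact schedules inside the nnue-pytorch trainer invoked by easy_train.py may differ. In that trainer, lambda weights the training target between the search evaluation and the game outcome (1.0 = pure eval), so these runs anneal from eval-only targets toward a 0.7 eval / 0.3 result mix.

```python
# Sketch of the retraining schedule described above.
# Assumptions (not taken from this PR): exponential LR decay per epoch and
# linear interpolation of lambda from start-lambda to end-lambda.

def retraining_schedule(max_epoch=800, lr=4.375e-4, gamma=0.995,
                        start_lambda=1.0, end_lambda=0.7):
    """Yield (epoch, learning_rate, lambda) for one retraining run."""
    for epoch in range(max_epoch):
        epoch_lr = lr * gamma ** epoch                  # LR decays by gamma each epoch
        t = epoch / (max_epoch - 1)                     # progress 0..1 through the run
        epoch_lambda = start_lambda + t * (end_lambda - start_lambda)
        yield epoch, epoch_lr, epoch_lambda

# Endpoints of the 3rd retraining run
# (--lr 4.375e-4 --gamma 0.995 --start-lambda 1.0 --end-lambda 0.7 --max_epoch 800):
schedule = list(retraining_schedule())
print(schedule[0])    # (0, 0.0004375, 1.0)
print(schedule[-1])   # (799, ~8.0e-06, ~0.7)
```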
linrock added a commit to linrock/nnue-pytorch that referenced this pull request on May 30, 2023:
This was used to train a larger master net in: official-stockfish/Stockfish#4593
linrock added a commit to linrock/nnue-pytorch that referenced this pull request on May 30, 2023:
This was used to train a larger master net nn-8d69132723e2.nnue in: official-stockfish/Stockfish#4593
locutus2 referenced this pull request in cj5716/Stockfish on May 31, 2023:
Reintroduce nnue pawn scaling (tuned 60k)
Craftyawesome pushed a commit to Craftyawesome/Stockfish that referenced this pull request on Jun 2, 2023:
Update NNUE architecture to SFNNv6 with larger L1 size of 1536; closes official-stockfish#4593 (bench 2222567)
linrock added a commit to linrock/nnue-pytorch that referenced this pull request on Jul 1, 2023:
This was used to train a larger master net (nn-1b951f8b449d.nnue) official-stockfish/Stockfish#4593