The Silver et al. paper says they allowed one minute (of TPU time?) per chess move, which equated to about 80k positions evaluated per second. So: once a good set of Leela-chess weights is in place, how long would a 2018 desktop PC (say, 16 cores @ 4 GHz) take to evaluate 80k × 60 = 4.8M positions? Is this figure already known?
And roughly how long would, say, an NVIDIA Tesla V100 take to do the same thing? (NVIDIA sometimes claims "30x" speedups for this kind of task, but real-world problems don't always conform to expectations.)
Thanks! :-)
It depends on the size of the network. IIRC, DeepMind computed 80K pos/s on a node containing 4 TPUs using their 256x20 neural network. Our current network is 64x6, and I'm getting ~60K pos/s on my 1080 Ti.
NVIDIA cards can obtain massive gains, as was shown in #52. Who knows what the V100 can do.
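To make the comparison concrete, here is a back-of-envelope sketch of the arithmetic in the question: time per move is simply the search budget (80k pos/s × 60 s ≈ 4.8M positions) divided by the hardware's throughput. The pos/s figures are the ones quoted in this thread; they are assumptions, not benchmarks, and real throughput depends heavily on the network size.

```python
# Back-of-envelope: seconds per move = search budget / throughput.
# Throughput figures below are taken from this thread and are
# illustrative assumptions, not measured benchmarks.

SEARCH_POSITIONS = 80_000 * 60  # AlphaZero's per-move budget: ~4.8M positions

def seconds_per_move(nps: float) -> float:
    """Time (s) to evaluate the full search budget at `nps` positions/s."""
    return SEARCH_POSITIONS / nps

for label, nps in [
    ("4x TPU, 256x20 net (DeepMind)", 80_000),
    ("1080 Ti, 64x6 net (reported above)", 60_000),
]:
    print(f"{label}: {seconds_per_move(nps):.0f} s per move")
```

Note the caveat: the two rows use networks of very different sizes, so the numbers are not an apples-to-apples hardware comparison.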
Note that we have no idea what the exact properties of DeepMind's chess network are; it seems unlikely that the head input planes have remained the same as in the AlphaGo Zero paper. See #47.