
Once weights are calculated, how long should a leela-zero move take to calculate? #82

Closed
nickpelling opened this issue Mar 1, 2018 · 1 comment

Comments

@nickpelling

The Silver et al. paper says that they allowed one minute (of TPU time?) per chess move, which equated to about 80K positions being evaluated per second. So: once a good set of Leela-chess weights is in place, how long would a 2018 desktop PC (say, 16 cores @ 4GHz?) take to evaluate 80K x 60 = 4.8M positions? Is this figure already known?

And roughly how long would (say) an NVidia Tesla V100 take to do the same thing? (NVidia sometimes claims "30x" faster for this kind of task, but real world problems sometimes don't conform to expectations.)

Thanks! :-)

@Error323
Collaborator

Error323 commented Mar 1, 2018

It depends on the size of the network. IIRC DeepMind was able to compute 80K pos/s on a node containing 4 TPUs with their 256x20 neural network. Our current network is 64x6 and I'm getting ~60K pos/s on my 1080Ti.

NVIDIA cards can obtain massive gains as was shown in #52. Who knows what the V100 can do.

Note that we have no idea what the exact properties of the DeepMind network are for chess; it seems unlikely that the input planes have remained the same as in the AlphaGo Zero paper, see #47.
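The back-of-the-envelope arithmetic in this thread can be sketched as follows. This is an estimate from the throughput numbers quoted above, not a benchmark; actual pos/s depends heavily on network size and hardware.

```python
# Time needed to evaluate the ~4.8M positions AlphaZero searched per move
# (80K pos/s for one minute, per the Silver et al. paper), at a given rate.

ALPHAZERO_POSITIONS = 80_000 * 60  # 4.8M positions per move

def seconds_to_match(pos_per_sec: float) -> float:
    """Seconds to evaluate the same number of positions at this throughput."""
    return ALPHAZERO_POSITIONS / pos_per_sec

# 4 TPUs running the 256x20 net (DeepMind's reported rate)
print(seconds_to_match(80_000))  # 60.0 s, by construction
# 1080 Ti running the current 64x6 net (~60K pos/s, quoted above)
print(seconds_to_match(60_000))  # 80.0 s
```

Note the caveat: the 60K pos/s figure is for the small 64x6 network, so it is not directly comparable to DeepMind's rate on the much larger 256x20 network.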

@Error323 Error323 closed this as completed Mar 1, 2018