Hello, my GPU is a Tesla V100 (32 GB). When I use a 508x508 tile shape, as in the tutorial video, the speed is reasonable. But when I use a 1500x1500 tile shape, the estimated memory is about 18 GB, below my GPU's limit, yet training is quite slow. I'm not familiar with Caffe, so I assumed a larger tile shape would accelerate finetuning. Is that right?
A factor-of-10 slowdown is expected with a factor-of-10 larger input; anything beyond that is overhead from data augmentation and transfer. The number of iterations needed may be affected by the input shape, but I would not say in general that bigger is better. I usually train with relatively small tiles and a batch size of one to increase randomness. The loss curves become wiggly, but the output is quite robust.
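A quick back-of-envelope check of the pixel counts makes the expected slowdown concrete, assuming per-iteration cost in a fully convolutional network scales roughly with the number of input pixels (this is a rough estimate, not an exact model of Caffe's runtime):

```python
# Per-iteration compute in a fully convolutional net scales
# roughly with the number of input pixels per tile.
small = 508 * 508      # tile shape used in the tutorial video
large = 1500 * 1500    # the larger tile shape in question

ratio = large / small
print(f"pixel ratio: {ratio:.1f}x")  # ~8.7x more pixels per tile
```

So a roughly 9x-10x longer time per iteration with 1500x1500 tiles is in line with the expected compute scaling, even when the memory fits comfortably on the GPU.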