Convolutional network slower than tensorflow on CPU #2350
That's not the intended use; something like this is closer to the intended pattern:

```julia
using Flux, ProgressBars              # ProgressBars provides tqdm and set_postfix
using Flux: onecold, logitcrossentropy
using Zygote: ignore

function train_loop(model, optimizer, train_loader, test_loader; epochs=5)
    for epoch ∈ 1:epochs
        iter = tqdm(train_loader)
        total = 0
        corrects = 0
        for (X, Y) ∈ iter
            grads = Flux.gradient(model) do m
                predicted = m(X)
                ignore() do   # keep the accuracy bookkeeping out of the AD trace
                    b_size = size(X)[end]
                    corrects += sum(onecold(predicted, 0:9) .== onecold(Y, 0:9)) # edit, labels is Y
                    total += b_size
                end
                logitcrossentropy(predicted, Y)
            end
            # `optimizer` is the state tree from Flux.setup; update! returns the new state and model
            optimizer, model = Flux.update!(optimizer, model, grads[1]) # edit, fixed [0]
            set_postfix(iter, accuracy=corrects / total)
        end
        val_accuracy = accuracy(model, test_loader)   # `accuracy` is defined elsewhere
        @info "Epoch $epoch/$epochs | Accuracy: $val_accuracy"
    end
end
```
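For what it's worth, the loop above assumes `optimizer` is an explicit-style optimiser state, so a call site would look roughly like this (the learning rate and loader names are just placeholders):

```julia
# hypothetical call site; train_loader/test_loader are whatever DataLoaders you already have
opt_state = Flux.setup(Adam(3f-4), model)
train_loop(model, opt_state, train_loader, test_loader; epochs=5)
```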
I did that already; same speed, even a little slower.
My guess is that this is NNlib's CPU implementations of Conv etc. being sub-optimal. That's the target of e.g. FluxML/NNlib.jl#540, and seeing whether that PR speeds up this example might be helpful (and if it does, finding a way to push that PR forwards). Otherwise, isolating exactly which operations are slower would be more helpful than overall times. Xref earlier issue about the same thing: #2300
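For reference, a minimal sketch (not from the issue itself) of how the NNlib kernels could be timed in isolation, using the exported `conv`, `∇conv_data` and `∇conv_filter` with an MNIST-like batch:

```julia
using NNlib, BenchmarkTools

x = randn(Float32, 28, 28, 1, 100)   # WHCN input, MNIST-like batch of 100
w = randn(Float32, 3, 3, 1, 16)      # 3×3 kernel, 1 => 16 channels
cdims = DenseConvDims(x, w; stride=(2, 2), padding=1)

y  = conv(x, w, cdims)
dy = ones(Float32, size(y))          # stand-in for an upstream gradient

@btime conv($x, $w, $cdims)           # forward kernel
@btime ∇conv_data($dy, $w, $cdims)    # backward w.r.t. the input
@btime ∇conv_filter($x, $dy, $cdims)  # backward w.r.t. the kernel
```

If the `∇conv_*` calls dominate, that points at the NNlib kernels rather than at Flux/Zygote overhead.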
Will there be any updates?
Have you seen the linked PR at FluxML/NNlib.jl#540? Other than contributing performance improvements to NNlib itself, the best thing would be to do some benchmarking of what the bottlenecks in the Julia code are with a profiler. Ideally you could narrow it down to 1-2 types of layers which could be compared directly against their equivalents in PyTorch.
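As a concrete starting point, a sketch of what profiling one training step might look like (the model, data and loss here are placeholders, not the code from this issue):

```julia
using Flux, Profile

model = Chain(Conv((3, 3), 1 => 16, relu; pad=1), Flux.flatten, Dense(28 * 28 * 16 => 10))
x = randn(Float32, 28, 28, 1, 100)
y = Flux.onehotbatch(rand(0:9, 100), 0:9)

loss(m) = Flux.logitcrossentropy(m(x), y)
Flux.gradient(loss, model)    # warm up so compilation doesn't pollute the profile

Profile.clear()
@profile for _ in 1:50
    Flux.gradient(loss, model)
end
Profile.print(format=:flat, sortedby=:count)   # or inspect with ProfileView.jl / PProf.jl
```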
Whatever it is, it's related to the backward path; the feed-forward path in Flux is already faster than PyTorch, or at least the same speed.
That's why I asked to narrow it down. If you can find which specific layers are slower on the backwards path and provide an MWE demonstrating that, then we have something to work with.
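For example, a per-layer MWE separating the forward pass from forward-plus-backward for a single Conv layer could be as small as this (the `sum` is just a throwaway scalar loss):

```julia
using Flux, BenchmarkTools

m = Conv((3, 3), 1 => 16; stride=(2, 2), pad=1)
x = randn(Float32, 28, 28, 1, 100)

@btime $m($x)                              # forward only
@btime Flux.gradient(m -> sum(m($x)), $m)  # forward + backward through the layer
```

The same shapes can then be fed to the equivalent PyTorch layer with a `.backward()` call for a direct comparison.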
Here are CPU tests.

FeedForward, Flux:

```julia
using Flux
using BenchmarkTools

m = Conv((3, 3), 1 => 16; stride=(2, 2), pad=1)
A = Float32.(randn(28, 28, 1, 100))

# compile for the first time
m(A)

@btime m(A)
# 753.000 μs (76 allocations: 2.44 MiB)
```

FeedForward, PyTorch:

```python
import torch
import torch.nn as nn

m = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), stride=(2, 2), padding=1)
A = torch.randn((100, 1, 28, 28))

%timeit m(A)
# 172 µs ± 7.32 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
```
Flux is significantly slower (almost 6 times) than PyTorch on CPU!
@aminaqi that's a different issue, namely FluxML/NNlib.jl#234. As mentioned in that issue and the linked Discourse discussion, make sure you're starting Julia with multiple threads and using MKL for a proper apples-to-apples comparison with PyTorch. For this issue, it's not clear where the exact slowdown(s) come from. What I'm sure of is that it can't be solely the conv forward pass, which is what you're benchmarking.

PS. it looks like the formatting on your comments got messed up? Every one quotes the entirety of the one before it, and it probably shouldn't.
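For reference, the suggested setup would look something like this (assuming the MKL.jl package is installed; `--threads` controls Julia's own threading, while BLAS threading is handled separately by the BLAS library):

```julia
# launch with e.g.  julia --threads=6
using MKL            # switches the BLAS backend from OpenBLAS to MKL via libblastrampoline
using Flux
using LinearAlgebra

@show Threads.nthreads()   # Julia-level threads that NNlib's CPU kernels can use
@show BLAS.get_config()    # should now list MKL as the active BLAS
```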
I've started Julia with 6 threads. Anyway, even with multiple threads it's still significantly slower than PyTorch, because that's only the feed-forward pass; we have a slowdown on the backward pass too, which makes Flux 10 times slower than PyTorch or TensorFlow.
Are you seeing Julia be 10x slower on the forward and backwards pass, for CNNs and RNNs, against PyTorch and TensorFlow? I'm pretty sure we are slower on all of those, but 10x for all of them would not be expected. If that's really what you're seeing, I'd recommend starting a Discourse thread with some MWEs for the various benchmarks and linking back to that here. It's possible that Flux itself is only a small part of the issue there, and Discourse will allow more folks to weigh in on what other parts of your code may be contributing (only Flux maintainers really follow this issue tracker).

Either way, the performance gap being discussed in this issue already has a reasonable benchmark. It just needs to be narrowed down to a couple of layers and/or profiled so we can see what the bottlenecks are to take action on them. If nobody has bandwidth to do that, then I'm not sure there's much else to discuss here.