Thanks for your great work. I have a small question about calculating FLOPs.

In Table 1 of the paper:

- CIFAR-10, DenseNet-40 (40% pruned): 3.81×10^8 FLOPs
- CIFAR-100, DenseNet-40 (40% pruned): 3.71×10^8 FLOPs

Since CIFAR-100 has 100 classes while CIFAR-10 has only 10, why are the FLOPs on CIFAR-10 higher than on CIFAR-100 for the same model?

Thanks in advance.
Because these are two different models, and the algorithm prunes different parts of the network in each case. Even if you prune a fixed fraction of channels (40% here), the resulting FLOPs depend on *where* you prune. For example, if you prune early layers more, you reduce more FLOPs, since those layers operate on larger activation maps.
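To make this concrete, here is a minimal sketch of the effect. The layer shapes and channel counts below are illustrative toy values, not the actual DenseNet-40 configuration; a k×k convolution costs roughly `c_in * c_out * k * k * h_out * w_out` multiply-accumulates.

```python
def conv_flops(c_in, c_out, k, h_out, w_out):
    """Approximate multiply-accumulate count for a k x k convolution."""
    return c_in * c_out * k * k * h_out * w_out

def total_flops(c1_out, c2_out):
    """Toy two-layer network on 32x32 inputs (hypothetical shapes).

    The early layer sees a 32x32 activation map; the late layer,
    after downsampling, sees only an 8x8 map.
    """
    early = conv_flops(3, c1_out, 3, 32, 32)    # large activation map
    late = conv_flops(c1_out, c2_out, 3, 8, 8)  # small activation map
    return early + late

baseline    = total_flops(64, 64)
prune_early = total_flops(32, 64)  # remove 32 channels from the early layer
prune_late  = total_flops(64, 32)  # remove 32 channels from the late layer

# Same number of channels pruned, very different FLOP totals:
# pruning the early layer saves far more, because its activation
# map is 16x larger per channel.
print(baseline, prune_early, prune_late)
```

So two 40%-pruned DenseNet-40 models can easily land at different FLOP counts if the sparsity is distributed differently across layers, which dominates the tiny difference from the 10-class vs. 100-class final layer.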