(c) during meta-testing, we create 5 augmented samples from each support image to alleviate the data insufficiency problem, and use these augmented samples to train the linear classifier
But isn't the point to see what one can do with few shots? This puzzles me.
If you have 5 shots, it's still 5 unique images. n_aug_support_samples=5 means we augment each image 5 times; it's just data augmentation, so the total amount of information used is the same.
See Table 5 in the paper. It helps a bit, but not dramatically.
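For clarity, here is a minimal sketch of what support-only augmentation looks like for a single n-way, k-shot episode, assuming a frozen backbone and a logistic-regression head; the names `backbone`, `support_images`, `query_images`, and the specific transforms are placeholders for illustration, not the repo's exact pipeline. Each original support image is simply transformed n_aug_support_samples times before the linear classifier is fit, and the query set is left untouched.

```python
# Sketch only: support-set augmentation in one few-shot episode.
# `backbone` is assumed to be a frozen, pretrained feature extractor.
import torch
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

aug = T.Compose([                      # train-time style augmentation
    T.RandomResizedCrop(84),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
plain = T.Compose([T.Resize(92), T.CenterCrop(84), T.ToTensor()])

def embed(backbone, tensors):
    """Run the frozen backbone and return features as a numpy array."""
    backbone.eval()
    with torch.no_grad():              # no gradients: backbone stays fixed
        return backbone(torch.stack(tensors)).cpu().numpy()

def fit_episode(backbone, support_images, support_labels,
                query_images, n_aug_support_samples=5):
    # Each of the n*k support images is augmented n_aug_support_samples
    # times; no new images are added, only transformed copies of the
    # same k shots per class.
    xs, ys = [], []
    for img, y in zip(support_images, support_labels):
        for _ in range(n_aug_support_samples):
            xs.append(aug(img))
            ys.append(y)
    clf = LogisticRegression(max_iter=1000).fit(embed(backbone, xs), ys)
    # Query images are never augmented.
    return clf.predict(embed(backbone, [plain(q) for q in query_images]))
```

If the flag works the way its name suggests, setting n_aug_support_samples=1 should recover the unaugmented baseline, which is one way to check how much of the difference in Table 5 it accounts for.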
Why do you have this? (rfs/eval_fewshot.py, line 48 in f8c837b)
My impression was that when one does an n-way, k-shot task, one only uses k shots per class, but this parameter increases the number of support samples. Wouldn't this be cheating?
No aug in support set
Perhaps this is why I can't reproduce the results and why the values reported in the paper are larger than mine, even when I use the rfs mini-ImageNet checkpoint.