(This is related to, but not identical to, issue #2.)
Cropping is particularly undesirable on very small images like 64x64, where it may delete a large part of the image (especially when the images come pre-centered and pre-cropped). Currently, you cannot run dcgan.torch without cropping, even though configurable arguments like loadSize=64 fineSize=64 suggest that should be possible. This is not by design but appears to be a bug in the cropping code in data/donkey_folder.lua; as noted in issue #2:
Right now, loadSize has to be greater than fineSize (because of a bug in the cropping logic). So it's okay to have loadSize=65 fineSize=64 th main.lua
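For context, my reading of why loadSize == fineSize crashes the old code (assuming torch.uniform(a, b) just computes a + (b - a) * rand and does not reorder its bounds):

-- old logic, with iH == oH == 64:
local h1 = math.ceil(torch.uniform(1e-2, 64 - 64))
-- uniform(0.01, 0) still returns a small positive value, so math.ceil
-- rounds h1 up to 1 rather than 0, and the crop window [1, 1 + 64]
-- then runs one pixel past the image border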
I messed around some with the responsible trainHook, and I think the bug can be fixed by simply checking whether the original height/width are greater than the fineSize value, and feeding 0s into the crop function when they aren't. The new version would look like this:
-- do a random crop if fineSize/sampleSize is configured to be smaller than the loaded image size, loadSize
local iW = input:size(3)
local iH = input:size(2)
local oW = sampleSize[2]
local oH = sampleSize[2]
local w1, h1
if (iW > oW) then
   w1 = math.ceil(torch.uniform(1e-2, iW-oW))
else
   w1 = 0
end
if (iH > oH) then
   h1 = math.ceil(torch.uniform(1e-2, iH-oH))
else
   h1 = 0
end
local out = image.crop(input, w1, h1, w1 + oW, h1 + oH)
assert(out:size(2) == oW)
assert(out:size(3) == oH)
Or to diff it:
diff --git a/data/donkey_folder.lua b/data/donkey_folder.lua
index 3a82393..5248f4e 100644
--- a/data/donkey_folder.lua
+++ b/data/donkey_folder.lua
@@ -52,17 +52,27 @@ local mean,std
 local trainHook = function(self, path)
    collectgarbage()
    local input = loadImage(path)
+
+   -- do a random crop if fineSize/sampleSize is configured to be smaller than the loaded image size, loadSize
    local iW = input:size(3)
    local iH = input:size(2)
-
-   -- do random crop
-   local oW = sampleSize[2];
+   local oW = sampleSize[2]
    local oH = sampleSize[2]
-   local h1 = math.ceil(torch.uniform(1e-2, iH-oH))
-   local w1 = math.ceil(torch.uniform(1e-2, iW-oW))
+   local w1, h1
+   if (iW > oW) then
+      w1 = math.ceil(torch.uniform(1e-2, iW-oW))
+   else
+      w1 = 0
+   end
+   if (iH > oH) then
+      h1 = math.ceil(torch.uniform(1e-2, iH-oH))
+   else
+      h1 = 0
+   end
    local out = image.crop(input, w1, h1, w1 + oW, h1 + oH)
    assert(out:size(2) == oW)
    assert(out:size(3) == oH)
+
    -- do hflip with probability 0.5
    if torch.uniform() > 0.5 then out = image.hflip(out); end
    out:mul(2):add(-1) -- make it [0, 1] -> [-1, 1]
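As a quick sanity check of the new logic, here's a minimal standalone sketch (not part of the patch; the randomCrop helper and the tensor sizes are hypothetical):

require 'torch'
require 'image'

-- same crop logic as the patched trainHook, extracted for testing
local function randomCrop(input, oW, oH)
   local iW = input:size(3)
   local iH = input:size(2)
   local w1, h1 = 0, 0
   if iW > oW then w1 = math.ceil(torch.uniform(1e-2, iW - oW)) end
   if iH > oH then h1 = math.ceil(torch.uniform(1e-2, iH - oH)) end
   return image.crop(input, w1, h1, w1 + oW, h1 + oH)
end

-- loadSize == fineSize: used to crash, now a no-op crop
local same = randomCrop(torch.rand(3, 64, 64), 64, 64)
assert(same:size(2) == 64 and same:size(3) == 64)

-- loadSize > fineSize: still randomly crops as before
local big = randomCrop(torch.rand(3, 96, 96), 64, 64)
assert(big:size(2) == 64 and big:size(3) == 64)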
This seems to work in both the 64x64px default version and the 128x128px fork, e.g. with an invocation along these lines (DATA_ROOT path is hypothetical):
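# hypothetical dataset path; loadSize == fineSize now runs without cropping
DATA_ROOT=myimages dataset=folder loadSize=64 fineSize=64 th main.lua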
And looking at the training sample images shown in the display server, they no longer look cropped the way they did before. So although I haven't run anything to completion, I think this fix works.