
massively expand sample size for training #3

Closed
3 tasks done
bodokaiser opened this issue Mar 4, 2017 · 3 comments
Comments


bodokaiser commented Mar 4, 2017

With the current setup we reach convergence after about 3-4 epochs, even on our most basic models.

Ideas:

  • sample slices from all volume axes
  • lower patch threshold
  • use smaller patch size
  • apply data augmentation
  • perform (leave one out) cross validation

If we reduce the patch size, we will additionally need to use mrtous.dataset.MNIBITENative for image preview during training.
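The first idea above, sampling slices from all volume axes, can be sketched as follows. This is a minimal illustration with numpy, not code from mrtous; the function name `sample_slices` is hypothetical.

```python
import numpy as np

def sample_slices(volume):
    """Yield every 2D slice of a 3D volume along all three axes.

    Slicing along all three axes (instead of just one) roughly
    triples the number of training samples per volume.
    """
    for axis in range(3):
        for index in range(volume.shape[axis]):
            yield np.take(volume, index, axis=axis)

# A 16x16x16 volume yields 3 * 16 = 48 slices.
volume = np.zeros((16, 16, 16))
slices = list(sample_slices(volume))
```

For anisotropic volumes the slices will have different shapes per axis, which the patch extraction would then have to normalize.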


bodokaiser commented Mar 6, 2017

Alternative idea (not directly related to this issue, but I want to write it down somewhere):
Add torchvision.transforms.RandomCrop to mrtous.dataset.MNIBITE. This would give us uniform image dimensions, so we could batch samples and drop the patch preprocessing. The problem I see with this approach is that we need to crop both input and target with the same random parameters. This particular feature is in progress (see here) but not yet available. Furthermore, we would need to add padding to mrtous.transform.RegionCrop, as well as more sensible filtering so that we discard everything cropped below the target crop size. I will put this on hold for the moment (at least until the related issue is resolved in torchvision).


bodokaiser commented Mar 6, 2017

Ran some tests on different transforms:

transforms:
- flipud
- fliplr
loss (50 epochs):
- testing: 26550, 26632, 28940 -> 27374
- training: 24228, 24042, 25311 -> 24527

transforms:
- none
loss (50 epochs):
- testing: 57135, 29907, 32786 -> 39943
- training: 41475, 24931, 26582 -> 30996

transforms:
- flipud, fliplr
- rotate, zoom
loss (50 epochs):
- testing: 32743, 31821, 29175 -> 31246
- training: 29044, 30113, 25738 -> 28298

Based on these results I disabled transform.RandomZoom and transform.RandomRotate for now. We are also still missing transforms on the intensity (color) values.
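A transform on the color values could be as simple as an additive intensity jitter. This is a hedged sketch of one possible value-space augmentation, not something implemented in mrtous; the name `random_intensity_shift` and the 10% range are assumptions.

```python
import random
import numpy as np

def random_intensity_shift(image, max_shift=0.1):
    """Additively shift all intensities by up to max_shift of the value range.

    A value-space augmentation to complement the spatial transforms
    (flips, rotation, zoom); applied to the input only, since the
    target modality keeps its own intensity statistics.
    """
    span = float(image.max() - image.min())
    shift = random.uniform(-max_shift, max_shift) * span
    return image + shift

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
out = random_intensity_shift(img)
diff = out - img
```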

Update:
When using 01, 12, 13 as training datasets, we achieve a loss of 26285 on testing dataset 11, so the loss is in this case clearly model-limited.

@bodokaiser (Owner Author) commented:

Cross validation shouldn't be necessary for long, so we discard this feature. The random transforms were applied separately to input and target, so their random values differed; this explains why the loss is worse than without them. See #10 for more information.
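The misalignment bug can be avoided by drawing each random decision once and reusing it for both images. A minimal sketch of a correctly paired flipud/fliplr transform (the function name `paired_random_flip` is hypothetical):

```python
import random
import numpy as np

def paired_random_flip(input_img, target_img):
    """Apply the same random flipud/fliplr decision to both images.

    Each coin flip is drawn once and reused for input and target,
    avoiding the bug where independent draws misalign the pair.
    """
    if random.random() < 0.5:
        input_img, target_img = np.flipud(input_img), np.flipud(target_img)
    if random.random() < 0.5:
        input_img, target_img = np.fliplr(input_img), np.fliplr(target_img)
    return input_img.copy(), target_img.copy()

# The target is a deterministic function of the input, so alignment
# holds regardless of which flips were drawn.
mr = np.arange(16).reshape(4, 4)
us = mr + 100
a, b = paired_random_flip(mr, us)
```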
