[WIP] Hydra Configuration PL #1

Closed · wants to merge 91 commits

Commits (91)
15cf6a8
Tpu logging (#2230)
williamFalcon Jun 18, 2020
476911d
Pid port + duplicate rank_zero logging (#2231)
williamFalcon Jun 18, 2020
a2d3ee8
final cleanup for v0.8.0 (#2181)
Borda Jun 18, 2020
79e1426
Docs clean-up (#2234)
williamFalcon Jun 18, 2020
e0b7359
[metrics] IoU Metric (#2062)
j-dsouza Jun 18, 2020
3478378
Update README.md
williamFalcon Jun 18, 2020
f8c10fb
Change PR template (#2224)
edenlightning Jun 18, 2020
4903f9e
Fixed the load_from_checkpoint path detected as URL bug (#2244)
Molaire Jun 18, 2020
b4044f0
Typo fix in metrics docs (#2237)
Rexhaif Jun 18, 2020
596a5d7
Docs new section (#2236)
Borda Jun 18, 2020
0b0c292
Update README.md
williamFalcon Jun 18, 2020
68a1e52
added barrier (#2245)
williamFalcon Jun 19, 2020
b7fc092
made fx public (#2247)
williamFalcon Jun 19, 2020
b5a2f1e
fix setup and on fit calls (#2252)
williamFalcon Jun 19, 2020
4885cfa
fix gpu template (#2255)
williamFalcon Jun 19, 2020
6ae9a97
remove frame inspection on self.hparams (#2253)
williamFalcon Jun 19, 2020
03ab574
decrease some training times (#2256)
williamFalcon Jun 19, 2020
57d5f6e
Barrier (#2257)
williamFalcon Jun 19, 2020
81720d9
fallback to hparams str (#2259)
williamFalcon Jun 19, 2020
a6f94a6
remove tpu barrier (#2260)
williamFalcon Jun 19, 2020
3c8c2e3
fix missing arg
williamFalcon Jun 19, 2020
e8f58b5
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
williamFalcon Jun 19, 2020
9739b3e
updates to changelog (#2248)
Borda Jun 19, 2020
d5f77c9
Release2 (#2262)
williamFalcon Jun 19, 2020
2fbc997
Update __init__.py
williamFalcon Jun 19, 2020
b2dd1a3
Update README.md
williamFalcon Jun 19, 2020
54acc79
continue 0.8.x (#2264)
Borda Jun 19, 2020
e780072
Attempt to add broken test to mimic transformers use case (#2272)
sshleifer Jun 19, 2020
8d51279
[refactor results 1] - refactor backward (#2276)
williamFalcon Jun 19, 2020
e0b7fed
deprecated Trainer proc_rank (#2269)
Borda Jun 19, 2020
3256fe4
Update progress.py (#2268)
pwl Jun 19, 2020
554fb47
Bugfix/_has_len (#2293)
thschaaf Jun 20, 2020
f278ac4
Revert/Fix: epoch indexing from 1, to be from 0 (#2289)
Borda Jun 20, 2020
b96dd21
Update new project code sample (#2287)
rohitgr7 Jun 20, 2020
7ecb0d2
test CLI parsing gpus (#2284)
Borda Jun 20, 2020
4b90b79
check omegaconf gpus (#2273)
Borda Jun 20, 2020
c7f8367
devel version (#2292)
Borda Jun 20, 2020
f972ab3
Fix summary hook handles not getting removed (#2298)
awaelchli Jun 20, 2020
63bd058
fix typo in forward return (#2301)
Rezyapkin-Vyacheslav Jun 21, 2020
92f122e
Fix average_precision metric (#2319)
elias-ramzi Jun 23, 2020
29179db
Fix ROC metric for CUDA tensors (#2304)
tridao Jun 23, 2020
0f07381
refactored training_batch + tests to verify correctness (#2328)
williamFalcon Jun 23, 2020
bdee1cd
update docs for "overfit_batches" (#2324)
awaelchli Jun 23, 2020
44385bb
Checking if the parameters are a DictConfig Object (#2216)
ssakhavi Jun 23, 2020
e085e93
Add missing test for "multiple dataloader + percent_check fix" (#2226)
awaelchli Jun 23, 2020
9446390
fix TPU parsing and TPU tests (#2094)
lezwon Jun 23, 2020
a915280
fixes slurm weights saving (#2339)
williamFalcon Jun 24, 2020
c09b2ff
test (#2341)
williamFalcon Jun 24, 2020
598f514
refactor training loop (#2336)
williamFalcon Jun 24, 2020
aab9e77
Fix lost compatibility with custom datatypes implementing `.to` (#2335)
awaelchli Jun 24, 2020
cc07dca
corrected example usage of save_hyperparameters from List[str] to sep…
david-waterworth Jun 25, 2020
9b2e605
Python logging level docs (#2348)
awaelchli Jun 25, 2020
220bb6d
remove wrong annotation (#2349)
awaelchli Jun 25, 2020
b6ab7ca
[docs] add community example : pl + ms nni (#2340)
davinnovation Jun 25, 2020
7360d36
configuration
anthonytec2 Jun 20, 2020
a11b8d0
fix job name template
anthonytec2 Jun 20, 2020
bc474ab
change to model
anthonytec2 Jun 20, 2020
bfb46dd
create hydra examples folder
anthonytec2 Jun 20, 2020
fcf5f6c
fix error with none values
anthonytec2 Jun 20, 2020
61f106c
optimizers and lr schedules
anthonytec2 Jun 21, 2020
f996bc0
clean up model structure
anthonytec2 Jun 21, 2020
86045b4
model has data included
anthonytec2 Jun 21, 2020
11fa90b
dont configure outputs
anthonytec2 Jun 21, 2020
657b3b8
document hydra example
anthonytec2 Jun 21, 2020
2ffac8c
update readme
anthonytec2 Jun 22, 2020
6f34389
rename trainer conf
anthonytec2 Jun 22, 2020
81fc466
scheduler example
anthonytec2 Jun 22, 2020
db96d4c
schedulers update
anthonytec2 Jun 22, 2020
4855215
change out structure for opt and sched
anthonytec2 Jun 22, 2020
021d9fc
flatten config dirs
anthonytec2 Jun 22, 2020
053e64d
reduce number of classes
anthonytec2 Jun 22, 2020
3bf9c95
scheduler and opt configs
anthonytec2 Jun 23, 2020
7170da3
spelling
anthonytec2 Jun 23, 2020
adf1570
change group
anthonytec2 Jun 23, 2020
51019a0
config store location change
anthonytec2 Jun 23, 2020
fff07a1
import and store
anthonytec2 Jun 23, 2020
fa92884
structured conf remaining classes
anthonytec2 Jun 24, 2020
5ec3ff6
fix for date
anthonytec2 Jun 24, 2020
aedb0a2
change location of trainer config
anthonytec2 Jun 24, 2020
9d3b7be
fix package name
anthonytec2 Jun 24, 2020
17ef88d
trainer instantiation
anthonytec2 Jun 24, 2020
feef5e5
clean up init trainer
anthonytec2 Jun 25, 2020
4986bbd
type fixes
anthonytec2 Jun 25, 2020
4976ce0
clean up imports
anthonytec2 Jun 25, 2020
12a5b73
update readme
anthonytec2 Jun 25, 2020
191521d
add in seed
anthonytec2 Jun 25, 2020
e7a8d0c
Update pl_examples/hydra_examples/README.md
anthonytec2 Jun 25, 2020
3c3d46a
Update pl_examples/hydra_examples/README.md
anthonytec2 Jun 25, 2020
872cba7
change to model
anthonytec2 Jun 25, 2020
b392654
clean up hydra example
anthonytec2 Jun 25, 2020
9c473bb
data to absolute path
anthonytec2 Jun 25, 2020
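Note: the `pl_examples/hydra_examples` files these commits add do not appear in the diff excerpt below. As a rough, hypothetical sketch of the pattern the commit messages describe (a Hydra entry point composing config groups and instantiating the Trainer from config): every module, config, and class name here is an assumption, not taken from this PR.

```python
import hydra
from omegaconf import DictConfig, OmegaConf
import pytorch_lightning as pl

from my_project.model import MyLightningModule  # hypothetical LightningModule


@hydra.main(config_path="conf", config_name="config")  # assumes Hydra 1.0-style structured configs
def main(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))          # show the composed config
    model = MyLightningModule(cfg.model)    # model config group, as in the "change to model" commits
    trainer = pl.Trainer(**cfg.trainer)     # trainer built from config, as in the "trainer instantiation" commit
    trainer.fit(model)


if __name__ == "__main__":
    main()
```

With such a layout, config groups (for example optimizer or scheduler choices) could be swapped from the command line, e.g. `python train.py optimizer=adam trainer.max_epochs=3` (again, the group names are assumptions).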
18 changes: 13 additions & 5 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -1,16 +1,24 @@

## What does this PR do?

<!--
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
-->

Fixes # (issue)

# Before submitting

- [ ] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
- [ ] Did you read the [contributor guideline](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md), Pull Request section?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?
- [ ] Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
- [ ] Did you verify new and existing tests pass locally with your changes?
- [ ] If you made a notable change (that affects users), did you update the [CHANGELOG](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)?

<!-- For CHANGELOG separate each item in unreleased section by a blank line to reduce collisions -->

## What does this PR do?
Fixes # (issue).

## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
47 changes: 37 additions & 10 deletions CHANGELOG.md
@@ -10,22 +10,45 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Changed

- Changed epoch indexing from 0 instead of 1 ([#2289](https://github.com/PyTorchLightning/pytorch-lightning/pull/2289))

### Deprecated

### Removed

### Fixed

- Fixed parsing TPU arguments and TPU tests ([#2094](https://github.com/PyTorchLightning/pytorch-lightning/pull/2094))

- Fixed number batches in case of multiple dataloaders and `limit_{*}_batches` ([#1920](https://github.com/PyTorchLightning/pytorch-lightning/pull/1920), [#2226](https://github.com/PyTorchLightning/pytorch-lightning/pull/2226))

- Fixed an issue with forward hooks not being removed after model summary ([#2298](https://github.com/PyTorchLightning/pytorch-lightning/pull/2298))

- Fixed ROC metric for CUDA tensors ([#2304](https://github.com/PyTorchLightning/pytorch-lightning/pull/2304))

- Fixed `average_precision` metric ([#2319](https://github.com/PyTorchLightning/pytorch-lightning/pull/2319))

- Fixed lost compatibility with custom datatypes implementing `.to` ([#2335](https://github.com/PyTorchLightning/pytorch-lightning/pull/2335))

## [0.8.0] - 2020-06-DD
## [0.8.1] - 2020-06-19

### Fixed

- Fixed the `load_from_checkpoint` path detected as URL bug ([#2244](https://github.com/PyTorchLightning/pytorch-lightning/pull/2244))
- Fixed hooks - added barrier ([#2245](https://github.com/PyTorchLightning/pytorch-lightning/pull/2245), [#2257](https://github.com/PyTorchLightning/pytorch-lightning/pull/2257), [#2260](https://github.com/PyTorchLightning/pytorch-lightning/pull/2260))
- Fixed `hparams` - remove frame inspection on `self.hparams` ([#2253](https://github.com/PyTorchLightning/pytorch-lightning/pull/2253))
- Fixed setup and on fit calls ([#2252](https://github.com/PyTorchLightning/pytorch-lightning/pull/2252))
- Fixed GPU template ([#2255](https://github.com/PyTorchLightning/pytorch-lightning/pull/2255))

## [0.8.0] - 2020-06-18

### Added

- Added `overfit_batches`, `limit_{val|test}_batches` flags (overfit now uses training set for all three) ([#2213](https://github.com/PyTorchLightning/pytorch-lightning/pull/2213))
- Added metrics
* Base classes ([#1326](https://github.com/PyTorchLightning/pytorch-lightning/pull/1326), [#1877](https://github.com/PyTorchLightning/pytorch-lightning/pull/1877))
* Sklearn metrics classes ([#1327](https://github.com/PyTorchLightning/pytorch-lightning/pull/1327))
* Native torch metrics ([#1488](https://github.com/PyTorchLightning/pytorch-lightning/pull/1488))
* Native torch metrics ([#1488](https://github.com/PyTorchLightning/pytorch-lightning/pull/1488), [#2062](https://github.com/PyTorchLightning/pytorch-lightning/pull/2062))
* docs for all Metrics ([#2184](https://github.com/PyTorchLightning/pytorch-lightning/pull/2184), [#2209](https://github.com/PyTorchLightning/pytorch-lightning/pull/2209))
* Regression metrics ([#2221](https://github.com/PyTorchLightning/pytorch-lightning/pull/2221))
- Added type hints in `Trainer.fit()` and `Trainer.test()` to reflect that also a list of dataloaders can be passed in ([#1723](https://github.com/PyTorchLightning/pytorch-lightning/pull/1723))
@@ -37,9 +60,11 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Added a model hook `transfer_batch_to_device` that enables moving custom data structures to the target device ([1756](https://github.com/PyTorchLightning/pytorch-lightning/pull/1756))
- Added [black](https://black.readthedocs.io/en/stable/) formatter for the code with code-checker on pull ([1610](https://github.com/PyTorchLightning/pytorch-lightning/pull/1610))
- Added back the slow spawn ddp implementation as `ddp_spawn` ([#2115](https://github.com/PyTorchLightning/pytorch-lightning/pull/2115))
- Added loading checkpoints from URLs ([#1667](https://github.com/PyTorchLightning/pytorch-lightning/issues/1667))
- Added loading checkpoints from URLs ([#1667](https://github.com/PyTorchLightning/pytorch-lightning/pull/1667))
- Added a callback method `on_keyboard_interrupt` for handling KeyboardInterrupt events during training ([#2134](https://github.com/PyTorchLightning/pytorch-lightning/pull/2134))
- Added a decorator `auto_move_data` that moves data to the correct device when using the LightningModule for inference ([#1905](https://github.com/PyTorchLightning/pytorch-lightning/pull/1905))
- Added `ckpt_path` option to `LightningModule.test(...)` to load particular checkpoint ([#2190](https://github.com/PyTorchLightning/pytorch-lightning/pull/2190))
- Added `setup` and `teardown` hooks for model ([#2229](https://github.com/PyTorchLightning/pytorch-lightning/pull/2229))

### Changed

@@ -50,18 +75,19 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Re-Enable Logger's `ImportError`s ([#1938](https://github.com/PyTorchLightning/pytorch-lightning/pull/1938))
- Changed the default value of the Trainer argument `weights_summary` from `full` to `top` ([#2029](https://github.com/PyTorchLightning/pytorch-lightning/pull/2029))
- Raise an error when lightning replaces an existing sampler ([#2020](https://github.com/PyTorchLightning/pytorch-lightning/pull/2020))
- Enabled prepare_data from correct processes - clarify local vs global rank ([#2166](https://github.com/PyTorchLightning/pytorch-lightning/pull/2166))
- Enabled `prepare_data` from correct processes - clarify local vs global rank ([#2166](https://github.com/PyTorchLightning/pytorch-lightning/pull/2166))
- Remove explicit flush from tensorboard logger ([#2126](https://github.com/PyTorchLightning/pytorch-lightning/pull/2126))
- Changed epoch/step indexing from 1 instead of 0 ([#2206](https://github.com/PyTorchLightning/pytorch-lightning/pull/2206))
- Changed epoch indexing from 1 instead of 0 ([#2206](https://github.com/PyTorchLightning/pytorch-lightning/pull/2206))

### Deprecated

- Deprecated flags: ([#2213](https://github.com/PyTorchLightning/pytorch-lightning/pull/2213))
* `overfit_pct` >> `overfit_batches`
* `val_percent_check` >> `limit_val_batches`
* `test_percent_check` >> `limit_test_batches`
* `overfit_pct` in favour of `overfit_batches`
* `val_percent_check` in favour of `limit_val_batches`
* `test_percent_check` in favour of `limit_test_batches`
- Deprecated `ModelCheckpoint`'s attributes `best` and `kth_best_model` ([#1799](https://github.com/PyTorchLightning/pytorch-lightning/pull/1799))
- Dropped official support/testing for older PyTorch versions <1.3 ([#1917](https://github.com/PyTorchLightning/pytorch-lightning/pull/1917))
- Deprecated Trainer `proc_rank` in favour of `global_rank` ([#2166](https://github.com/PyTorchLightning/pytorch-lightning/pull/2166), [#2269](https://github.com/PyTorchLightning/pytorch-lightning/pull/2269))

### Removed

@@ -77,7 +103,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Run graceful training teardown on interpreter exit ([#1631](https://github.com/PyTorchLightning/pytorch-lightning/pull/1631))
- Fixed user warning when apex was used together with learning rate schedulers ([#1873](https://github.com/PyTorchLightning/pytorch-lightning/pull/1873))
- Fixed multiple calls of `EarlyStopping` callback ([#1751](https://github.com/PyTorchLightning/pytorch-lightning/issues/1751))
- Fixed multiple calls of `EarlyStopping` callback ([#1863](https://github.com/PyTorchLightning/pytorch-lightning/pull/1863))
- Fixed an issue with `Trainer.from_argparse_args` when passing in unknown Trainer args ([#1932](https://github.com/PyTorchLightning/pytorch-lightning/pull/1932))
- Fixed bug related to logger not being reset correctly for model after tuner algorithms ([#1933](https://github.com/PyTorchLightning/pytorch-lightning/pull/1933))
- Fixed root node resolution for SLURM cluster with dash in host name ([#1954](https://github.com/PyTorchLightning/pytorch-lightning/pull/1954))
@@ -89,8 +115,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed an issue with `_auto_collect_arguments` collecting local variables that are not constructor arguments and not working for signatures that have the instance not named `self` ([#2048](https://github.com/PyTorchLightning/pytorch-lightning/pull/2048))
- Fixed mistake in parameters' grad norm tracking ([#2012](https://github.com/PyTorchLightning/pytorch-lightning/pull/2012))
- Fixed CPU and hanging GPU crash ([#2118](https://github.com/PyTorchLightning/pytorch-lightning/pull/2118))

- Fixed an issue with the model summary and `example_input_array` depending on a specific ordering of the submodules in a LightningModule ([#1773](https://github.com/PyTorchLightning/pytorch-lightning/pull/1773))
- Fixed Tpu logging ([#2230](https://github.com/PyTorchLightning/pytorch-lightning/pull/2230))
- Fixed Pid port + duplicate `rank_zero` logging ([#2140](https://github.com/PyTorchLightning/pytorch-lightning/pull/2140), [#2231](https://github.com/PyTorchLightning/pytorch-lightning/pull/2231))

## [0.7.6] - 2020-05-16

14 changes: 13 additions & 1 deletion README.md
@@ -21,7 +21,13 @@
-->
</div>

---
## Trending contributors

[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/0)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/0)[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/1)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/1)[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/2)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/2)[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/3)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/3)[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/4)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/4)[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/5)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/5)[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/6)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/6)[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/7)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/7)

---

## Continuous Integration
<center>

@@ -47,6 +53,8 @@ conda install pytorch-lightning -c conda-forge

## Docs
- [master](https://pytorch-lightning.readthedocs.io/en/latest)
- [stable](https://pytorch-lightning.readthedocs.io/en/stable)
- [0.8.1](https://pytorch-lightning.readthedocs.io/en/0.8.1/)
- [0.7.6](https://pytorch-lightning.readthedocs.io/en/0.7.6/)
- [0.7.5](https://pytorch-lightning.readthedocs.io/en/0.7.5/)
- [0.7.3](https://pytorch-lightning.readthedocs.io/en/0.7.3/)
@@ -357,6 +365,7 @@ Check out this awesome list of research papers and implementations done with Lig
- [Transformers text classification](https://github.com/ricardorei/lightning-text-classification)
- [VAE Library of over 18+ VAE flavors](https://github.com/AntixK/PyTorch-VAE)
- [Finetune BERT, RoBERTa etc on QA Datasets like SQuAD](https://github.com/tshrjn/Finetune-QA/)
- [Pytorch-Lightning + Microsoft NNI with Docker](https://github.com/davinnovation/pytorch-boilerplate)

## Tutorials
Check out our [introduction guide](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) to get started.
@@ -374,6 +383,7 @@ If you have any questions, feel free to:
4. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A).

---

## FAQ
**How do I use Lightning for rapid research?**
[Here's a walk-through](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html)
@@ -440,6 +450,8 @@ pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.
- Adrian Wälchli [(awaelchli)](https://github.com/awaelchli)
- Nicki Skafte [(skaftenicki)](https://github.com/SkafteNicki)

---

#### Funding
Building open-source software with only a few part-time people is hard! We've secured funding to make sure we can
hire a full-time staff, attend conferences, and move faster through implementing features you request.
@@ -456,7 +468,7 @@ If you want to cite the framework feel free to use this (but only if you loved i
@article{falcon2019pytorch,
title={PyTorch Lightning},
author={Falcon, WA},
journal={GitHub. Note: https://github. com/williamFalcon/pytorch-lightning Cited by},
journal={GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning Cited by},
volume={3},
year={2019}
}
6 changes: 5 additions & 1 deletion docs/source/apex.rst
@@ -7,8 +7,10 @@
=================
Lightning offers 16-bit training for CPUs, GPUs and TPUs.

----------

GPU 16-bit
-----------
----------
16 bit precision can cut your memory footprint by half.
If using volta architecture GPUs it can give a dramatic training speed-up as well.

@@ -67,6 +69,8 @@ Enable 16-bit
If you need to configure the apex init for your particular use case or want to use a different way of doing
16-bit training, override :meth:`pytorch_lightning.core.LightningModule.configure_apex`.

----------

TPU 16-bit
----------
16-bit on TPUs is much simpler. To use 16-bit with TPUs set precision to 16 when using the tpu flag
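As an illustration of the 16-bit options this file documents (not part of the diff itself), a minimal sketch using the `precision` Trainer flag:

```python
from pytorch_lightning import Trainer

# GPU 16-bit: roughly halves memory use; fastest on Volta-class GPUs
trainer = Trainer(gpus=1, precision=16)

# TPU 16-bit: combine precision=16 with the TPU flag of this version,
# e.g. Trainer(num_tpu_cores=8, precision=16)  # flag name assumed for this release
```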
12 changes: 6 additions & 6 deletions docs/source/callbacks.rst
@@ -46,7 +46,7 @@ Example:
We successfully extended functionality without polluting our super clean
:class:`~pytorch_lightning.core.LightningModule` research code.
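The example collapsed above defines a small custom callback; as a reminder of the pattern, a minimal sketch with assumed names (not copied from the docs):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import Callback


class MyPrintingCallback(Callback):
    def on_train_start(self, trainer, pl_module):
        print('Training is starting')

    def on_train_end(self, trainer, pl_module):
        print('Training is ending')


trainer = Trainer(callbacks=[MyPrintingCallback()])
```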

---
----------------

.. automodule:: pytorch_lightning.callbacks.base
:noindex:
@@ -56,7 +56,7 @@ We successfully extended functionality without polluting our super clean
_abc_impl,
check_monitor_top_k,

---
----------------

.. automodule:: pytorch_lightning.callbacks.early_stopping
:noindex:
@@ -66,7 +66,7 @@ We successfully extended functionality without polluting our super clean
_abc_impl,
check_monitor_top_k,

---
----------------

.. automodule:: pytorch_lightning.callbacks.gradient_accumulation_scheduler
:noindex:
@@ -76,15 +76,15 @@ We successfully extended functionality without polluting our super clean
_abc_impl,
check_monitor_top_k,

---
----------------

.. automodule:: pytorch_lightning.callbacks.lr_logger
:noindex:
:exclude-members:
_extract_lr,
_find_names

---
----------------

.. automodule:: pytorch_lightning.callbacks.model_checkpoint
:noindex:
@@ -94,7 +94,7 @@ We successfully extended functionality without polluting our super clean
_abc_impl,
check_monitor_top_k,

---
----------------

.. automodule:: pytorch_lightning.callbacks.progress
:noindex:
3 changes: 2 additions & 1 deletion docs/source/child_modules.rst
@@ -43,7 +43,8 @@ that change in the `Autoencoder` model are the init, forward, training, validati

def forward(self, x):
generated = self.decoder(x)

return generated

def training_step(self, batch, batch_idx):
x, _ = batch

16 changes: 8 additions & 8 deletions docs/source/debugging.rst
@@ -6,7 +6,7 @@ Debugging
=========
The following are flags that make debugging much easier.

---
----------------

fast_dev_run
------------
@@ -21,7 +21,7 @@ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

trainer = Trainer(fast_dev_run=True)

---
----------------

Inspect gradient norms
----------------------
@@ -35,7 +35,7 @@ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)
# the 2-norm
trainer = Trainer(track_grad_norm=2)

---
----------------

Log GPU usage
-------------
@@ -48,7 +48,7 @@ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

trainer = Trainer(log_gpu_memory=True)

---
----------------

Make model overfit on subset of data
------------------------------------
@@ -61,7 +61,7 @@ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. testcode::

# use only 1% of training data (and use the same training Dataloader (with shuffle off) in val and test)
# use only 1% of training data (and use the same training dataloader (with shuffle off) in val and test)
trainer = Trainer(overfit_batches=0.01)

# or overfit a number of batches
@@ -70,7 +70,7 @@ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)
With this flag, the train, val, and test sets will all be the same train set. We will also replace the sampler
in the training set to turn off shuffle for you.

---
----------------

Print a summary of your LightningModule
---------------------------------------
@@ -99,7 +99,7 @@ See Also:
- :paramref:`~pytorch_lightning.trainer.trainer.Trainer.weights_summary` Trainer argument
- :class:`~pytorch_lightning.core.memory.ModelSummary`

---
----------------

Shorten epochs
--------------
@@ -116,7 +116,7 @@ On larger datasets like Imagenet, this can help you debug or test a few things f
# use 10 batches of train and 5 batches of val
trainer = Trainer(limit_train_batches=10, limit_val_batches=5)

---
----------------

Set the number of validation sanity steps
-----------------------------------------
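The body of this last section is collapsed in the diff; the flag it refers to is presumably `num_sanity_val_steps` (a sketch, with the flag name assumed for this version):

```python
# run a couple of validation batches before the real training routine starts
trainer = Trainer(num_sanity_val_steps=2)
```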
6 changes: 6 additions & 0 deletions docs/source/early_stopping.rst
@@ -13,12 +13,16 @@ You can stop an epoch early by overriding :meth:`~pytorch_lightning.core.lightni

If you do this repeatedly, for every epoch you had originally requested, then this will stop your entire run.

----------

Default Epoch End Callback Behavior
-----------------------------------
By default early stopping will be enabled if `'val_loss'`
is found in :meth:`~pytorch_lightning.core.lightning.LightningModule.validation_epoch_end`'s
return dict. Otherwise training will proceed with early stopping disabled.
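A minimal sketch of the default behaviour just described (the method body is assumed, not part of this diff):

```python
import torch

# inside your LightningModule
def validation_epoch_end(self, outputs):
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    # returning 'val_loss' is what enables the default early stopping
    return {'val_loss': avg_loss}
```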

----------

Enable Early Stopping using the EarlyStopping Callback
------------------------------------------------------
The
@@ -81,6 +85,8 @@ and change where it is called:
- :class:`~pytorch_lightning.trainer.trainer.Trainer`
- :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping`

----------

Disable Early Stopping with callbacks on epoch end
--------------------------------------------------
To disable early stopping pass ``False`` to the
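The sentence above is truncated by the collapsed diff; in this Lightning version early stopping is typically switched off through the Trainer's early-stopping argument (flag name assumed, not visible in the diff):

```python
trainer = Trainer(early_stop_callback=False)
```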