diff --git a/docs/source-pytorch/common/checkpointing_basic.rst b/docs/source-pytorch/common/checkpointing_basic.rst
index 5c74178f0eaaa..1026e972849ef 100644
--- a/docs/source-pytorch/common/checkpointing_basic.rst
+++ b/docs/source-pytorch/common/checkpointing_basic.rst
@@ -20,6 +20,13 @@ PyTorch Lightning checkpoints are fully usable in plain PyTorch.
 
 ----
 
+.. important::
+
+   **Deprecated:** ``resume_from_checkpoint``
+
+   The ``resume_from_checkpoint`` argument of ``Trainer`` was deprecated in PyTorch Lightning v1.5 and removed in v2.0.
+   To resume training from a checkpoint, pass the ``ckpt_path`` argument to ``Trainer.fit()`` instead.
+
 ************************
 Contents of a checkpoint
 ************************
@@ -197,16 +204,32 @@ You can disable checkpointing by passing:
 
 ----
 
+
 *********************
 Resume training state
 *********************
 
 If you don't just want to load weights, but instead restore the full training, do the following:
 
+Correct usage:
+
 .. code-block:: python
 
     model = LitModel()
     trainer = Trainer()
 
     # automatically restores model, epoch, step, LR schedulers, etc...
-    trainer.fit(model, ckpt_path="some/path/to/my_checkpoint.ckpt")
+    trainer.fit(model, ckpt_path="path/to/your/checkpoint.ckpt")
+
+.. warning::
+
+   ``Trainer(resume_from_checkpoint=...)`` was deprecated in PyTorch Lightning v1.5 and removed in v2.0; passing it raises an error on current releases.
+   Use the ``ckpt_path`` argument of ``Trainer.fit()`` shown above instead.
+
+Incorrect (deprecated) usage:
+
+.. code-block:: python
+
+    # deprecated in v1.5, removed in v2.0; raises an error on current releases
+    trainer = Trainer(resume_from_checkpoint="path/to/your/checkpoint.ckpt")
+    trainer.fit(model)
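
For a self-contained view of the resume flow documented above, the sketch below saves a checkpoint and then resumes from it with ``ckpt_path``. It is illustrative only and not part of the diff: the ``LitModel`` definition, the synthetic data, and the ``ckpts/last.ckpt`` path are assumptions made for this example, and the ``lightning.pytorch`` import path assumes Lightning 2.x (use ``pytorch_lightning`` on older installs).

.. code-block:: python

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from lightning.pytorch import LightningModule, Trainer
    from lightning.pytorch.callbacks import ModelCheckpoint


    class LitModel(LightningModule):
        """Hypothetical stand-in for the LitModel referenced in the docs."""

        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return nn.functional.mse_loss(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.01)


    def make_loader():
        # tiny synthetic dataset so the example runs anywhere
        x, y = torch.randn(64, 32), torch.randn(64, 1)
        return DataLoader(TensorDataset(x, y), batch_size=8)


    if __name__ == "__main__":
        # first run: save_last=True writes a predictable ckpts/last.ckpt
        first = Trainer(
            max_epochs=1,
            callbacks=[ModelCheckpoint(dirpath="ckpts", save_last=True)],
            logger=False,
        )
        first.fit(LitModel(), make_loader())

        # resumed run: ckpt_path restores weights, epoch/step counters,
        # optimizer and LR scheduler state (the replacement for the
        # removed Trainer(resume_from_checkpoint=...) argument)
        resumed = Trainer(
            max_epochs=2,
            callbacks=[ModelCheckpoint(dirpath="ckpts", save_last=True)],
            logger=False,
        )
        resumed.fit(LitModel(), make_loader(), ckpt_path="ckpts/last.ckpt")

``save_last=True`` is used here only to make the checkpoint path predictable; with the default callback settings, the filename encodes the epoch and step, and you would pass that generated path to ``ckpt_path`` instead.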