Update checkpointing documentation to mark resume_from_checkpoint as deprecated (#20361) (#20477)

* Update checkpointing documentation to mark resume_from_checkpoint as deprecated

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update docs/source-pytorch/common/checkpointing_basic.rst

Co-authored-by: Luca Antiga <[email protected]>

* Update docs/source-pytorch/common/checkpointing_basic.rst

Co-authored-by: Luca Antiga <[email protected]>

* Address review comments

* Address review comments

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Luca Antiga <[email protected]>
Co-authored-by: Luca Antiga <[email protected]>
4 people authored Dec 10, 2024
1 parent 030f36b commit 30545d6
Showing 1 changed file with 23 additions and 1 deletion.
24 changes: 23 additions & 1 deletion docs/source-pytorch/common/checkpointing_basic.rst
@@ -20,6 +20,13 @@ PyTorch Lightning checkpoints are fully usable in plain PyTorch.

----

.. important::

    **Deprecated argument**

    As of PyTorch Lightning v1.0.0, the ``resume_from_checkpoint`` Trainer argument is deprecated. To resume training from a checkpoint, pass the ``ckpt_path`` argument to the ``fit()`` method instead.
    Update your code accordingly to avoid compatibility issues.

************************
Contents of a checkpoint
************************
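The body of this section is elided in the diff, but for orientation: a Lightning ``.ckpt`` file is an ordinary pickled dictionary. A minimal stdlib sketch (the key names mirror Lightning's documented checkpoint contents; the values are placeholders, and ``pickle`` stands in for ``torch.save``/``torch.load``):

```python
import io
import pickle

# Placeholder mirroring the top-level keys of a Lightning checkpoint;
# in a real .ckpt the values are tensors and optimizer/scheduler dicts.
checkpoint = {
    "epoch": 4,
    "global_step": 1000,
    "state_dict": {},        # model weights
    "optimizer_states": [],  # one entry per optimizer
    "lr_schedulers": [],     # one entry per LR scheduler
}

# Round-trip through a buffer, the way a checkpoint round-trips through disk.
buf = io.BytesIO()
pickle.dump(checkpoint, buf)
buf.seek(0)
restored = pickle.load(buf)
```

Because the checkpoint is just a dict, individual pieces (e.g. ``restored["state_dict"]``) can be loaded into plain PyTorch without the Trainer.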
@@ -197,16 +204,31 @@ You can disable checkpointing by passing:

----


*********************
Resume training state
*********************

If you want to restore the full training state rather than just the model weights, do the following:

Correct usage:

.. code-block:: python

    model = LitModel()
    trainer = Trainer()

    # automatically restores model, epoch, step, LR schedulers, etc...
    trainer.fit(model, ckpt_path="path/to/your/checkpoint.ckpt")

.. warning::

    The ``resume_from_checkpoint`` argument is deprecated since PyTorch Lightning v1.0.0.
    To resume training from a checkpoint, use the ``ckpt_path`` argument in the ``fit()`` method instead.

Incorrect (deprecated) usage:

.. code-block:: python

    trainer = Trainer(resume_from_checkpoint="path/to/your/checkpoint.ckpt")
    trainer.fit(model)
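For codebases that still build ``Trainer`` kwargs dynamically, a small shim can translate the old argument into the new call site. A minimal sketch (the helper ``migrate_trainer_kwargs`` is hypothetical, not part of Lightning):

```python
def migrate_trainer_kwargs(trainer_kwargs):
    """Split the deprecated ``resume_from_checkpoint`` out of Trainer kwargs.

    Returns a (trainer_kwargs, fit_kwargs) pair: the checkpoint path is moved
    to the ``ckpt_path`` argument expected by ``Trainer.fit()``.
    """
    trainer_kwargs = dict(trainer_kwargs)  # don't mutate the caller's dict
    fit_kwargs = {}
    ckpt = trainer_kwargs.pop("resume_from_checkpoint", None)
    if ckpt is not None:
        fit_kwargs["ckpt_path"] = ckpt
    return trainer_kwargs, fit_kwargs


# Old-style kwargs are rewritten so the path goes to fit() instead:
t_kwargs, f_kwargs = migrate_trainer_kwargs(
    {"max_epochs": 10, "resume_from_checkpoint": "path/to/your/checkpoint.ckpt"}
)
# Trainer(**t_kwargs); trainer.fit(model, **f_kwargs)
```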
