move tensorboardX to extra #16349

Merged · 18 commits · Jan 16, 2023
2 changes: 1 addition & 1 deletion .github/workflows/ci-pkg-install.yml
@@ -99,4 +99,4 @@ jobs:
pip install -q "pytest-doctestplus>=0.9.0"
pip list
PKG_NAME=$(python -c "print({'app': 'lit_app', 'fabric': 'lit_fabric', 'pytorch': 'pl'}.get('${{matrix.pkg-name}}', ''))")
- python -m pytest src/${PKG_NAME} --ignore-glob="**/cli/*-template/**"
+ python -m pytest src/${PKG_NAME} --ignore-glob="**/cli/*-template/**" --doctest-plus
3 changes: 2 additions & 1 deletion docs/source-pytorch/common/trainer.rst
@@ -817,10 +817,11 @@ logger
:doc:`Logger <../visualize/loggers>` (or iterable collection of loggers) for experiment tracking. A ``True`` value uses the default ``TensorBoardLogger`` shown below. ``False`` will disable logging.

.. testcode::
+ :skipif: not _TENSORBOARD_AVAILABLE and not _TENSORBOARDX_AVAILABLE

from pytorch_lightning.loggers import TensorBoardLogger

- # default logger used by trainer
+ # default logger used by trainer (if tensorboard is installed)
logger = TensorBoardLogger(save_dir=os.getcwd(), version=1, name="lightning_logs")
Trainer(logger=logger)

1 change: 1 addition & 0 deletions docs/source-pytorch/conf.py
@@ -400,6 +400,7 @@ def package_list_from_file(file):
from pytorch_lightning.utilities import (
_TORCHVISION_AVAILABLE,
)
+ from lightning_fabric.loggers.tensorboard import _TENSORBOARD_AVAILABLE, _TENSORBOARDX_AVAILABLE
from pytorch_lightning.loggers.neptune import _NEPTUNE_AVAILABLE
from pytorch_lightning.loggers.comet import _COMET_AVAILABLE
from pytorch_lightning.loggers.mlflow import _MLFLOW_AVAILABLE
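For context, a minimal sketch of how such availability flags are typically defined, assuming they use `RequirementCache` from `lightning_utilities`; the real definitions live in `lightning_fabric.loggers.tensorboard`, so treat the code below as illustrative:

    from lightning_utilities.core.imports import RequirementCache

    # truthy iff the package is importable with a satisfying version
    _TENSORBOARD_AVAILABLE = RequirementCache("tensorboard")
    _TENSORBOARDX_AVAILABLE = RequirementCache("tensorboardX")

    # a docs `:skipif:` expression like the one added above evaluates to True
    # (i.e. skip the example) only when neither backend is importable
    skip_example = not _TENSORBOARD_AVAILABLE and not _TENSORBOARDX_AVAILABLE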
5 changes: 3 additions & 2 deletions docs/source-pytorch/extensions/logging.rst
@@ -62,6 +62,7 @@ To visualize tensorboard in a jupyter notebook environment, run the following command
You can also pass a custom Logger to the :class:`~pytorch_lightning.trainer.trainer.Trainer`.

.. testcode::
+ :skipif: not _TENSORBOARD_AVAILABLE and not _TENSORBOARDX_AVAILABLE

from pytorch_lightning import loggers as pl_loggers

@@ -79,7 +80,7 @@ Choose from any of the others such as MLflow, Comet, Neptune, WandB, etc.
To use multiple loggers, simply pass in a ``list`` or ``tuple`` of loggers.

.. testcode::
- :skipif: not _COMET_AVAILABLE
+ :skipif: (not _TENSORBOARD_AVAILABLE and not _TENSORBOARDX_AVAILABLE) or not _COMET_AVAILABLE

tb_logger = pl_loggers.TensorBoardLogger(save_dir="logs/")
comet_logger = pl_loggers.CometLogger(save_dir="logs/")
@@ -378,7 +379,7 @@ When Lightning creates a checkpoint, it stores a key ``"hyper_parameters"`` with

Some loggers also allow logging the hyperparams used in the experiment. For instance,
when using the ``TensorBoardLogger``, all hyperparams will show
- in the `hparams tab <https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter.add_hparams>`_.
+ in the hparams tab at :meth:`torch.utils.tensorboard.writer.SummaryWriter.add_hparams`.

.. note::
If you want to track a metric in the tensorboard hparams tab, log scalars to the key ``hp_metric``. If tracking multiple metrics, initialize ``TensorBoardLogger`` with ``default_hp_metric=False`` and call ``log_hyperparams`` only once with your metric keys and initial values. Subsequent updates can simply be logged to the metric keys. Refer to the examples below for setting up proper hyperparams metrics tracking within the :doc:`LightningModule <../common/lightning_module>`.
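A minimal sketch of that pattern (hypothetical metric key and helper; the docs examples referenced above are the authoritative version):

    import pytorch_lightning as pl
    from pytorch_lightning.loggers import TensorBoardLogger

    class LitModel(pl.LightningModule):
        def on_train_start(self):
            # log hyperparams together with the tracked metric keys, exactly once
            self.logger.log_hyperparams(self.hparams, {"hp/val_loss": 0.0})

        def validation_step(self, batch, batch_idx):
            loss = self._shared_step(batch)  # hypothetical helper
            self.log("hp/val_loss", loss)  # later updates simply reuse the same key

    # default_hp_metric=False suppresses the automatic `hp_metric` placeholder
    logger = TensorBoardLogger(save_dir="logs/", default_hp_metric=False)
    trainer = pl.Trainer(logger=logger)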
4 changes: 2 additions & 2 deletions docs/source-pytorch/starter/introduction.rst
@@ -139,7 +139,7 @@ A LightningModule enables your PyTorch nn.Module to play together in complex ways
z = self.encoder(x)
x_hat = self.decoder(z)
loss = nn.functional.mse_loss(x_hat, x)
- # Logging to TensorBoard by default
+ # Logging to TensorBoard (if installed) by default
self.log("train_loss", loss)
return loss

@@ -218,7 +218,7 @@ Once you've trained the model you can export to onnx, torchscript and put it into production
*********************
6: Visualize training
*********************
- Lightning comes with a *lot* of batteries included. A helpful one is Tensorboard for visualizing experiments.
+ If you have tensorboard installed, you can use it for visualizing experiments.

Run this on your commandline and open your browser to **http://localhost:6006/**

6 changes: 3 additions & 3 deletions docs/source-pytorch/visualize/logging_basic.rst
@@ -58,13 +58,13 @@ TODO: need progress bar here

View in the browser
===================
- To view metrics in the browser you need to use an *experiment manager* with these capabilities. By Default, Lightning uses Tensorboard which is free and opensource.
+ To view metrics in the browser you need to use an *experiment manager* with these capabilities.

- Tensorboard is already enabled by default
+ By default, Lightning uses Tensorboard (if available) and a simple CSV logger otherwise.

.. code-block:: python

- # every trainer already has tensorboard enabled by default
+ # every trainer already has tensorboard enabled by default (if the dependency is available)
trainer = Trainer()

To launch the tensorboard dashboard run the following command on the commandline.
4 changes: 2 additions & 2 deletions docs/source-pytorch/visualize/supported_exp_managers.rst
@@ -104,7 +104,7 @@ Here's the full documentation for the :class:`~pytorch_lightning.loggers.NeptuneLogger`.

Tensorboard
===========
- `TensorBoard <https://pytorch.org/docs/stable/tensorboard.html>`_ already comes installed with Lightning. If you removed the install install the following package.
+ `TensorBoard <https://pytorch.org/docs/stable/tensorboard.html>`_ can be installed with:

.. code-block:: bash

@@ -179,7 +179,7 @@ Use multiple exp managers
To use multiple experiment managers at the same time, pass a list to the *logger* :class:`~pytorch_lightning.trainer.trainer.Trainer` argument.

.. testcode::
- :skipif: not _WANDB_AVAILABLE
+ :skipif: (not _TENSORBOARD_AVAILABLE and not _TENSORBOARDX_AVAILABLE) or not _WANDB_AVAILABLE

from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger

1 change: 0 additions & 1 deletion requirements/pytorch/base.txt
@@ -6,7 +6,6 @@ torch>=1.10.0, <=1.13.1
tqdm>=4.57.0, <4.65.0
PyYAML>=5.4, <=6.0
fsspec[http]>2021.06.0, <2022.8.0
- tensorboardX>=2.2, <=2.5.1  # min version is set by torch.onnx missing attribute
torchmetrics>=0.7.0, <0.10.1 # needed for using fixed compare_version
packaging>=17.0, <=21.3
typing-extensions>=4.0.0, <=4.4.0
1 change: 1 addition & 0 deletions requirements/pytorch/extra.txt
@@ -7,3 +7,4 @@ omegaconf>=2.0.5, <2.3.0
hydra-core>=1.0.5, <1.3.0
jsonargparse[signatures]>=4.18.0, <4.19.0
rich>=10.14.0, !=10.15.0.a, <13.0.0
+ tensorboardX>=2.2, <=2.5.1  # min version is set by torch.onnx missing attribute
1 change: 1 addition & 0 deletions requirements/pytorch/loggers.info
@@ -3,3 +3,4 @@ neptune-client
comet-ml
mlflow>=1.0.0
wandb
+ tensorboard>=2.9.1
3 changes: 2 additions & 1 deletion src/pytorch_lightning/CHANGELOG.md
@@ -20,7 +20,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Added support for colossalai 0.1.11 ([#15888](https://github.com/Lightning-AI/lightning/pull/15888))
- Added `LightningCLI` support for optimizer and learning schedulers via callable type dependency injection ([#15869](https://github.com/Lightning-AI/lightning/pull/15869))
- Added support for activation checkpointing for the `DDPFullyShardedNativeStrategy` strategy ([#15826](https://github.com/Lightning-AI/lightning/pull/15826))
- - Added the option to set `DDPFullyShardedNativeStrategy(cpu_offload=True|False)` via bool instead of needing to pass a configufation object ([#15832](https://github.com/Lightning-AI/lightning/pull/15832))
+ - Added the option to set `DDPFullyShardedNativeStrategy(cpu_offload=True|False)` via bool instead of needing to pass a configuration object ([#15832](https://github.com/Lightning-AI/lightning/pull/15832))
- Added info message for Ampere CUDA GPU users to enable tf32 matmul precision ([#16037](https://github.com/Lightning-AI/lightning/pull/16037))
- Added support for returning optimizer-like classes in `LightningModule.configure_optimizers` ([#16189](https://github.com/Lightning-AI/lightning/pull/16189))

@@ -35,6 +35,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- The Trainer now raises an error if it is given multiple stateful callbacks of the same type with colliding state keys ([#15634](https://github.com/Lightning-AI/lightning/pull/15634))
- `MLFlowLogger` now logs hyperparameters and metrics in batched API calls ([#15915](https://github.com/Lightning-AI/lightning/pull/15915))
- Overriding the `on_train_batch_{start,end}` hooks in conjunction with taking a `dataloader_iter` in the `training_step` no longer errors out and instead shows a warning ([#16062](https://github.com/Lightning-AI/lightning/pull/16062))
+ - Move `tensorboardX` to extra dependencies. Use the `CSVLogger` by default ([#16349](https://github.com/Lightning-AI/lightning/pull/16349))


### Deprecated
6 changes: 5 additions & 1 deletion src/pytorch_lightning/loggers/tensorboard.py
@@ -24,7 +24,7 @@
from torch import Tensor

import pytorch_lightning as pl
- from lightning_fabric.loggers.tensorboard import _TENSORBOARD_AVAILABLE
+ from lightning_fabric.loggers.tensorboard import _TENSORBOARD_AVAILABLE, _TENSORBOARDX_AVAILABLE
from lightning_fabric.loggers.tensorboard import TensorBoardLogger as FabricTensorBoardLogger
from lightning_fabric.utilities.logger import _convert_params
from lightning_fabric.utilities.types import _PATH
@@ -39,6 +39,10 @@
if _OMEGACONF_AVAILABLE:
from omegaconf import Container, OmegaConf

+ # Skip doctests if requirements aren't available
+ if not (_TENSORBOARD_AVAILABLE or _TENSORBOARDX_AVAILABLE):
+     __doctest_skip__ = ["TensorBoardLogger", "TensorBoardLogger.*"]


class TensorBoardLogger(Logger, FabricTensorBoardLogger):
r"""
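For readers unfamiliar with it, `__doctest_skip__` is a pytest-doctestplus convention: a module-level list of name patterns whose docstring examples are skipped when tests run with `--doctest-plus`. A toy illustration (not the Lightning source):

    # toy_module.py
    __doctest_skip__ = ["heavy_fn"]  # doctests of matching names are skipped

    def heavy_fn():
        """Return 42 using an optional dependency.

        >>> heavy_fn()
        42
        """
        import tensorboard  # noqa: F401  # would raise ImportError if missing
        return 42

Guarding the assignment behind the availability check, as the diff above does, makes the skip conditional on the dependency actually being absent.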
src/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py
@@ -14,15 +14,20 @@
from typing import Any, Iterable, Optional, Union

from lightning_utilities.core.apply_func import apply_to_collection
+ from lightning_utilities.core.rank_zero import WarningCache
from torch import Tensor

import pytorch_lightning as pl
+ from lightning_fabric.loggers import CSVLogger
+ from lightning_fabric.loggers.tensorboard import _TENSORBOARD_AVAILABLE, _TENSORBOARDX_AVAILABLE
from lightning_fabric.plugins.environments import SLURMEnvironment
from lightning_fabric.utilities import move_data_to_device
from lightning_fabric.utilities.apply_func import convert_tensors_to_scalars
from pytorch_lightning.loggers import Logger, TensorBoardLogger
from pytorch_lightning.trainer.connectors.logger_connector.result import _METRICS, _OUT_DICT, _PBAR_DICT

+ warning_cache = WarningCache()


class LoggerConnector:
def __init__(self, trainer: "pl.Trainer") -> None:
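For reference, `WarningCache` deduplicates messages, so the fallback warning emitted in `configure_logger` below fires only once per process. A small illustration (assuming `lightning_utilities` is installed, which the import above implies):

    from lightning_utilities.core.rank_zero import WarningCache

    cache = WarningCache()
    cache.warn("tensorboard not found")  # emitted
    cache.warn("tensorboard not found")  # suppressed: each unique message warns once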
@@ -57,9 +62,18 @@ def configure_logger(self, logger: Union[bool, Logger, Iterable[Logger]]) -> None:
self.trainer.loggers = []
elif logger is True:
# default logger
- self.trainer.loggers = [
-     TensorBoardLogger(save_dir=self.trainer.default_root_dir, version=SLURMEnvironment.job_id())
- ]
+ if _TENSORBOARD_AVAILABLE or _TENSORBOARDX_AVAILABLE:
+     logger_ = TensorBoardLogger(save_dir=self.trainer.default_root_dir, version=SLURMEnvironment.job_id())
+ else:
+     warning_cache.warn(
+         "Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `pytorch_lightning`"
+         " package, due to potential conflicts with other packages in the ML ecosystem. For this reason,"
+         " `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard`"
+         " or `tensorboardX` packages are found."
+         " Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default"
+     )
+     logger_ = CSVLogger(root_dir=self.trainer.default_root_dir)  # type: ignore[assignment]
+ self.trainer.loggers = [logger_]
elif isinstance(logger, Iterable):
self.trainer.loggers = list(logger)
else:
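The net effect is that `Trainer(logger=True)` degrades gracefully. A sketch of the resulting behavior under this PR (assuming pytorch_lightning 1.9; not an excerpt from the test suite):

    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

    trainer = Trainer()  # logger=True is the default
    # with tensorboard or tensorboardX installed -> TensorBoardLogger
    # with neither installed -> the warning above, then a CSVLogger fallback
    assert isinstance(trainer.loggers[0], (TensorBoardLogger, CSVLogger))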
6 changes: 3 additions & 3 deletions src/pytorch_lightning/trainer/trainer.py
@@ -286,9 +286,9 @@ def __init__(
Default: ``1.0``.

logger: Logger (or iterable collection of loggers) for experiment tracking. A ``True`` value uses
- the default ``TensorBoardLogger``. ``False`` will disable logging. If multiple loggers are
- provided, local files (checkpoints, profiler traces, etc.) are saved in the ``log_dir`` of
- the first logger.
+ the default ``TensorBoardLogger`` if it is installed, otherwise ``CSVLogger``.
+ ``False`` will disable logging. If multiple loggers are provided, local files
+ (checkpoints, profiler traces, etc.) are saved in the ``log_dir`` of the first logger.
Default: ``True``.

log_every_n_steps: How often to log within steps.
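A short sketch of the accepted `logger` values described in this docstring (directory names are arbitrary):

    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

    Trainer(logger=True)   # default: TensorBoardLogger if installed, else CSVLogger
    Trainer(logger=False)  # disable logging entirely
    Trainer(logger=CSVLogger("logs/"))  # one explicit logger
    # multiple loggers: local files go to the first logger's log_dir
    Trainer(logger=[TensorBoardLogger("logs/"), CSVLogger("logs/")])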