Rename profiler directory to profilers #12308

Merged
4 changes: 2 additions & 2 deletions .github/CODEOWNERS
@@ -33,8 +33,8 @@
/src/pytorch_lightning/loops @tchaton @awaelchli @justusschock @carmocca
/src/pytorch_lightning/overrides @tchaton @SeanNaren @borda
/src/pytorch_lightning/plugins @tchaton @SeanNaren @awaelchli @justusschock
-/src/pytorch_lightning/profiler @williamfalcon @tchaton @borda @carmocca
-/src/pytorch_lightning/profiler/pytorch.py @nbcsm @guotuofeng
+/src/pytorch_lightning/profilers @williamfalcon @tchaton @borda @carmocca
+/src/pytorch_lightning/profilers/pytorch.py @nbcsm @guotuofeng
/src/pytorch_lightning/strategies @tchaton @SeanNaren @awaelchli @justusschock @kaushikb11
/src/pytorch_lightning/trainer @williamfalcon @borda @tchaton @SeanNaren @carmocca @awaelchli @justusschock @kaushikb11
/src/pytorch_lightning/trainer/connectors @tchaton @SeanNaren @carmocca @borda
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -143,6 +143,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Deprecated LightningCLI's registries in favor of importing the respective package ([#13221](https://github.com/PyTorchLightning/pytorch-lightning/pull/13221))



+- Deprecated `pytorch_lightning.profiler` in favor of `pytorch_lightning.profilers` ([#12308](https://github.com/PyTorchLightning/pytorch-lightning/pull/12308))


### Removed

- Removed the deprecated `Logger.close` method ([#13149](https://github.com/PyTorchLightning/pytorch-lightning/pull/13149))
2 changes: 1 addition & 1 deletion dockers/tpu-tests/tpu_test_cases.jsonnet
@@ -36,7 +36,7 @@ local tputests = base.BaseTest {
# TODO (@kaushikb11): Add device stats tests here
coverage run --source=pytorch_lightning -m pytest -v --capture=no \
strategies/test_tpu_spawn.py \
-profiler/test_xla_profiler.py \
+profilers/test_xla_profiler.py \
accelerators/test_tpu.py \
models/test_tpu.py \
plugins/environments/test_xla_environment.py
2 changes: 1 addition & 1 deletion docs/source-pytorch/api_references.rst
@@ -232,7 +232,7 @@ others
profiler
--------

-.. currentmodule:: pytorch_lightning.profiler
+.. currentmodule:: pytorch_lightning.profilers

.. autosummary::
:toctree: api
2 changes: 1 addition & 1 deletion docs/source-pytorch/common/trainer.rst
@@ -1213,7 +1213,7 @@ See the :doc:`profiler documentation <../tuning/profiler>`. for more details.

.. testcode::

-from pytorch_lightning.profiler import SimpleProfiler, AdvancedProfiler
+from pytorch_lightning.profilers import SimpleProfiler, AdvancedProfiler

# default used by the Trainer
trainer = Trainer(profiler=None)
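
For readers tracking the rename, here is a minimal self-contained sketch of the new import path in use. This is illustrative only, assuming pytorch_lightning >= 1.7 where `pytorch_lightning.profilers` exists:

    from pytorch_lightning import Trainer
    from pytorch_lightning.profilers import SimpleProfiler

    # measure the standard training-loop methods and report a summary at the end of fit
    trainer = Trainer(profiler=SimpleProfiler())
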
4 changes: 2 additions & 2 deletions docs/source-pytorch/tuning/profiler_advanced.rst
@@ -12,11 +12,11 @@ Find bottlenecks in your code (advanced)
************************
Profile cloud TPU models
************************
-To profile TPU models use the :class:`~pytorch_lightning.profiler.xla.XLAProfiler`
+To profile TPU models use the :class:`~pytorch_lightning.profilers.xla.XLAProfiler`

.. code-block:: python

-from pytorch_lightning.profiler import XLAProfiler
+from pytorch_lightning.profilers import XLAProfiler

profiler = XLAProfiler(port=9001)
trainer = Trainer(profiler=profiler)
4 changes: 2 additions & 2 deletions docs/source-pytorch/tuning/profiler_basic.rst
@@ -68,7 +68,7 @@ The simple profiler measures all the standard methods used in the training loop
**************************************
Profile the time within every function
**************************************
-To profile the time within every function, use the :class:`~pytorch_lightning.profiler.advanced.AdvancedProfiler` built on top of Python's `cProfiler <https://docs.python.org/3/library/profile.html#module-cProfile>`_.
+To profile the time within every function, use the :class:`~pytorch_lightning.profilers.advanced.AdvancedProfiler` built on top of Python's `cProfiler <https://docs.python.org/3/library/profile.html#module-cProfile>`_.


.. code-block:: python
@@ -101,7 +101,7 @@ If the profiler report becomes too long, you can stream the report to a file:

.. code-block:: python

-from pytorch_lightning.profiler import AdvancedProfiler
+from pytorch_lightning.profilers import AdvancedProfiler

profiler = AdvancedProfiler(dirpath=".", filename="perf_logs")
trainer = Trainer(profiler=profiler)
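
As an aside, the `AdvancedProfiler` docstring (visible in the src/pytorch_lightning/profiler/advanced.py diff below) also documents a `line_count_restriction` argument for trimming the cProfile report; a hedged usage sketch:

    from pytorch_lightning.profilers import AdvancedProfiler

    # keep only the top 5% of report lines per profiled action
    profiler = AdvancedProfiler(dirpath=".", filename="perf_logs", line_count_restriction=0.05)
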
8 changes: 4 additions & 4 deletions docs/source-pytorch/tuning/profiler_expert.rst
@@ -12,12 +12,12 @@ Find bottlenecks in your code (expert)
***********************
Build your own profiler
***********************
-To build your own profiler, subclass :class:`~pytorch_lightning.profiler.base.Profiler`
+To build your own profiler, subclass :class:`~pytorch_lightning.profilers.profiler.Profiler`
and override some of its methods. Here is a simple example that profiles the first occurrence and total calls of each action:

.. code-block:: python

-from pytorch_lightning.profiler import Profiler
+from pytorch_lightning.profilers import Profiler
from collections import defaultdict
import time

@@ -69,7 +69,7 @@ To profile a specific action of interest, reference a profiler in the LightningM

.. code-block:: python

-from pytorch_lightning.profiler import SimpleProfiler, PassThroughProfiler
+from pytorch_lightning.profilers import SimpleProfiler, PassThroughProfiler


class MyModel(LightningModule):
@@ -90,7 +90,7 @@ Here's the full code:

.. code-block:: python

-from pytorch_lightning.profiler import SimpleProfiler, PassThroughProfiler
+from pytorch_lightning.profilers import SimpleProfiler, PassThroughProfiler


class MyModel(LightningModule):
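
The hunk above cuts off at `class MyModel(LightningModule):`. To make the subclassing idea concrete, here is a minimal custom profiler along the lines the doc describes (first occurrence and total calls per action). The class name and bookkeeping are illustrative, not part of this PR:

    import time
    from collections import defaultdict

    from pytorch_lightning.profilers import Profiler


    class ActionCountProfiler(Profiler):
        """Records when each action first started and how often it ran."""

        def __init__(self, dirpath=None, filename=None):
            super().__init__(dirpath=dirpath, filename=filename)
            self._first_seen = {}
            self._call_counts = defaultdict(int)

        def start(self, action_name: str) -> None:
            # remember the first time an action starts and count every call
            self._first_seen.setdefault(action_name, time.monotonic())
            self._call_counts[action_name] += 1

        def stop(self, action_name: str) -> None:
            pass

        def summary(self) -> str:
            return "\n".join(
                f"{name}: first started at {t:.3f}, {self._call_counts[name]} call(s)"
                for name, t in self._first_seen.items()
            )
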
12 changes: 6 additions & 6 deletions docs/source-pytorch/tuning/profiler_intermediate.rst
@@ -12,11 +12,11 @@ Find bottlenecks in your code (intermediate)
**************************
Profile pytorch operations
**************************
-To understand the cost of each PyTorch operation, use the :class:`~pytorch_lightning.profiler.pytorch.PyTorchProfiler` built on top of the `PyTorch profiler <https://pytorch.org/docs/master/profiler.html>`__.
+To understand the cost of each PyTorch operation, use the :class:`~pytorch_lightning.profilers.pytorch.PyTorchProfiler` built on top of the `PyTorch profiler <https://pytorch.org/docs/master/profiler.html>`__.

.. code-block:: python

-from pytorch_lightning.profiler import PyTorchProfiler
+from pytorch_lightning.profilers import PyTorchProfiler

profiler = PyTorchProfiler()
trainer = Trainer(profiler=profiler)
@@ -65,11 +65,11 @@ The profiler will generate an output like this:
***************************
Profile a distributed model
***************************
-To profile a distributed model, use the :class:`~pytorch_lightning.profiler.pytorch.PyTorchProfiler` with the *filename* argument which will save a report per rank.
+To profile a distributed model, use the :class:`~pytorch_lightning.profilers.pytorch.PyTorchProfiler` with the *filename* argument which will save a report per rank.

.. code-block:: python

-from pytorch_lightning.profiler import PyTorchProfiler
+from pytorch_lightning.profilers import PyTorchProfiler

profiler = PyTorchProfiler(filename="perf-logs")
trainer = Trainer(profiler=profiler)
@@ -153,11 +153,11 @@ to extend the scope of profiled functions.
*****************************
Visualize profiled operations
*****************************
-To visualize the profiled operations, enable **emit_nvtx** in the :class:`~pytorch_lightning.profiler.pytorch.PyTorchProfiler`.
+To visualize the profiled operations, enable **emit_nvtx** in the :class:`~pytorch_lightning.profilers.pytorch.PyTorchProfiler`.

.. code-block:: python

-from pytorch_lightning.profiler import PyTorchProfiler
+from pytorch_lightning.profilers import PyTorchProfiler

profiler = PyTorchProfiler(emit_nvtx=True)
trainer = Trainer(profiler=profiler)
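
For completeness: the profilers shown above can also be selected by name on the Trainer, an API that predates this PR and is unchanged by it (a sketch, assuming the long-standing string shortcuts):

    from pytorch_lightning import Trainer

    # equivalent to passing a default-configured PyTorchProfiler instance
    trainer = Trainer(profiler="pytorch")
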
2 changes: 1 addition & 1 deletion examples/pl_basics/profiler_example.py
@@ -31,7 +31,7 @@
import torchvision.transforms as T

from pytorch_lightning import cli_lightning_logo, LightningDataModule, LightningModule
-from pytorch_lightning.profiler.pytorch import PyTorchProfiler
+from pytorch_lightning.profilers.pytorch import PyTorchProfiler
from pytorch_lightning.utilities.cli import LightningCLI

DEFAULT_CMD_LINE = (
8 changes: 4 additions & 4 deletions pyproject.toml
@@ -78,10 +78,10 @@ module = [
"pytorch_lightning.strategies.single_tpu",
"pytorch_lightning.strategies.tpu_spawn",
"pytorch_lightning.strategies.strategy",
"pytorch_lightning.profiler.advanced",
"pytorch_lightning.profiler.base",
"pytorch_lightning.profiler.pytorch",
"pytorch_lightning.profiler.simple",
"pytorch_lightning.profilers.advanced",
"pytorch_lightning.profilers.base",
"pytorch_lightning.profilers.pytorch",
"pytorch_lightning.profilers.simple",
"pytorch_lightning.trainer.callback_hook",
"pytorch_lightning.trainer.connectors.callback_connector",
"pytorch_lightning.trainer.connectors.data_connector",
13 changes: 7 additions & 6 deletions src/pytorch_lightning/profiler/__init__.py
@@ -11,12 +11,13 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-from pytorch_lightning.profiler.advanced import AdvancedProfiler
-from pytorch_lightning.profiler.base import AbstractProfiler, BaseProfiler, PassThroughProfiler
-from pytorch_lightning.profiler.profiler import Profiler
-from pytorch_lightning.profiler.pytorch import PyTorchProfiler
-from pytorch_lightning.profiler.simple import SimpleProfiler
-from pytorch_lightning.profiler.xla import XLAProfiler
+from pytorch_lightning.profiler.base import AbstractProfiler, BaseProfiler
+from pytorch_lightning.profilers.advanced import AdvancedProfiler
+from pytorch_lightning.profilers.base import PassThroughProfiler
+from pytorch_lightning.profilers.profiler import Profiler
+from pytorch_lightning.profilers.pytorch import PyTorchProfiler
+from pytorch_lightning.profilers.simple import SimpleProfiler
+from pytorch_lightning.profilers.xla import XLAProfiler

__all__ = [
"AbstractProfiler",
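
To summarize the compatibility story encoded by this `__init__.py` (a sketch based only on the imports visible in the hunk above):

    # Both of these now resolve to the same, new class:
    from pytorch_lightning.profiler import SimpleProfiler as OldLocation   # old namespace, kept working
    from pytorch_lightning.profilers import SimpleProfiler as NewLocation  # new canonical location

    assert OldLocation is NewLocation

    # Importing from the old submodules instead yields thin deprecation wrappers
    # that warn on construction (see profiler/advanced.py and profiler/base.py below):
    from pytorch_lightning.profiler.advanced import AdvancedProfiler
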
83 changes: 8 additions & 75 deletions src/pytorch_lightning/profiler/advanced.py
@@ -11,81 +11,14 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Profiler to check if there are any bottlenecks in your code."""
import cProfile
import io
import logging
import pstats
from pathlib import Path
from typing import Dict, Optional, Union
from pytorch_lightning.profilers.advanced import AdvancedProfiler as NewAdvancedProfiler
from pytorch_lightning.utilities import rank_zero_deprecation

-from pytorch_lightning.profiler.profiler import Profiler

-log = logging.getLogger(__name__)


-class AdvancedProfiler(Profiler):
-    """This profiler uses Python's cProfiler to record more detailed information about time spent in each function
-    call recorded during a given action.
-
-    The output is quite verbose and you should only use this if you want very detailed reports.
-    """
-
-    def __init__(
-        self,
-        dirpath: Optional[Union[str, Path]] = None,
-        filename: Optional[str] = None,
-        line_count_restriction: float = 1.0,
-    ) -> None:
-        """
-        Args:
-            dirpath: Directory path for the ``filename``. If ``dirpath`` is ``None`` but ``filename`` is present, the
-                ``trainer.log_dir`` (from :class:`~pytorch_lightning.loggers.tensorboard.TensorBoardLogger`)
-                will be used.
-
-            filename: If present, filename where the profiler results will be saved instead of printing to stdout.
-                The ``.txt`` extension will be used automatically.
-
-            line_count_restriction: this can be used to limit the number of functions
-                reported for each action. either an integer (to select a count of lines),
-                or a decimal fraction between 0.0 and 1.0 inclusive (to select a percentage of lines)
-
-        Raises:
-            ValueError:
-                If you attempt to stop recording an action which was never started.
-        """
-        super().__init__(dirpath=dirpath, filename=filename)
-        self.profiled_actions: Dict[str, cProfile.Profile] = {}
-        self.line_count_restriction = line_count_restriction
-
-    def start(self, action_name: str) -> None:
-        if action_name not in self.profiled_actions:
-            self.profiled_actions[action_name] = cProfile.Profile()
-        self.profiled_actions[action_name].enable()
-
-    def stop(self, action_name: str) -> None:
-        pr = self.profiled_actions.get(action_name)
-        if pr is None:
-            raise ValueError(f"Attempting to stop recording an action ({action_name}) which was never started.")
-        pr.disable()
-
-    def summary(self) -> str:
-        recorded_stats = {}
-        for action_name, pr in self.profiled_actions.items():
-            s = io.StringIO()
-            ps = pstats.Stats(pr, stream=s).strip_dirs().sort_stats("cumulative")
-            ps.print_stats(self.line_count_restriction)
-            recorded_stats[action_name] = s.getvalue()
-        return self._stats_to_str(recorded_stats)
-
-    def teardown(self, stage: Optional[str] = None) -> None:
-        super().teardown(stage=stage)
-        self.profiled_actions = {}
-
-    def __reduce__(self):
-        # avoids `TypeError: cannot pickle 'cProfile.Profile' object`
-        return (
-            self.__class__,
-            (),
-            dict(dirpath=self.dirpath, filename=self.filename, line_count_restriction=self.line_count_restriction),
-        )
+class AdvancedProfiler(NewAdvancedProfiler):
+    def __init__(self, *args, **kwargs) -> None:  # type: ignore[no-untyped-def]
+        rank_zero_deprecation(
+            "`pytorch_lightning.profiler.AdvancedProfiler` is deprecated in v1.7 and will be removed in v1.9."
+            " Use the equivalent `pytorch_lightning.profilers.AdvancedProfiler` class instead."
+        )
+        super().__init__(*args, **kwargs)
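
Because the old class is now a trivial subclass of the new implementation, existing user code keeps its behavior and only gains the warning. A quick illustration of what this shim implies (assuming a pytorch_lightning 1.7 environment):

    from pytorch_lightning.profiler.advanced import AdvancedProfiler as OldAdvancedProfiler
    from pytorch_lightning.profilers import AdvancedProfiler

    # the deprecated class delegates everything to the new implementation
    assert issubclass(OldAdvancedProfiler, AdvancedProfiler)

    profiler = OldAdvancedProfiler()  # emits the deprecation message above, then works as before
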
23 changes: 10 additions & 13 deletions src/pytorch_lightning/profiler/base.py
@@ -15,7 +15,8 @@
from abc import ABC, abstractmethod
from typing import Any

-from pytorch_lightning.profiler.profiler import Profiler
+from pytorch_lightning.profilers.base import PassThroughProfiler as NewPassThroughProfiler
+from pytorch_lightning.profilers.profiler import Profiler
from pytorch_lightning.utilities.rank_zero import rank_zero_deprecation


@@ -57,21 +58,17 @@ class BaseProfiler(Profiler):
    Please use `Profiler` instead.
    """

-    def __init__(self, *args, **kwargs):
+    def __init__(self, *args, **kwargs):  # type: ignore[no-untyped-def]
        rank_zero_deprecation(
            "`BaseProfiler` was deprecated in v1.6 and will be removed in v1.8. Please use `Profiler` instead."
        )
        super().__init__(*args, **kwargs)


-class PassThroughProfiler(Profiler):
-    """This class should be used when you don't want the (small) overhead of profiling.
-
-    The Trainer uses this class by default.
-    """
-
-    def start(self, action_name: str) -> None:
-        pass
-
-    def stop(self, action_name: str) -> None:
-        pass
+class PassThroughProfiler(NewPassThroughProfiler):
+    def __init__(self, *args, **kwargs) -> None:  # type: ignore[no-untyped-def]
+        rank_zero_deprecation(
+            "`pytorch_lightning.profiler.PassThroughProfiler` is deprecated in v1.7 and will be removed in v1.9."
+            " Use the equivalent `pytorch_lightning.profilers.PassThroughProfiler` class instead."
+        )
+        super().__init__(*args, **kwargs)