
Commit cf34867
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
pre-commit-ci[bot] authored and lantiga committed Jul 8, 2024
1 parent cc457fe commit cf34867
Showing 5 changed files with 18 additions and 22 deletions.
20 changes: 10 additions & 10 deletions .github/workflows/README.md
@@ -6,16 +6,16 @@ Brief description of all our automation tools used for boosting development perf

## Unit and Integration Testing

-| workflow file                          | action                                                                                      | accelerator |
-| -------------------------------------- | ------------------------------------------------------------------------------------------- | ----------- |
-| .github/workflows/ci-tests-fabric.yml  | Run all tests except for accelerator-specific and standalone.                                | CPU         |
-| .github/workflows/ci-tests-pytorch.yml | Run all tests except for accelerator-specific and standalone.                                | CPU         |
-| .github/workflows/ci-tests-data.yml    | Run unit and integration tests with data pipelining.                                         | CPU         |
-| .azure-pipelines/gpu-tests-fabric.yml  | Run only GPU-specific tests, standalone\*, and examples.                                     | GPU         |
-| .azure-pipelines/gpu-tests-pytorch.yml | Run only GPU-specific tests, standalone\*, and examples.                                     | GPU         |
-| .azure-pipelines/gpu-benchmarks.yml    | Run speed/memory benchmarks for parity with vanila PyTorch.                                  | GPU         |
-| .github/workflows/ci-tests-pytorch.yml | Run all tests except for accelerator-specific, standalone and slow tests.                    | CPU         |
-| .github/workflows/tpu-tests.yml        | Run only TPU-specific tests. Requires that the PR title contains '\[TPU\]'                   | TPU         |
+| workflow file                          | action                                                                     | accelerator |
+| -------------------------------------- | -------------------------------------------------------------------------- | ----------- |
+| .github/workflows/ci-tests-fabric.yml  | Run all tests except for accelerator-specific and standalone.              | CPU         |
+| .github/workflows/ci-tests-pytorch.yml | Run all tests except for accelerator-specific and standalone.              | CPU         |
+| .github/workflows/ci-tests-data.yml    | Run unit and integration tests with data pipelining.                       | CPU         |
+| .azure-pipelines/gpu-tests-fabric.yml  | Run only GPU-specific tests, standalone\*, and examples.                   | GPU         |
+| .azure-pipelines/gpu-tests-pytorch.yml | Run only GPU-specific tests, standalone\*, and examples.                   | GPU         |
+| .azure-pipelines/gpu-benchmarks.yml    | Run speed/memory benchmarks for parity with vanila PyTorch.                | GPU         |
+| .github/workflows/ci-tests-pytorch.yml | Run all tests except for accelerator-specific, standalone and slow tests.  | CPU         |
+| .github/workflows/tpu-tests.yml        | Run only TPU-specific tests. Requires that the PR title contains '\[TPU\]' | TPU         |

\* Each standalone test needs to run in a separate process to avoid unwanted interactions between test cases.
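To make that footnote concrete, here is a minimal sketch, assuming a pytest-based suite, of what "one process per standalone test" means. The runner and test ids are hypothetical, not the repository's actual tooling:

```python
# Sketch: give each standalone test a fresh interpreter so global state
# (e.g. an initialized torch.distributed process group, CUDA context, or
# monkeypatched module) cannot leak between test cases.
import subprocess
import sys

STANDALONE_TESTS = [  # hypothetical test ids for illustration
    "tests/test_ddp.py::test_all_reduce",
    "tests/test_ddp.py::test_checkpoint_sync",
]

for test_id in STANDALONE_TESTS:
    # One pytest invocation per test id: a brand-new process every time.
    result = subprocess.run([sys.executable, "-m", "pytest", "-x", test_id])
    if result.returncode != 0:
        sys.exit(result.returncode)
```

A fresh interpreter per test is slower than one batched pytest run, but it is the only way to guarantee full isolation, which is why these tests are split out from the normal CI jobs.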

2 changes: 0 additions & 2 deletions .github/workflows/ci-pkg-extend.yml
@@ -26,7 +26,6 @@ defaults:
    shell: bash

jobs:
-
  import-pkg:
    runs-on: ${{ matrix.os }}
    strategy:
@@ -50,4 +49,3 @@ jobs:
      - name: Try importing
        run: from lightning.${{ matrix.pkg-name }} import *
        shell: python
-
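The final step above star-imports the package as a smoke test. A rough local equivalent, assuming `lightning` is installed in the current environment (the subpackage name below is just one example of what the matrix substitutes):

```python
# Approximate the CI step `from lightning.<pkg-name> import *` outside the
# workflow: resolving the package's public surface fails fast if the
# subpackage or anything it re-exports is broken.
import importlib

pkg = importlib.import_module("lightning.fabric")
public = getattr(pkg, "__all__", [n for n in dir(pkg) if not n.startswith("_")])
globals().update({name: getattr(pkg, name) for name in public})
print(f"OK: resolved {len(public)} public names from {pkg.__name__}")
```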
3 changes: 1 addition & 2 deletions examples/fabric/tensor_parallel/train.py
@@ -1,14 +1,13 @@
import lightning as L
import torch
import torch.nn.functional as F
+from data import RandomTokenDataset
from lightning.fabric.strategies import ModelParallelStrategy
from model import ModelArgs, Transformer
from parallelism import parallelize
from torch.distributed.tensor.parallel import loss_parallel
from torch.utils.data import DataLoader

-from data import RandomTokenDataset
-

def train():
    strategy = ModelParallelStrategy(
3 changes: 1 addition & 2 deletions examples/pytorch/tensor_parallel/train.py
@@ -1,14 +1,13 @@
import lightning as L
import torch
import torch.nn.functional as F
+from data import RandomTokenDataset
from lightning.pytorch.strategies import ModelParallelStrategy
from model import ModelArgs, Transformer
from parallelism import parallelize
from torch.distributed.tensor.parallel import loss_parallel
from torch.utils.data import DataLoader

-from data import RandomTokenDataset
-

class Llama3(L.LightningModule):
    def __init__(self):
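Both `train.py` hunks are the same auto-fix: the separately grouped `from data import ...` line is merged into the main import block in its sorted position. A small sketch of reproducing that kind of fix with isort's Python API, assuming `isort` is installed:

```python
# Pass an unsorted snippet through isort and print the fixed version;
# the module names mirror the example files above.
import isort

messy = (
    "import torch\n"
    "from torch.utils.data import DataLoader\n"
    "\n"
    "from data import RandomTokenDataset\n"
)
print(isort.code(messy))
```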
12 changes: 6 additions & 6 deletions src/lightning/app/__init__.py
@@ -13,12 +13,12 @@
# Enable resolution at least for lower data namespace
sys.modules["lightning.app"] = lightning_app

-from lightning_app.core.app import LightningApp  # noqa: E402
-from lightning_app.core.flow import LightningFlow  # noqa: E402
-from lightning_app.core.work import LightningWork  # noqa: E402
-from lightning_app.plugin.plugin import LightningPlugin  # noqa: E402
-from lightning_app.utilities.packaging.build_config import BuildConfig  # noqa: E402
-from lightning_app.utilities.packaging.cloud_compute import CloudCompute  # noqa: E402
+from lightning_app.core.app import LightningApp
+from lightning_app.core.flow import LightningFlow
+from lightning_app.core.work import LightningWork
+from lightning_app.plugin.plugin import LightningPlugin
+from lightning_app.utilities.packaging.build_config import BuildConfig
+from lightning_app.utilities.packaging.cloud_compute import CloudCompute

if module_available("lightning_app.components.demo"):
    from lightning.app.components import demo  # noqa: F401
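The `sys.modules["lightning.app"] = lightning_app` line above is a module-aliasing trick: both dotted paths end up resolving to the same module object, which is also why the imports that follow sit below executable code (the `# noqa: E402` suppressions this commit removes). A self-contained toy sketch of the pattern; all names here are invented:

```python
# Toy reproduction of the aliasing pattern; "toy_ns"/"toy_pkg" are invented.
import sys
import types

pkg = types.ModuleType("toy_pkg")  # stands in for a real installed package
pkg.answer = 42
sys.modules["toy_pkg"] = pkg

# Register the same module object under a second, dotted name.
ns = types.ModuleType("toy_ns")
ns.toy_pkg = pkg
sys.modules["toy_ns"] = ns
sys.modules["toy_ns.toy_pkg"] = pkg

# Resolves purely through sys.modules -- no file on disk needed. Because it
# sits below executable code, a strict linter would flag this import E402.
from toy_ns.toy_pkg import answer

assert answer == 42
```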
