This repository has been archived by the owner on Oct 9, 2023. It is now read-only.

Commit

Merge branch 'master' into feature/add-deepspeed-finetuning-strategies
krshrimali authored Aug 31, 2022
2 parents dfe2b39 + 0253d71 commit acf3ae3
Showing 21 changed files with 52 additions and 1,143 deletions.
16 changes: 7 additions & 9 deletions .github/CODEOWNERS
@@ -10,20 +10,18 @@
# owners
/.github/CODEOWNERS @williamfalcon
# main
/README.md @edenlightning @ethanwharris
/README.md @ethanwharris @krshrimali
# installation
/setup.py @borda @ethanwharris
/__about__.py @borda @ethanwharris
/__init__.py @borda @ethanwharris
/setup.py @borda @ethanwharris @krshrimali
/__about__.py @borda @ethanwharris @krshrimali
/__init__.py @borda @ethanwharris @krshrimali

# CI/CD
/.github/workflows/ @borda @ethanwharris
/.github/workflows/ @borda @ethanwharris @krshrimali
# configs in root
/*.yml @borda @ethanwharris
/*.yml @borda @ethanwharris @krshrimali

# Docs
/docs/ @edenlightning @ethanwharris
/.github/*.md @edenlightning @ethanwharris
/.github/ISSUE_TEMPLATE/*.md @edenlightning @ethanwharris
/.github/ISSUE_TEMPLATE/*.md @borda @ethanwharris @krshrimali
/docs/source/conf.py @borda @ethanwharris
/flash/core/integrations/labelstudio @KonstantinKorotaev @niklub
1 change: 0 additions & 1 deletion .github/labeler.yml
@@ -4,7 +4,6 @@ documentation:

examples:
- flash_examples/**/*
- flash_notebooks/**/*

data:
- flash/core/data/**/*
69 changes: 0 additions & 69 deletions .github/workflows/ci-notebook.yml

This file was deleted.

2 changes: 0 additions & 2 deletions .gitignore
@@ -144,8 +144,6 @@ titanic.csv
data_folder
*.pt
*.zip
flash_notebooks/*.py
flash_notebooks/data
/data
MNIST*
titanic
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -10,6 +10,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Added fine tuning strategies for DeepSpeed (with parameter loading and storing omitted) ([#1377](https://github.com/Lightning-AI/lightning-flash/pull/1377))

- Added `torchvision` as a requirement to `datatype_audio.txt` as it's used for Audio Classification ([#1425](https://github.com/Lightning-AI/lightning-flash/pull/1425))

- Added `figsize` and `limit_nb_samples` for showing batch images ([#1381](https://github.com/Lightning-AI/lightning-flash/pull/1381))

- Added support for `from_lists` for Tabular Classification and Regression ([#1337](https://github.com/PyTorchLightning/lightning-flash/pull/1337))
@@ -48,6 +50,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Fixed

- Fixed a case where a suitable error was not being raised for image segmentation (kornia) ([#1425](https://github.com/Lightning-AI/lightning-flash/pull/1425)).

- Fixed the script of integrating `lightning-flash` with `learn2learn` ([#1376](https://github.com/Lightning-AI/lightning-flash/pull/1383))

- Fixed JIT tracing tests where the model class was not attached to the `Trainer` class ([#1410](https://github.com/Lightning-AI/lightning-flash/pull/1410))
2 changes: 1 addition & 1 deletion README.md
@@ -129,7 +129,7 @@ model.serve()
or make predictions from raw data directly.

```py
trainer = Trainer(accelerator='ddp', gpus=2)
trainer = Trainer(strategy='ddp', accelerator="gpu", gpus=2)
dm = SemanticSegmentationData.from_folders(predict_folder="data/CameraRGB")
predictions = trainer.predict(model, dm)
```
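For context, the updated snippet fits into a full prediction script roughly as follows. This is a sketch, not part of the commit: the checkpoint path and data folder are hypothetical placeholders, and the exact `Trainer` flags depend on the installed PyTorch Lightning version.

```py
import flash
from flash.image import SemanticSegmentation, SemanticSegmentationData

# Hypothetical checkpoint path -- substitute your own trained weights.
model = SemanticSegmentation.load_from_checkpoint("semantic_segmentation_model.pt")

# New-style arguments: `strategy` selects DDP, `accelerator`/`gpus` pick the devices.
trainer = flash.Trainer(strategy="ddp", accelerator="gpu", gpus=2)

datamodule = SemanticSegmentationData.from_folders(
    predict_folder="data/CameraRGB",
    batch_size=4,
)
predictions = trainer.predict(model, datamodule=datamodule)
```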
2 changes: 1 addition & 1 deletion docs/source/_templates/theme_variables.jinja
@@ -6,7 +6,7 @@
'docs': 'https://lightning-flash.readthedocs.io',
'twitter': 'https://twitter.com/PyTorchLightnin',
'discuss': 'https://pytorch-lightning.slack.com',
'tutorials': 'https://github.com/PyTorchLightning/lightning-flash/tree/master/flash_notebooks',
'tutorials': 'https://github.com/Lightning-AI/tutorials',
'previous_pytorch_versions': 'https://lightning-flash.readthedocs.io/en/stable',
'home': 'https://lightning-flash.readthedocs.io',
'get_started': 'https://lightning-flash.readthedocs.io/en/latest/quickstart.html',
4 changes: 2 additions & 2 deletions docs/source/governance.rst
@@ -6,13 +6,13 @@ Flash Governance | Persons of interest
Leads
-----
- Ethan Harris (`ethanwharris <https://github.com/ethanwharris>`_)
- Kushashwa Ravi Shrimali (`krshrimali <https://github.com/krshrimali>`_)
- Thomas Chaton (`tchaton <https://github.com/tchaton>`_)
- William Falcon (`williamFalcon <https://github.com/williamFalcon>`_)

Core Maintainers
----------------
- William Falcon (`williamFalcon <https://github.com/williamFalcon>`_)
- Jirka Borovec (`Borda <https://github.com/Borda>`_)
- Kushashwa Ravi Shrimali (`krshrimali <https://github.com/krshrimali>`_)
- Kaushik Bokka (`kaushikb11 <https://github.com/kaushikb11>`_)
- Justus Schock (`justusschock <https://github.com/justusschock>`_)
- Akihiro Nitta (`akihironitta <https://github.com/akihironitta>`_)
3 changes: 2 additions & 1 deletion flash/audio/classification/input_transform.py
@@ -19,7 +19,7 @@
from flash.core.data.io.input import DataKeys
from flash.core.data.io.input_transform import InputTransform
from flash.core.data.transforms import ApplyToKeys
from flash.core.utilities.imports import _TORCHAUDIO_AVAILABLE, _TORCHVISION_AVAILABLE
from flash.core.utilities.imports import _TORCHAUDIO_AVAILABLE, _TORCHVISION_AVAILABLE, requires

if _TORCHVISION_AVAILABLE:
from torchvision import transforms as T
@@ -51,6 +51,7 @@ def train_per_sample_transform(self) -> Callable:
]
)

@requires("audio")
def per_sample_transform(self) -> Callable:
return T.Compose(
[
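The `@requires("audio")` decorator added here defers the hard dependency check to call time. As a rough sketch of the pattern (not Flash's actual implementation, which resolves an extra such as `"audio"` to its full set of packages and suggests the matching `lightning-flash[...]` install), an availability guard can be written like this:

```py
import functools
import importlib.util


def requires_sketch(module_name: str):
    """Illustrative guard: fail with a helpful error only when the wrapped function is called."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Check whether the optional dependency is importable before running the body.
            if importlib.util.find_spec(module_name) is None:
                raise ModuleNotFoundError(
                    f"`{func.__name__}` requires `{module_name}`, which is not installed. "
                    f"Install it with: pip install {module_name}"
                )
            return func(*args, **kwargs)

        return wrapper

    return decorator
```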
14 changes: 13 additions & 1 deletion flash/core/data/io/input.py
@@ -15,6 +15,7 @@
import os
import sys
from copy import deepcopy
from enum import Enum
from typing import Any, cast, Dict, Iterable, List, Sequence, Tuple, Union

from pytorch_lightning.utilities.enums import LightningEnum
@@ -171,7 +172,18 @@ def __init__(self, running_stage: RunningStage, *args: Any, **kwargs: Any) -> None:

def _call_load_sample(self, sample: Any) -> Any:
# Deepcopy the sample to avoid leaks with complex data structures
return getattr(self, f"{_STAGES_PREFIX[self.running_stage]}_load_sample")(deepcopy(sample))
sample_output = getattr(self, f"{_STAGES_PREFIX[self.running_stage]}_load_sample")(deepcopy(sample))

# Change DataKeys Enum to strings
if isinstance(sample_output, dict):
output_dict = {}
for key, val in sample_output.items():
if isinstance(key, Enum) and hasattr(key, "value"):
output_dict[key.value] = val
else:
output_dict[key] = val
return output_dict
return sample_output

@staticmethod
def load_data(*args: Any, **kwargs: Any) -> Union[Sequence, Iterable]:
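The new `_call_load_sample` logic above normalizes enum keys (such as `DataKeys`) to their plain string values before the sample is handed downstream. A self-contained sketch of the same conversion, using a stand-in enum rather than Flash's `DataKeys`:

```py
from enum import Enum


class SampleKeys(Enum):  # stand-in for flash.core.data.io.input.DataKeys
    INPUT = "input"
    TARGET = "target"


sample = {SampleKeys.INPUT: [0.1, 0.2], SampleKeys.TARGET: 1, "metadata": {"id": 7}}

# Mirror of the added loop: enum keys become their `.value`, other keys pass through unchanged.
output = {}
for key, val in sample.items():
    if isinstance(key, Enum) and hasattr(key, "value"):
        output[key.value] = val
    else:
        output[key] = val

assert output == {"input": [0.1, 0.2], "target": 1, "metadata": {"id": 7}}
```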
2 changes: 1 addition & 1 deletion flash/core/utilities/imports.py
@@ -163,7 +163,7 @@ class Image:
)
_SERVE_AVAILABLE = _FASTAPI_AVAILABLE and _PYDANTIC_AVAILABLE and _CYTOOLZ_AVAILABLE and _UVICORN_AVAILABLE
_POINTCLOUD_AVAILABLE = _OPEN3D_AVAILABLE and _TORCHVISION_AVAILABLE
_AUDIO_AVAILABLE = all([_TORCHAUDIO_AVAILABLE, _LIBROSA_AVAILABLE, _TRANSFORMERS_AVAILABLE])
_AUDIO_AVAILABLE = all([_TORCHAUDIO_AVAILABLE, _TORCHVISION_AVAILABLE, _LIBROSA_AVAILABLE, _TRANSFORMERS_AVAILABLE])
_GRAPH_AVAILABLE = (
_TORCH_SCATTER_AVAILABLE and _TORCH_SPARSE_AVAILABLE and _TORCH_GEOMETRIC_AVAILABLE and _NETWORKX_AVAILABLE
)
5 changes: 4 additions & 1 deletion flash/image/segmentation/input_transform.py
@@ -17,7 +17,7 @@
from flash.core.data.io.input import DataKeys
from flash.core.data.io.input_transform import InputTransform
from flash.core.data.transforms import ApplyToKeys, kornia_collate, KorniaParallelTransforms
from flash.core.utilities.imports import _KORNIA_AVAILABLE, _TORCHVISION_AVAILABLE
from flash.core.utilities.imports import _KORNIA_AVAILABLE, _TORCHVISION_AVAILABLE, requires

if _KORNIA_AVAILABLE:
import kornia as K
@@ -47,6 +47,7 @@ class SemanticSegmentationInputTransform(InputTransform):
mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)
std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)

@requires("image")
def train_per_sample_transform(self) -> Callable:
return T.Compose(
[
@@ -61,6 +62,7 @@ def train_per_sample_transform(self) -> Callable:
]
)

@requires("image")
def per_sample_transform(self) -> Callable:
return T.Compose(
[
@@ -72,6 +74,7 @@ def per_sample_transform(self) -> Callable:
]
)

@requires("image")
def predict_per_sample_transform(self) -> Callable:
return ApplyToKeys(
DataKeys.INPUT,
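The `mean` and `std` defaults shown in this transform are the standard ImageNet channel statistics; per-sample normalization is simply `(x - mean) / std` applied per channel. A minimal standalone sketch of that step, independent of Kornia or torchvision:

```py
import torch

# ImageNet channel statistics, matching the dataclass defaults above.
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

image = torch.rand(3, 128, 128)      # dummy RGB image scaled to [0, 1]
normalized = (image - mean) / std    # what the composed Normalize transform computes per channel
```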
Expand Up @@ -146,6 +146,7 @@ def collate(self):
trainer = flash.Trainer(
max_epochs=1,
gpus=1,
accelerator="gpu",
precision=16,
)

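The added `accelerator="gpu"` makes the device choice explicit alongside `gpus=1`. On newer PyTorch Lightning releases the deprecated `gpus` argument is typically replaced by `devices`; a hedged equivalent (exact deprecation timing depends on the installed version):

```py
import flash

# Rough modern equivalent of the configuration above.
trainer = flash.Trainer(
    max_epochs=1,
    accelerator="gpu",
    devices=1,
    precision=16,
)
```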