Remove the HivemindStrategy (#16407)
Remove the collaborative strategy
carmocca committed Jan 19, 2023
1 parent b60c75e commit 3b7b82e
Showing 15 changed files with 0 additions and 942 deletions.
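
For context, HivemindStrategy wrapped the hivemind library to train collaboratively on local machines or unreliable GPUs across the internet. A minimal sketch of how it was enabled before this commit, adapted from the now-deleted strategies/hivemind docs (the target_batch_size value is illustrative):

import pytorch_lightning as pl
from pytorch_lightning.strategies import HivemindStrategy  # removed by this commit

# Peers accumulate gradients until the shared target batch size is reached,
# then all collaborators run a synchronized optimizer step.
trainer = pl.Trainer(
    strategy=HivemindStrategy(target_batch_size=8192),  # illustrative value
    accelerator="gpu",
    devices=1,
)

Per those docs, additional machines joined an existing run by supplying the first peer's printed initial-peers address.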
1 change: 0 additions & 1 deletion docs/source-pytorch/api_references.rst
@@ -283,7 +283,6 @@ strategies
     DDPStrategy
     DataParallelStrategy
     DeepSpeedStrategy
-    HivemindStrategy
     HPUParallelStrategy
     IPUStrategy
     ParallelStrategy
7 changes: 0 additions & 7 deletions docs/source-pytorch/common_usecases.rst
@@ -123,13 +123,6 @@ Customize and extend Lightning for things like custom hardware or distributed st
    :button_link: clouds/cloud_training.html
    :height: 100

-.. displayitem::
-   :header: Train on multiple machines over the internet
-   :description: Train on local machines or unreliable GPUs across the internet.
-   :col_css: col-md-12
-   :button_link: strategies/hivemind
-   :height: 100
-
 .. displayitem::
    :header: Train on single or multiple GPUs
    :description: Train models faster with GPUs.
3 changes: 0 additions & 3 deletions docs/source-pytorch/extensions/strategy.rst
@@ -72,9 +72,6 @@ The below table lists all relevant strategies available in Lightning with their
    * - bagua
      - :class:`~pytorch_lightning.strategies.BaguaStrategy`
      - Strategy for training using the Bagua library, with advanced distributed training algorithms and system optimizations. :ref:`Learn more. <accelerators/gpu_intermediate:Bagua>`
-   * - collaborative
-     - :class:`~pytorch_lightning.strategies.HivemindStrategy`
-     - Strategy for training collaboratively on local machines or unreliable GPUs across the internet. :ref:`Learn more. <strategies/hivemind:Training on unreliable mixed GPUs across the internet>`
    * - colossalai
      - :class:`~pytorch_lightning.strategies.ColossalAIStrategy`
      - Colossal-AI provides a collection of parallel components for you. It aims to support you to write your distributed deep learning models just like how you write your model on your laptop. `Learn more. <https://www.colossalai.org/>`__
2 changes: 0 additions & 2 deletions docs/source-pytorch/index.rst
@@ -200,7 +200,6 @@ Current Lightning Users
    clouds/cluster
    Save and load model progress <common/checkpointing>
    Save memory with half-precision <common/precision>
-   Training over the internet <strategies/hivemind>
    advanced/model_parallel
    clouds/cloud_training
    Train on single or multiple GPUs <accelerators/gpu>
@@ -246,7 +245,6 @@ Current Lightning Users
    Metrics <https://torchmetrics.readthedocs.io/en/stable/>
    Model <model/build_model.rst>
    Model Parallel <advanced/model_parallel>
-   Collaborative Training <strategies/hivemind>
    Plugins <extensions/plugins>
    Progress bar <common/progress_bar>
    Production <deploy/production_advanced>
44 changes: 0 additions & 44 deletions docs/source-pytorch/strategies/hivemind.rst

This file was deleted.

43 changes: 0 additions & 43 deletions docs/source-pytorch/strategies/hivemind_basic.rst

This file was deleted.

87 changes: 0 additions & 87 deletions docs/source-pytorch/strategies/hivemind_expert.rst

This file was deleted.

99 changes: 0 additions & 99 deletions docs/source-pytorch/strategies/hivemind_intermediate.rst

This file was deleted.

1 change: 0 additions & 1 deletion requirements/pytorch/strategies.txt
@@ -4,4 +4,3 @@
 # colossalai>=0.1.10 # TODO: uncomment when there's a stable version released
 fairscale>=0.4.5, <0.4.13
 deepspeed>=0.6.0, <=0.7.0
-hivemind==1.1.5; sys_platform == 'linux'
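
The deleted pin used a PEP 508 environment marker so hivemind was only required on Linux. A rough runtime equivalent of that guard, as a sketch (the flag name is illustrative, not Lightning's internal constant):

import sys

HIVEMIND_AVAILABLE = False  # illustrative name
if sys.platform == "linux":  # mirrors the sys_platform == 'linux' marker
    try:
        import hivemind  # the deleted pin required hivemind==1.1.5
        HIVEMIND_AVAILABLE = True
    except ImportError:
        pass  # hivemind absent: collaborative features stay disabled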
1 change: 0 additions & 1 deletion src/pytorch_lightning/strategies/__init__.py
@@ -20,7 +20,6 @@
 from pytorch_lightning.strategies.dp import DataParallelStrategy  # noqa: F401
 from pytorch_lightning.strategies.fully_sharded import DDPFullyShardedStrategy  # noqa: F401
 from pytorch_lightning.strategies.fully_sharded_native import DDPFullyShardedNativeStrategy  # noqa: F401
-from pytorch_lightning.strategies.hivemind import HivemindStrategy  # noqa: F401
 from pytorch_lightning.strategies.hpu_parallel import HPUParallelStrategy  # noqa: F401
 from pytorch_lightning.strategies.ipu import IPUStrategy  # noqa: F401
 from pytorch_lightning.strategies.parallel import ParallelStrategy  # noqa: F401
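
With the re-export gone, any downstream import of the strategy now raises ImportError. A hedged sketch of a compatibility guard such code might use (the fallback behavior is an assumption, not part of this commit):

try:
    from pytorch_lightning.strategies import HivemindStrategy
except ImportError:  # removed by this commit
    HivemindStrategy = None  # hypothetical fallback for older call sites

if HivemindStrategy is None:
    print("HivemindStrategy was removed upstream; use an earlier pytorch-lightning release if needed.")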