
WandbLogger throws error with mode="disabled" (pytorch lightning 1.7) #14021

Closed
hogru opened this issue Aug 4, 2022 · 2 comments · Fixed by #14112
Labels
bug (Something isn't working) · logger: wandb (Weights & Biases)
Milestone
pl:1.7.x

Comments


hogru commented Aug 4, 2022

🐛 Bug

After upgrading to pytorch lightning 1.7, instantiating WandbLogger with mode="disabled" throws the following error:

Traceback (most recent call last):
  File "/Users/stephan/Library/Mobile Documents/com~apple~CloudDocs/Ablage/AI Master/Courses/2022S/Master Thesis/molgen/src/playground/bug_report_model.py", line 70, in <module>
    run()
  File "/Users/stephan/Library/Mobile Documents/com~apple~CloudDocs/Ablage/AI Master/Courses/2022S/Master Thesis/molgen/src/playground/bug_report_model.py", line 48, in run
    wandb_logger = WandbLogger(mode="disabled")  # <== Added
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/loggers/wandb.py", line 315, in __init__
    _ = self.experiment
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/loggers/logger.py", line 54, in experiment
    return get_experiment() or DummyExperiment()
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/utilities/rank_zero.py", line 32, in wrapped_fn
    return fn(*args, **kwargs)
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/loggers/logger.py", line 52, in get_experiment
    return fn(self)
  File "/Users/stephan/Library/Caches/pypoetry/virtualenvs/molgen-6oMP0hTK-py3.9/lib/python3.9/site-packages/pytorch_lightning/loggers/wandb.py", line 368, in experiment
    assert isinstance(self._experiment, Run)
AssertionError
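The failing check at the bottom of the traceback can be reproduced in isolation. The stub classes below are illustrative stand-ins only, not the real wandb types (the actual ones are wandb.sdk.wandb_run.Run and the object wandb.init(mode="disabled") returns):

```python
# Minimal sketch of why the assert fails, using stand-in classes.

class Run:
    """Stub standing in for wandb.sdk.wandb_run.Run."""

class RunDisabled:
    """Stub standing in for the run object wandb returns in disabled mode."""

def experiment_property(mode):
    # With mode="disabled", wandb hands back a disabled-run object
    # rather than a regular Run...
    experiment = RunDisabled() if mode == "disabled" else Run()
    # ...so a strict check for Run alone raises AssertionError,
    # mirroring wandb.py line 368 in the traceback above.
    assert isinstance(experiment, Run)
    return experiment

try:
    experiment_property("disabled")
except AssertionError:
    print("AssertionError, as in the traceback above")
```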

To Reproduce

import os

import torch
from torch.utils.data import DataLoader, Dataset

from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.loggers import WandbLogger  # <== Added


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


def run():
    wandb_logger = WandbLogger(mode="disabled")  # <== Added

    train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    val_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    test_data = DataLoader(RandomDataset(32, 64), batch_size=2)


    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        limit_train_batches=1,
        limit_val_batches=1,
        limit_test_batches=1,
        num_sanity_val_steps=0,
        max_epochs=1,
        enable_model_summary=False,
    )
    trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
    trainer.test(model, dataloaders=test_data)


if __name__ == "__main__":
    run()

Expected behavior

No error; the mode parameter should be passed through to wandb, as it was up to 1.6.5.

Environment

* CUDA:
	- GPU:
	- available:         False
	- version:           None
* Packages:
	- lightning:         None
	- lightning_app:     None
	- numpy:             1.23.1
	- pyTorch_debug:     False
	- pyTorch_version:   1.12.0
	- pytorch-lightning: 1.7.0
	- tqdm:              4.64.0
* System:
	- OS:                Darwin
	- architecture:
		- 64bit
		- 
	- processor:         i386
	- python:            3.9.12
	- version:           Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:37 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T6000

Additional context

cc @awaelchli @morganmcg1 @borisdayma @scottire @manangoel99

@hogru hogru added the needs triage Waiting to be triaged by maintainers label Aug 4, 2022
@carmocca carmocca added bug Something isn't working logger: wandb Weights & Biases and removed needs triage Waiting to be triaged by maintainers labels Aug 5, 2022
@carmocca carmocca added this to the pl:1.7.x milestone Aug 5, 2022

carmocca commented Aug 5, 2022

@gautierdag The PR #13483 seems to have caused this. Would you like to take a look?

@gautierdag
Contributor

@carmocca Indeed, the typing PR introduced this bug. I've fixed it in #14112: the _experiment attribute can be a RunDisabled object if mode="disabled" is passed. I can write an additional test for this behavior, but since we mock the wandb library during testing it feels a little redundant.
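The fix described above amounts to widening the type check to tolerate a disabled run. A sketch of that pattern, again with illustrative stubs rather than the real wandb classes (the exact change is in #14112):

```python
# Sketch of the widened check, using stand-in classes.

class Run:
    """Stub standing in for wandb.sdk.wandb_run.Run."""

class RunDisabled:
    """Stub standing in for wandb's disabled-mode run object."""

def experiment_property(mode):
    experiment = RunDisabled() if mode == "disabled" else Run()
    # Accept either a regular run or a disabled one, so
    # mode="disabled" no longer trips the assertion.
    assert isinstance(experiment, (Run, RunDisabled))
    return experiment

experiment_property("disabled")  # no longer raises
```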

Thanks @hogru for the mock example!
