I'm trying to run the dreambooth tutorial, and executing dreambooth.py raises a library-related error, for example:
[NeMo W 2024-10-08 09:12:02 megatron_lm_encoder_decoder_model:78] Megatron num_microbatches_calculator not found, using Apex version.
Traceback (most recent call last):
  File "/opt/NeMo/clip/convert_external_clip_to_nemo.py", line 53, in <module>
    from nemo.collections.multimodal.models.vision_language_foundation.clip.megatron_clip_models import MegatronCLIPModel
  File "/usr/local/lib/python3.10/dist-packages/nemo/collections/multimodal/models/vision_language_foundation/clip/megatron_clip_models.py", line 311, in <module>
    class SiglipMHAPoolingHead(TransformerLayer):
NameError: name 'TransformerLayer' is not defined
I don't think the tutorial specifies which NeMo image is compatible with this example. Two months ago it worked with the image nvcr.io/nvidia/nemo:24.02, but now it gives the error above.
Could you please tell me which image works with this tutorial, or which versions of the libraries are needed?
Thank you so much.
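In case it helps pin down the versions in play, here is a small sketch (not from the tutorial) that reports what is installed inside the container; the distribution names nemo_toolkit and megatron-core are my assumption about how the packages are published:

```python
import importlib.metadata

def check_pkg(name):
    """Return the installed version string, or None if the package is absent."""
    try:
        return importlib.metadata.version(name)
    except importlib.metadata.PackageNotFoundError:
        return None

# Assumed distribution names; adjust if the container ships them differently.
for pkg in ("nemo_toolkit", "megatron-core", "apex"):
    print(pkg, "->", check_pkg(pkg))
```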
Launch a container with the image (with a couple of volumes):

docker run --runtime=nvidia --gpus all -it --rm \
  -v /mnt/shared_demos/dreambooth/nemo:/opt/NeMo \
  -v /mnt/shared_models/huggingface/cache/hub/models--runwayml--stable-diffusion-v1-5/snapshots/1d0c4ebf6ff58a5caecab40fa1406526bca4b5b9:/opt/models \
  --shm-size=8g -p 8888:8888 \
  --ulimit memlock=-1 --ulimit stack=67108864 \
  nvcr.io/nvidia/nemo:24.05
The image above installs the megatron library in /opt/megatron-lm, but at version 0.8; if I go to that path, the contents of /megatron/core are not up to date, for example, there is no folder called extensions.
So it installs something that cannot be used for the dreambooth example.
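A quick way I could check whether the installed megatron-core actually exposes the symbol that megatron_clip_models.py fails on, without triggering the import itself; this is a diagnostic sketch of mine, not something from the tutorial, and the module path reflects where recent Megatron-LM versions define TransformerLayer:

```python
import importlib.util

def has_module(dotted_name):
    """True if the dotted module path resolves to an importable module."""
    try:
        return importlib.util.find_spec(dotted_name) is not None
    except ModuleNotFoundError:
        # A parent package in the dotted path is missing entirely.
        return False

# Assumed location of TransformerLayer in recent Megatron-LM releases.
print("transformer_layer module present:",
      has_module("megatron.core.transformer.transformer_layer"))
```

If this prints False inside the container, the NameError above is expected, since the NeMo code has nothing to inherit from.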
There is also a section (Megatron-LM) that specifies commands to run in order to use it.