Hi, I tried compiling the UNet (torch.float16) of StableDiffusionXLPipeline on an inf2.8xlarge instance, and it failed.
When the UNet latent size is (64, 64), compilation succeeds.
However, when the latent size is (128, 128), it fails.
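For context, SDXL's VAE downsamples spatial dimensions by a factor of 8 (the default scale factor), so these two latent sizes correspond to 512×512 and 1024×1024 output images. A quick sanity check of that relationship, assuming the default scale factor:

```python
# SDXL's VAE downsamples height/width by a factor of 8 (assumed default).
VAE_SCALE_FACTOR = 8

def latent_size(image_size: int) -> int:
    """Latent spatial dimension for a given output image size."""
    return image_size // VAE_SCALE_FACTOR

print(latent_size(512))   # 64  -> this configuration compiles
print(latent_size(1024))  # 128 -> this configuration hits the TEN404 error
```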
[Error message]
(aws_neuron_venv_pytorch) [ec2-user@ip-172-31-32-56 ~]$ python compile_neuron_sdxl_base_fp16.py
/home/ec2-user/aws_neuron_venv_pytorch/lib64/python3.9/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.33it/s]
/home/ec2-user/compile_neuron_sdxl_base_fp16.py:116: FutureWarning: Accessing config attribute `in_channels` directly via 'UNet2DConditionModel' object attribute is deprecated. Please access 'in_channels' over 'UNet2DConditionModel's config object instead, e.g. 'unet.config.in_channels'.
self.in_channels = unetwrap.unet.in_channels
.....................................
Compiler status PASS
783.619141103
.....................root = neuronxcc/starfish/penguin/targets/codegen/BirCodeGenLoop.py
root = neuronxcc/starfish/penguin/targets/codegen
root = neuronxcc/starfish/penguin/targets
root = neuronxcc/starfish/penguin
root = neuronxcc/starfish
[TEN404] (_add.23504) Internal tensorizer error: BirCodeGenLoop:too many partition dims! {{0,+,6}[4],+,48}[16] - Please open a support ticket at https://github.com/aws-neuron/aws-neuron-sdk/issues/new. You may also be able to obtain more information using the 'XLA_IR_DEBUG' and 'XLA_HLO_DEBUG' environment variables.
Traceback (most recent call last):
File "/home/ec2-user/compile_neuron_sdxl_base_fp16.py", line 183, in <module>
unet_neuron = torch_neuronx.trace(
File "/home/ec2-user/aws_neuron_venv_pytorch/lib64/python3.9/site-packages/torch_neuronx/xla_impl/trace.py", line 574, in trace
neff_filename, metaneff, flattener, packer, weights = _trace(
File "/home/ec2-user/aws_neuron_venv_pytorch/lib64/python3.9/site-packages/torch_neuronx/xla_impl/trace.py", line 639, in _trace
neff_artifacts = generate_neff(
File "/home/ec2-user/aws_neuron_venv_pytorch/lib64/python3.9/site-packages/torch_neuronx/xla_impl/trace.py", line 492, in generate_neff
neff_filename = hlo_compile(
File "/home/ec2-user/aws_neuron_venv_pytorch/lib64/python3.9/site-packages/torch_neuronx/xla_impl/trace.py", line 394, in hlo_compile
raise RuntimeError(f"neuronx-cc failed with {status}")
RuntimeError: neuronx-cc failed with 70
Environment
AMI : Deep Learning AMI Neuron (Amazon Linux 2023)
Python Script
Most of the code is the same as this code.
I was able to compile the UNet with torch.float32, but not with torch.float16 when the latent size is (128, 128).