Colab notebook error #193

Open
barbara42 opened this issue Mar 12, 2024 · 2 comments

@barbara42

Under "loading a pre-trained model" in the Colab demo notebook, this cell:

# This is an MAE model trained with pixels as targets for visualization (ViT-Large, training mask ratio=0.75)

# download checkpoint if not exist
!wget -nc https://dl.fbaipublicfiles.com/mae/visualize/mae_visualize_vit_large.pth

chkpt_dir = 'mae_visualize_vit_large.pth'
model_mae = prepare_model(chkpt_dir, 'mae_vit_large_patch16')
print('Model loaded.')

results in the following error:

--2024-03-12 18:31:48--  https://dl.fbaipublicfiles.com/mae/visualize/mae_visualize_vit_large.pth
Resolving dl.fbaipublicfiles.com (dl.fbaipublicfiles.com)... 13.35.7.50, 13.35.7.38, 13.35.7.82, ...
Connecting to dl.fbaipublicfiles.com (dl.fbaipublicfiles.com)|13.35.7.50|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1318315181 (1.2G) [binary/octet-stream]
Saving to: ‘mae_visualize_vit_large.pth’

mae_visualize_vit_l 100%[===================>]   1.23G   138MB/s    in 11s     

2024-03-12 18:31:59 (115 MB/s) - ‘mae_visualize_vit_large.pth’ saved [1318315181/1318315181]

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
[<ipython-input-4-062e15d3f32e>](https://localhost:8080/#) in <cell line: 7>()
      5 
      6 chkpt_dir = 'mae_visualize_vit_large.pth'
----> 7 model_mae = prepare_model(chkpt_dir, 'mae_vit_large_patch16')
      8 print('Model loaded.')

7 frames
[<ipython-input-2-4a1bff3e6bef>](https://localhost:8080/#) in prepare_model(chkpt_dir, arch)
     14 def prepare_model(chkpt_dir, arch='mae_vit_large_patch16'):
     15     # build model
---> 16     model = getattr(models_mae, arch)()
     17     # load model
     18     checkpoint = torch.load(chkpt_dir, map_location='cpu')

[/content/./mae/models_mae.py](https://localhost:8080/#) in mae_vit_large_patch16_dec512d8b(**kwargs)
    230 
    231 def mae_vit_large_patch16_dec512d8b(**kwargs):
--> 232     model = MaskedAutoencoderViT(
    233         patch_size=16, embed_dim=1024, depth=24, num_heads=16,
    234         decoder_embed_dim=512, decoder_depth=8, decoder_num_heads=16,

[/content/./mae/models_mae.py](https://localhost:8080/#) in __init__(self, img_size, patch_size, in_chans, embed_dim, depth, num_heads, decoder_embed_dim, decoder_depth, decoder_num_heads, mlp_ratio, norm_layer, norm_pix_loss)
     61         self.norm_pix_loss = norm_pix_loss
     62 
---> 63         self.initialize_weights()
     64 
     65     def initialize_weights(self):

[/content/./mae/models_mae.py](https://localhost:8080/#) in initialize_weights(self)
     66         # initialization
     67         # initialize (and freeze) pos_embed by sin-cos embedding
---> 68         pos_embed = get_2d_sincos_pos_embed(self.pos_embed.shape[-1], int(self.patch_embed.num_patches**.5), cls_token=True)
     69         self.pos_embed.data.copy_(torch.from_numpy(pos_embed).float().unsqueeze(0))
     70 

[/content/./mae/util/pos_embed.py](https://localhost:8080/#) in get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token)
     30 
     31     grid = grid.reshape([2, 1, grid_size, grid_size])
---> 32     pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
     33     if cls_token:
     34         pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0)

[/content/./mae/util/pos_embed.py](https://localhost:8080/#) in get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
     40 
     41     # use half of dimensions to encode grid_h
---> 42     emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0])  # (H*W, D/2)
     43     emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1])  # (H*W, D/2)
     44 

[/content/./mae/util/pos_embed.py](https://localhost:8080/#) in get_1d_sincos_pos_embed_from_grid(embed_dim, pos)
     54     """
     55     assert embed_dim % 2 == 0
---> 56     omega = np.arange(embed_dim // 2, dtype=np.float)
     57     omega /= embed_dim / 2.
     58     omega = 1. / 10000**omega  # (D/2,)

[/usr/local/lib/python3.10/dist-packages/numpy/__init__.py](https://localhost:8080/#) in __getattr__(attr)
    317 
    318         if attr in __former_attrs__:
--> 319             raise AttributeError(__former_attrs__[attr])
    320 
    321         if attr == 'testing':

AttributeError: module 'numpy' has no attribute 'float'.
`np.float` was a deprecated alias for the builtin `float`. To avoid this error in existing code, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
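
For reference, the failing line is util/pos_embed.py line 56 in the traceback above. A minimal workaround (a sketch, assuming the repo was cloned to /content/mae as in the demo notebook's setup cell) is to patch the deprecated alias to the builtin float, as the NumPy error message suggests:

# Hypothetical Colab cell: replace the removed np.float alias with the
# builtin float in the cloned repo before re-importing the model code
!sed -i 's/dtype=np\.float)/dtype=float)/g' /content/mae/util/pos_embed.py

After patching, the already-imported modules need to be reloaded (or the runtime restarted) for the change to take effect, which is where the autoreload suggestion below comes in.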
@DannaShavit

I had the same issue, and this solved it for me: add these lines before your imports:
%load_ext autoreload
%autoreload 2

Taken from here:
https://stackoverflow.com/questions/75397364/google-colab-not-detecting-changes-in-py-files
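
In the notebook that would look roughly like the following (a sketch; the sys.path line assumes the repo was cloned to ./mae, matching the /content/./mae/ paths in the traceback):

%load_ext autoreload
%autoreload 2

# imports go after enabling autoreload so that edits to the cloned .py files
# (for example a patched util/pos_embed.py) are picked up without a restart
import sys
sys.path.append('./mae')
import models_mae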

@hzy-del

hzy-del commented Mar 20, 2024

!pip install numpy==1.23.3
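
This pins NumPy to a release that predates the removal of the np.float alias (deprecated in 1.20, removed in 1.24). In Colab you generally need to restart the runtime after the downgrade; you can confirm the active version afterwards with:

import numpy as np
print(np.__version__)   # expected to print 1.23.3 after the restart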
