
MobileNet V2 Conversion #535

Closed
lloo099 opened this issue May 2, 2022 · 4 comments
@lloo099

lloo099 commented May 2, 2022

Hi, thanks for your great work on hls4ml. Currently, I want to use your library to implement the MobileNetV2 model, but some problems came up. Would you mind helping me with this? Thanks.
Model from Here

My code:
import os
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import load_model
import hls4ml
from qkeras.utils import _add_supported_quantized_objects
from sklearn.metrics import accuracy_score

print("Tensorflow version is ", tf.__version__)
print('Keras version      : ', keras.__version__)

# Register the QKeras custom objects so quantized layers deserialize
co = {}
_add_supported_quantized_objects(co)

model = load_model('mobilenetv2.h5', custom_objects=co)



hls_config = hls4ml.utils.config_from_keras_model(model, granularity='name')

hls_config['Model']['Precision'] = 'ap_fixed<16,6>'
hls_config['Model']['ReuseFactor'] = 1

for layer in hls_config['LayerName'].keys():
    hls_config['LayerName'][layer]['Strategy'] = 'Latency'
    hls_config['LayerName'][layer]['ReuseFactor'] = 1

# Use the 'Stable' softmax strategy for the best numerical accuracy on
# high-accuracy models; the default 'Latency' strategy is faster but
# numerically less stable.
# hls_config['LayerName']['softmax']['Strategy'] = 'Stable'

hls_model = hls4ml.converters.convert_from_keras_model(model,
                                                       hls_config=hls_config,
                                                       io_type='io_stream',
                                                       backend='Vivado',
                                                       output_dir='tmpp/',
                                                       part='xczu7ev-ffvc1156-2-e')


hls_model.compile()

os.environ['PATH'] = '/tools/Xilinx/Vivado/2019.1/bin:' + os.environ['PATH']
hls_model.build(csim=False, synth=False, vsynth=False)

# X_test / Y_test are assumed to be the prepared test inputs and labels
Y_pred = hls_model.predict(np.ascontiguousarray(X_test))
y_pred = np.argmax(Y_pred, axis = 1)
y_actual = np.argmax(Y_test, axis = 1)

accuracy = accuracy_score(y_actual, y_pred)

print("Accuracy: ", accuracy)

Tensorflow version is  2.8.0
Keras version      :  2.8.0
2022-05-02 10:57:51.696013: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-02 10:57:51.699062: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:
2022-05-02 10:57:51.699079: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2022-05-02 10:57:51.699208: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Interpreting Model
Topology:
Layer name: input_1, layer type: Input
Layer name: Conv1_pad, layer type: ZeroPadding2D
Layer name: Conv1, layer type: Conv2D
  -> Activation (linear), layer name: Conv1
Layer name: bn_Conv1, layer type: BatchNormalization
Layer name: Conv1_relu, layer type: ReLU
Layer name: expanded_conv_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: expanded_conv_depthwise
Layer name: expanded_conv_depthwise_BN, layer type: BatchNormalization
Layer name: expanded_conv_depthwise_relu, layer type: ReLU
Layer name: expanded_conv_project, layer type: Conv2D
  -> Activation (linear), layer name: expanded_conv_project
Layer name: expanded_conv_project_BN, layer type: BatchNormalization
Layer name: block_1_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_1_expand
Layer name: block_1_expand_BN, layer type: BatchNormalization
Layer name: block_1_expand_relu, layer type: ReLU
Layer name: block_1_pad, layer type: ZeroPadding2D
Layer name: block_1_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_1_depthwise
Layer name: block_1_depthwise_BN, layer type: BatchNormalization
Layer name: block_1_depthwise_relu, layer type: ReLU
Layer name: block_1_project, layer type: Conv2D
  -> Activation (linear), layer name: block_1_project
Layer name: block_1_project_BN, layer type: BatchNormalization
Layer name: block_2_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_2_expand
Layer name: block_2_expand_BN, layer type: BatchNormalization
Layer name: block_2_expand_relu, layer type: ReLU
Layer name: block_2_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_2_depthwise
Layer name: block_2_depthwise_BN, layer type: BatchNormalization
Layer name: block_2_depthwise_relu, layer type: ReLU
Layer name: block_2_project, layer type: Conv2D
  -> Activation (linear), layer name: block_2_project
Layer name: block_2_project_BN, layer type: BatchNormalization
Layer name: block_2_add, layer type: Add
Layer name: block_3_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_3_expand
Layer name: block_3_expand_BN, layer type: BatchNormalization
Layer name: block_3_expand_relu, layer type: ReLU
Layer name: block_3_pad, layer type: ZeroPadding2D
Layer name: block_3_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_3_depthwise
Layer name: block_3_depthwise_BN, layer type: BatchNormalization
Layer name: block_3_depthwise_relu, layer type: ReLU
Layer name: block_3_project, layer type: Conv2D
  -> Activation (linear), layer name: block_3_project
Layer name: block_3_project_BN, layer type: BatchNormalization
Layer name: block_4_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_4_expand
Layer name: block_4_expand_BN, layer type: BatchNormalization
Layer name: block_4_expand_relu, layer type: ReLU
Layer name: block_4_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_4_depthwise
Layer name: block_4_depthwise_BN, layer type: BatchNormalization
Layer name: block_4_depthwise_relu, layer type: ReLU
Layer name: block_4_project, layer type: Conv2D
  -> Activation (linear), layer name: block_4_project
Layer name: block_4_project_BN, layer type: BatchNormalization
Layer name: block_4_add, layer type: Add
Layer name: block_5_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_5_expand
Layer name: block_5_expand_BN, layer type: BatchNormalization
Layer name: block_5_expand_relu, layer type: ReLU
Layer name: block_5_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_5_depthwise
Layer name: block_5_depthwise_BN, layer type: BatchNormalization
Layer name: block_5_depthwise_relu, layer type: ReLU
Layer name: block_5_project, layer type: Conv2D
  -> Activation (linear), layer name: block_5_project
Layer name: block_5_project_BN, layer type: BatchNormalization
Layer name: block_5_add, layer type: Add
Layer name: block_6_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_6_expand
Layer name: block_6_expand_BN, layer type: BatchNormalization
Layer name: block_6_expand_relu, layer type: ReLU
Layer name: block_6_pad, layer type: ZeroPadding2D
Layer name: block_6_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_6_depthwise
Layer name: block_6_depthwise_BN, layer type: BatchNormalization
Layer name: block_6_depthwise_relu, layer type: ReLU
Layer name: block_6_project, layer type: Conv2D
  -> Activation (linear), layer name: block_6_project
Layer name: block_6_project_BN, layer type: BatchNormalization
Layer name: block_7_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_7_expand
Layer name: block_7_expand_BN, layer type: BatchNormalization
Layer name: block_7_expand_relu, layer type: ReLU
Layer name: block_7_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_7_depthwise
Layer name: block_7_depthwise_BN, layer type: BatchNormalization
Layer name: block_7_depthwise_relu, layer type: ReLU
Layer name: block_7_project, layer type: Conv2D
  -> Activation (linear), layer name: block_7_project
Layer name: block_7_project_BN, layer type: BatchNormalization
Layer name: block_7_add, layer type: Add
Layer name: block_8_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_8_expand
Layer name: block_8_expand_BN, layer type: BatchNormalization
Layer name: block_8_expand_relu, layer type: ReLU
Layer name: block_8_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_8_depthwise
Layer name: block_8_depthwise_BN, layer type: BatchNormalization
Layer name: block_8_depthwise_relu, layer type: ReLU
Layer name: block_8_project, layer type: Conv2D
  -> Activation (linear), layer name: block_8_project
Layer name: block_8_project_BN, layer type: BatchNormalization
Layer name: block_8_add, layer type: Add
Layer name: block_9_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_9_expand
Layer name: block_9_expand_BN, layer type: BatchNormalization
Layer name: block_9_expand_relu, layer type: ReLU
Layer name: block_9_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_9_depthwise
Layer name: block_9_depthwise_BN, layer type: BatchNormalization
Layer name: block_9_depthwise_relu, layer type: ReLU
Layer name: block_9_project, layer type: Conv2D
  -> Activation (linear), layer name: block_9_project
Layer name: block_9_project_BN, layer type: BatchNormalization
Layer name: block_9_add, layer type: Add
Layer name: block_10_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_10_expand
Layer name: block_10_expand_BN, layer type: BatchNormalization
Layer name: block_10_expand_relu, layer type: ReLU
Layer name: block_10_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_10_depthwise
Layer name: block_10_depthwise_BN, layer type: BatchNormalization
Layer name: block_10_depthwise_relu, layer type: ReLU
Layer name: block_10_project, layer type: Conv2D
  -> Activation (linear), layer name: block_10_project
Layer name: block_10_project_BN, layer type: BatchNormalization
Layer name: block_11_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_11_expand
Layer name: block_11_expand_BN, layer type: BatchNormalization
Layer name: block_11_expand_relu, layer type: ReLU
Layer name: block_11_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_11_depthwise
Layer name: block_11_depthwise_BN, layer type: BatchNormalization
Layer name: block_11_depthwise_relu, layer type: ReLU
Layer name: block_11_project, layer type: Conv2D
  -> Activation (linear), layer name: block_11_project
Layer name: block_11_project_BN, layer type: BatchNormalization
Layer name: block_11_add, layer type: Add
Layer name: block_12_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_12_expand
Layer name: block_12_expand_BN, layer type: BatchNormalization
Layer name: block_12_expand_relu, layer type: ReLU
Layer name: block_12_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_12_depthwise
Layer name: block_12_depthwise_BN, layer type: BatchNormalization
Layer name: block_12_depthwise_relu, layer type: ReLU
Layer name: block_12_project, layer type: Conv2D
  -> Activation (linear), layer name: block_12_project
Layer name: block_12_project_BN, layer type: BatchNormalization
Layer name: block_12_add, layer type: Add
Layer name: block_13_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_13_expand
Layer name: block_13_expand_BN, layer type: BatchNormalization
Layer name: block_13_expand_relu, layer type: ReLU
Layer name: block_13_pad, layer type: ZeroPadding2D
Layer name: block_13_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_13_depthwise
Layer name: block_13_depthwise_BN, layer type: BatchNormalization
Layer name: block_13_depthwise_relu, layer type: ReLU
Layer name: block_13_project, layer type: Conv2D
  -> Activation (linear), layer name: block_13_project
Layer name: block_13_project_BN, layer type: BatchNormalization
Layer name: block_14_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_14_expand
Layer name: block_14_expand_BN, layer type: BatchNormalization
Layer name: block_14_expand_relu, layer type: ReLU
Layer name: block_14_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_14_depthwise
Layer name: block_14_depthwise_BN, layer type: BatchNormalization
Layer name: block_14_depthwise_relu, layer type: ReLU
Layer name: block_14_project, layer type: Conv2D
  -> Activation (linear), layer name: block_14_project
Layer name: block_14_project_BN, layer type: BatchNormalization
Layer name: block_14_add, layer type: Add
Layer name: block_15_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_15_expand
Layer name: block_15_expand_BN, layer type: BatchNormalization
Layer name: block_15_expand_relu, layer type: ReLU
Layer name: block_15_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_15_depthwise
Layer name: block_15_depthwise_BN, layer type: BatchNormalization
Layer name: block_15_depthwise_relu, layer type: ReLU
Layer name: block_15_project, layer type: Conv2D
  -> Activation (linear), layer name: block_15_project
Layer name: block_15_project_BN, layer type: BatchNormalization
Layer name: block_15_add, layer type: Add
Layer name: block_16_expand, layer type: Conv2D
  -> Activation (linear), layer name: block_16_expand
Layer name: block_16_expand_BN, layer type: BatchNormalization
Layer name: block_16_expand_relu, layer type: ReLU
Layer name: block_16_depthwise, layer type: DepthwiseConv2D
  -> Activation (linear), layer name: block_16_depthwise
Layer name: block_16_depthwise_BN, layer type: BatchNormalization
Layer name: block_16_depthwise_relu, layer type: ReLU
Layer name: block_16_project, layer type: Conv2D
  -> Activation (linear), layer name: block_16_project
Layer name: block_16_project_BN, layer type: BatchNormalization
Layer name: Conv_1, layer type: Conv2D
  -> Activation (linear), layer name: Conv_1
Layer name: Conv_1_bn, layer type: BatchNormalization
Layer name: out_relu, layer type: ReLU
Layer name: global_average_pooling2d, layer type: GlobalAveragePooling2D
Layer name: dense, layer type: Dense
  -> Activation (linear), layer name: dense
Layer name: batch_normalization, layer type: BatchNormalization
Layer name: activation, layer type: Activation
Layer name: dense_1, layer type: Dense
  -> Activation (softmax), layer name: dense_1
Interpreting Model
Topology:
Layer name: input_1, layer type: InputLayer, input shapes: [[None, None, None, 3]], output shape: [None, None, None, 3]
Traceback (most recent call last):
  File "example.py", line 32, in <module>
    hls_model = hls4ml.converters.convert_from_keras_model(model,
  File "/home/jjcc/Desktop/2022_project/hls4ml/hls4ml/converters/__init__.py", line 217, in convert_from_keras_model
    return keras_to_hls(config)
  File "/home/jjcc/Desktop/2022_project/hls4ml/hls4ml/converters/keras_to_hls.py", line 309, in keras_to_hls
    layer, output_shape = layer_handlers[keras_class](keras_layer, input_names, input_shapes, reader, config)
  File "/home/jjcc/Desktop/2022_project/hls4ml/hls4ml/converters/keras/reshaping.py", line 80, in parse_zeropadding2d_layer
    layer['pad_top'] + input_shapes[0][1] + layer['pad_bottom'], # Height
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
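(Editor's note: the traceback suggests the root cause is the model's fully dynamic input shape, [None, None, None, 3], so the ZeroPadding2D handler ends up adding an integer pad to a None height. A minimal sketch of a workaround, assuming the model was trained on 224x224 inputs and reusing the co custom-objects dict from above, is to rebuild the graph with concrete spatial dimensions before converting:)

from tensorflow.keras.models import Model, load_model

model = load_model('mobilenetv2.h5', custom_objects=co)

# Give the InputLayer a concrete shape in the serialized config, then
# rebuild the graph and copy the weights across. 224x224 is an assumption;
# use whatever input size the model was trained on.
config = model.get_config()
config['layers'][0]['config']['batch_input_shape'] = (None, 224, 224, 3)
fixed_model = Model.from_config(config, custom_objects=co)
fixed_model.set_weights(model.get_weights())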

@wilfredkisku

What I end up with is an Unsupported layer type error.

Interpreting Model
Topology:
Layer name: input_1, layer type: Input

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Input In [5], in <cell line: 21>()
     15 _add_supported_quantized_objects(co)
     17 model = load_model('mobilenetv2.h5', custom_objects=co)
---> 21 hls_config = hls4ml.utils.config_from_keras_model(model, granularity='name')
     23 hls_config['Model']['Precision'] = 'ap_fixed<16,6>'
     24 hls_config['Model']['ReuseFactor'] = 1

File ~/Vivado/hls4ml-tutorial/venv_hls4ml/lib/python3.8/site-packages/hls4ml/utils/config.py:136, in config_from_keras_model(model, granularity, default_precision, default_reuse_factor)
    134 for keras_layer in keras_layer_config:
    135     if keras_layer['class_name'] not in supported_layers:
--> 136         raise Exception('ERROR: Unsupported layer type: {}'.format(keras_layer['class_name']))
    137     if keras_layer['class_name'] in skip_layers:
    138         continue

Exception: ERROR: Unsupported layer type: ZeroPadding2D

Any headway? Also, I wanted to ask how you handle over-utilization of resources for models synthesized for the ZCU104 (xczu7ev-ffvc1156-2-e) that you have been using.

@jmduarte jmduarte self-assigned this May 11, 2022
@jmduarte
Member

Hi @lloo099, what version of hls4ml are you using? Which one of these is None?

layer['pad_top'] + input_shapes[0][1] + layer['pad_bottom']

I'd also warn that this model is quite large and may need to be compressed quite a bit to be usable. It may be better for debugging purposes if you could reproduce the error with a minimal example (say 1 or 2 layers).
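(Editor's note: a minimal reproduction along those lines might look like the sketch below, assuming the error is indeed triggered by the undefined spatial dimensions of the input:)

import tensorflow as tf
import hls4ml

# An input with undefined height/width, like the loaded MobileNetV2 model
inputs = tf.keras.Input(shape=(None, None, 3))
outputs = tf.keras.layers.ZeroPadding2D(padding=1)(inputs)
model = tf.keras.Model(inputs, outputs)

config = hls4ml.utils.config_from_keras_model(model, granularity='name')
# Expected to raise the same "int + NoneType" TypeError in the
# ZeroPadding2D handler
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, io_type='io_stream', backend='Vivado')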

Hi @wilfredkisku, what version of hls4ml are you using? I believe ZeroPadding2D has been supported since #480.
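(Editor's note: the installed version can be reported with the snippet below; if #480 landed after the v0.6.0 release, installing from the repository's main branch with pip install git+https://github.com/fastmachinelearning/hls4ml.git should pick up the ZeroPadding2D support.)

import hls4ml
print(hls4ml.__version__)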

@wilfredkisku

wilfredkisku commented May 11, 2022

Thank you @jmduarte for the clarification; I am using the current release (v0.6.0). I also wanted to ask about the upper bound on model size for a board such as the ZCU104.

Thanks for the help.

@lloo099
Author

lloo099 commented May 11, 2022

Here

Hi @jmduarte @wilfredkisku, thanks for your responses. I am using hls4ml 0.6.0. I tried to convert this MobileNetV2 model directly with hls4ml, but it was not successful. I think a fixed precision of 8 bits could work better.
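(Editor's note: a minimal sketch of what an 8-bit configuration could look like; the ap_fixed<8,3> integer/fraction split below is an assumption and would normally be tuned per layer, for example with hls4ml's profiling utilities:)

hls_config = hls4ml.utils.config_from_keras_model(model, granularity='name')
# 8-bit width with 3 integer bits: an assumed starting point, not a tuned value
hls_config['Model']['Precision'] = 'ap_fixed<8,3>'
for layer in hls_config['LayerName']:
    hls_config['LayerName'][layer]['Precision'] = 'ap_fixed<8,3>'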

@fastmachinelearning fastmachinelearning locked and limited conversation to collaborators Mar 16, 2023
@jmduarte jmduarte converted this issue into discussion #732 Mar 16, 2023
