
OpenVino2019 handeling #545

Merged
merged 1 commit into from
Jul 11, 2019
Conversation

benhoff
Contributor

@benhoff benhoff commented Jul 3, 2019

This pull request is for conversation only, it's not meant to be merged in its current state.

In OpenVino 2019R1 the tensorflow API changed to have two inputs instead of a single input.

MO 2018R5 generates model IRs with these inputs:

network.inputs = ['image_tensor']

MO 2019R1 IRs:

network.inputs = ['image_tensor', 'image_info']

This breaks the auto annotation app's way of handling inputs.

The attached code is a brute-force way of handling the change. More likely we'd want to allow the user to drop in some custom code, similar to how the interpretation script is currently handled, so they can make the appropriate changes for their model.
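To make the change concrete, here is a minimal sketch of a version-agnostic input builder. The helper name `build_input_feed` and its signature are illustrative only, not CVAT code; the blob names `image_tensor` and `image_info` come from the IRs described above:

```python
import numpy as np

def build_input_feed(network_inputs, image_np, scale=1.0):
    """Build the inference feed dict for either IR layout.

    network_inputs: iterable of input blob names from the loaded IR
    image_np: CHW image array
    Handles both the 2018R5 single-input and 2019R1 two-input IRs.
    """
    feed = {'image_tensor': image_np[np.newaxis, ...]}
    if 'image_info' in network_inputs:
        # 2019R1 Faster R-CNN IRs expect [height, width, scale]
        _, h, w = image_np.shape
        feed['image_info'] = np.array([[h, w, scale]], dtype=np.float32)
    return feed

# Example: a 2019R1-style IR with both inputs
image = np.zeros((3, 600, 600), dtype=np.float32)
feed = build_input_feed(['image_tensor', 'image_info'], image)
print(sorted(feed))  # ['image_info', 'image_tensor']
```

The same call with `['image_tensor']` alone returns only the image blob, so old models keep working unchanged.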

@benhoff benhoff mentioned this pull request Jul 3, 2019
@nmanovic
Contributor

nmanovic commented Jul 3, 2019

@azhavoro, could you please look at the change? We also need to clarify with the OpenVINO team how best to support different versions of the framework.

@nmanovic nmanovic requested a review from azhavoro July 3, 2019 15:17
@alalek

alalek commented Jul 3, 2019

related issue: openvinotoolkit/openvino#128

One more attempt to work around this:

    network = make_network('{}.xml'.format(MODEL_PATH), '{}.bin'.format(MODEL_PATH))
    network_inputs = network.inputs
    print('network.inputs = ' + str(list(network.inputs)))
    print('network.outputs = ' + str(list(network.outputs)))
    info_blob_name = 'image_info'
    input_blob_name = 'image_tensor'
    output_blob_name = next(iter(network.outputs))
    executable_network = plugin.load(network=network)

    del network

    try:
        assert input_blob_name in network_inputs
        #image_np = np.zeros([3, 600, 600])
        … load image …
        inputs = {input_blob_name: image_np[np.newaxis, ...]}
        if info_blob_name in network_inputs:
            # the 'image_info' blob expects [height, width, scale]
            info = np.zeros([1, 3])
            info[0, 0] = 600
            info[0, 1] = 600
            info[0, 2] = 1
            inputs[info_blob_name] = info
        prediction = executable_network.infer(inputs)[output_blob_name][0][0]
        print(prediction.shape)
    …

@benhoff
Contributor Author

benhoff commented Jul 8, 2019

Added a dedicated preprocessing.py file to the auto annotation upload. This pushes the complexity onto the user, who must provide the correct input handling for their models.
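A rough sketch of what such a user-supplied file could look like. The `preprocess` name and its signature are assumptions for illustration, not the actual contract in the patch:

```python
# preprocessing.py -- uploaded alongside the model IR, called once per frame
import numpy as np

def preprocess(network_inputs, image_np):
    """Return the feed dict this particular model expects.

    network_inputs: input blob names of the loaded IR
    image_np: CHW image array
    """
    feed = {'image_tensor': image_np[np.newaxis, ...]}
    if 'image_info' in network_inputs:  # present only in 2019R1 IRs
        _, h, w = image_np.shape
        feed['image_info'] = np.array([[h, w, 1.0]], dtype=np.float32)
    return feed
```

The app would import this module and call `preprocess` instead of building the feed itself, the same way the interpretation script is plugged in today.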

@benhoff benhoff changed the title [WIP] Notional OpenVino2019 handeling OpenVino2019 handeling Jul 8, 2019
@nmanovic
Contributor

nmanovic commented Jul 10, 2019

@benhoff, thanks for your contributions. Your help is important to the project, and I have to say you have sent us many great enhancements.

We will review it internally with my team, but in general I don't like the idea of adding a preprocessing file just to avoid problems with OpenVINO versions. There are several reasons in my mind why it could be difficult to support in the future:

  • For different versions of OpenVINO the file will be different. Thus users will have to write more scripts to run pre-annotation. It is already quite a complex process, and the patch doesn't make it simpler.
  • The same scripts will not work on different instances of CVAT. Thus it will not be possible to simply share a model together with such scripts.

Personally, I will vote for a solution that reduces complexity for end users. A couple of simple (but not ideal) options:

  • Rely only on the latest OpenVINO version. Make sure old scripts work fine with new versions.
  • Pin the OpenVINO version and ask users to download only that version, or add it into the CVAT container.
  • Something else.

@benhoff
Contributor Author

benhoff commented Jul 10, 2019

@nmanovic, understood.

I'll maintain this patch for my team until a more permanent solution is found. Please let me know if your team internally comes to a decision on the way to handle this correctly. If the solution doesn't require too much time, I'll drop a PR against it.

Do you want me to close this PR?

@azhavoro
Contributor

azhavoro commented Jul 11, 2019

@benhoff thanks for your contribution! We've discussed the issue internally, and from our point of view it's better to handle it inside modelLoader in the way @alalek mentioned. This approach changes nothing for users, and old models should work as before. What do you think?
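A rough sketch of how the loader-side fix could look. The class shape and names are illustrative, not CVAT's actual ModelLoader; the inference call is injected so the idea is shown without the OpenVINO runtime:

```python
import numpy as np

class ModelLoader:
    """Wraps a loaded IR and hides the optional 'image_info' input."""

    def __init__(self, input_names, infer_fn):
        # input_names: input blob names read from the IR at load time
        # infer_fn: callable taking the feed dict (e.g. exec_net.infer)
        self._input_names = list(input_names)
        self._infer = infer_fn

    def infer(self, image_np):
        # image_np: CHW image array
        feed = {'image_tensor': image_np[np.newaxis, ...]}
        if 'image_info' in self._input_names:
            # fill the extra 2019R1 input automatically
            _, h, w = image_np.shape
            feed['image_info'] = np.array([[h, w, 1.0]], dtype=np.float32)
        return self._infer(feed)
```

Callers keep passing only an image, so scripts written against 2018R5 single-input models need no changes.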


@benhoff
Contributor Author

benhoff commented Jul 11, 2019

@azhavoro, re-pushed, take a look?

Contributor

@nmanovic nmanovic left a comment


👍

@nmanovic nmanovic merged commit 3ae8a72 into cvat-ai:develop Jul 11, 2019
MultifokalHirn added a commit to signatrix/cvat that referenced this pull request Jul 22, 2019
* develop: (112 commits)
  fixed attribute processing in auto_annotation (cvat-ai#577)
  CVAT.js API Tests (cvat-ai#578)
  Fixed exception in attribute annotation mode (cvat-ai#571)
  CVAT.js API methods were implemented (cvat-ai#572)
  Dashboard components basic styles (cvat-ai#574)
  Handle invalid json labelmap file case correctly during create/update DL model stage. (cvat-ai#573)
  Upgrade Numpy to avoid Arbitrary Code Execution. Upgrade Django to avoid MitM (cvat-ai#575)
  Run functional tests for REST API during a build (cvat-ai#506)
  CVAT.js other implemented API methods and bug fixes (cvat-ai#569)
  CVAT.js implemented API methods and bug fixes (cvat-ai#564)
  added in handeling for openvino 2019 (cvat-ai#545)
  added in command line auto annotation runner (cvat-ai#563)
  Fixed PDF extractor syntax error (cvat-ai#565)
  Update README.md
  added in pdf extractor (cvat-ai#557)
  Basic dashboard components (cvat-ai#562)
  Saving of annotations on the server (cvat-ai#561)
  Code was devided by files (cvat-ai#558)
  CVAT.js: Save and delete for shapes/tracks/tags (cvat-ai#555)
  Fixed '=' to '==' for numpy in requirments (cvat-ai#556)
  ...

# Conflicts:
#	.gitignore
@Warday

Warday commented Dec 7, 2020

Hi, I have this problem even with that modification. How can I feed the network? Example code using OpenVINO 2021. I need 2019 or newer because Faster R-CNN Inception V2 is compatible with the Neural Compute Stick 2 there.

n, c, h, w = [1, 3, 1024, 1024]
images = np.ndarray(shape=(n, c, h, w))
images_hw = []
image = cv2.imread(IMAGE_DIR)
ih, iw = image.shape[:-1]
images_hw.append((ih, iw))
if image.shape[:-1] != (h, w):
    image = cv2.resize(image, (w, h))
image = image.transpose((2, 0, 1))  # HWC -> CHW
images[0] = image
print(images.shape)
result = exec_net.infer(inputs={input_blob: images})

ERROR could not broadcast input array from shape (3,600,600) into shape (1,3)
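That broadcast error usually means the image batch is being assigned to the (1, 3) 'image_info' blob, e.g. because `input_blob` was taken as `next(iter(net.inputs))` and resolved to the wrong name. Feeding both blobs explicitly by name avoids it. A minimal sketch with numpy only; the helper name is hypothetical and `exec_net` stands in for the loaded network:

```python
import numpy as np

def feed_two_input_ir(images):
    """Build the feed dict for a two-input Faster R-CNN IR.

    images: NCHW batch already resized and transposed for the network
    """
    n, c, h, w = images.shape
    # 'image_info' rows are [input height, input width, scale]
    info = np.tile(np.array([h, w, 1.0], dtype=np.float32), (n, 1))
    return {'image_tensor': images, 'image_info': info}

images = np.zeros((1, 3, 1024, 1024), dtype=np.float32)
feed = feed_two_input_ir(images)
# result = exec_net.infer(inputs=feed)
```

With both names spelled out, the (1, 3, 1024, 1024) batch can never land in the (1, 3) info blob.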
