
fix #18388 by manually del runtime_model in python code #22874

Closed
FredBill1 wants to merge 1 commit

Conversation


@FredBill1 (Contributor) commented Feb 15, 2024

Details:

The detailed cause of the issue is described in #18388 (comment).

This PR is a simple alternative to #22872.

Tickets:

 - Closes #18388

@@ -27,6 +29,9 @@ def test_get_runtime_model(device):
     compiled_model = generate_relu_compiled_model(device)
     runtime_model = compiled_model.get_runtime_model()
     assert isinstance(runtime_model, Model)
+    # make sure runtime_model is destroyed before the dynamic library is unloaded, as described in issue #18388
+    del runtime_model
+    gc.collect()
Contributor
We have the following code:

std::shared_ptr<ov::Model> FrontEnd::create_copy(const std::shared_ptr<ov::Model>& ov_model,
                                                 const std::shared_ptr<void>& shared_object) {
    // Recreate ov::Model using main runtime, not FrontEnd's one
    auto copy = std::make_shared<Model>(ov_model->get_results(),
                                        ov_model->get_sinks(),
                                        ov_model->get_parameters(),
                                        ov_model->get_variables(),
                                        ov_model->get_friendly_name());
    copy->m_shared_object = shared_object;
    copy->get_rt_info() = ov_model->get_rt_info();
    return copy;
}

which adds a reference to the frontend library to the ov::Model when it is read by a frontend.

In the current case, runtime_model is an ov::Model created by a plugin. Do you think we can fix the issue similarly to the frontends case somewhere here?

std::shared_ptr<const Model> CompiledModel::get_runtime_model() const {
    OV_COMPILED_MODEL_CALL_STATEMENT(return _impl->get_runtime_model());
}

For this, we would need to add CompiledModel as a friend, similarly to:
namespace frontend {
class FrontEnd;
}
class ModelAccessor;
/**
 * @brief A user-defined model
 * @ingroup ov_model_cpp_api
 */
class OPENVINO_API Model : public std::enable_shared_from_this<Model> {
    friend class frontend::FrontEnd;
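
Putting the two snippets together, here is a minimal sketch of what the suggested fix could look like. This is only an illustration of the idea above, not the change that was eventually merged in #22900; the helper name clone_with_shared_object is hypothetical, and it assumes ov::Model grants friendship (to CompiledModel or to such a helper) so that m_shared_object can be assigned:

```cpp
#include <memory>

#include "openvino/core/model.hpp"

// Illustrative sketch only: rebuild the runtime model with the core runtime and
// pin the plugin's shared object, mirroring FrontEnd::create_copy above.
// Assumes access to ov::Model::m_shared_object via a friend declaration.
static std::shared_ptr<const ov::Model> clone_with_shared_object(
    const std::shared_ptr<const ov::Model>& runtime_model,
    const std::shared_ptr<void>& shared_object) {
    // Recreate the ov::Model using the main runtime, not the plugin's one
    auto copy = std::make_shared<ov::Model>(runtime_model->get_results(),
                                            runtime_model->get_sinks(),
                                            runtime_model->get_parameters(),
                                            runtime_model->get_variables(),
                                            runtime_model->get_friendly_name());
    copy->get_rt_info() = runtime_model->get_rt_info();
    // Keep the plugin library loaded for as long as the returned model is alive
    copy->m_shared_object = shared_object;
    return copy;
}
```

ov::CompiledModel::get_runtime_model() could then pass whichever std::shared_ptr<void> library handle the wrapper already stores to such a helper before returning the model to the user.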

Contributor Author

Hi, thanks for the review. I hadn't thought of solving this problem by recreating ov::Model in the main runtime. I've created a new PR #22900. Could you please take a look at it?

@ilya-lavrenov
Contributor

Closing in favor of #22900

github-merge-queue bot pushed a commit that referenced this pull request Feb 23, 2024
…the shared objects to the recreated object (#22900)

### Details:

The detailed cause of the issue is described in
#18388 (comment).

Details of this PR are in
#22874 (comment)
@ilya-lavrenov.

I have tested the changes against the following code and it worked as
expected:

```cpp
#include <iostream>
#include <memory>

#include "openvino/core/model.hpp"
#include "openvino/opsets/opset8.hpp"
#include "openvino/runtime/core.hpp"

std::shared_ptr<ov::Model> create_simple_model() {
    auto data = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::Shape{3, 1, 2});
    auto mul_constant = ov::opset8::Constant::create(ov::element::f32, ov::Shape{1}, {1.5});
    auto mul = std::make_shared<ov::opset8::Multiply>(data, mul_constant);
    auto res = std::make_shared<ov::opset8::Result>(mul);
    return std::make_shared<ov::Model>(ov::ResultVector{res}, ov::ParameterVector{data});
}

int main(int argc, char* argv[]) {
    const auto devices = {"CPU", "GPU", "AUTO", "MULTI:GPU,CPU", "BATCH:GPU", "HETERO:CPU,GPU"};
    for (auto&& device : devices) {
        std::cout << "Device: " << device << std::endl;
        auto model = create_simple_model();
        auto core = std::make_shared<ov::Core>();
        auto compiled_model = std::make_shared<ov::CompiledModel>(core->compile_model(model, device));
        auto runtime_model = compiled_model->get_runtime_model();
        std::cout << compiled_model << ' ' << runtime_model << std::endl;
        core.reset();
        compiled_model.reset();  // FreeLibrary happens here
        runtime_model.reset();   // segmentation fault happens here
        std::cout << compiled_model << ' ' << runtime_model << std::endl;
    }
}
```

```
Device: CPU
000002327F6F63E0 000002327EF0A950
0000000000000000 0000000000000000
Device: GPU
0000023213B85510 000002327F6F39B0
0000000000000000 0000000000000000
Device: AUTO
000002327F234920 000002327F14B720
0000000000000000 0000000000000000
Device: MULTI:GPU,CPU
0000023212DFDEF0 000002327FB153B0
0000000000000000 0000000000000000
Device: BATCH:GPU
75 warnings generated.
0000023212DE6C60 000002327F14D340
0000000000000000 0000000000000000
Device: HETERO:CPU,GPU
0000023212DAFC80 000002327F14BA40
0000000000000000 0000000000000000
```

### Tickets:

 - Closes #18388

---------

Co-authored-by: Roman Kazantsev <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Labels: category: Python API (OpenVINO Python bindings), ExternalPR (External contributor)
Successfully merging this pull request may close these issues: Segmentation fault when running test_get_runtime_model test (#18388)