
[CPU] Add interface to release compiled model internal memory #26262

Merged 42 commits into openvinotoolkit:master on Sep 5, 2024

Conversation

@maxnick (Contributor) commented Aug 27, 2024

Details:

Port #26390 to master

Tickets:

@github-actions github-actions bot added category: inference OpenVINO Runtime library - Inference category: CPU OpenVINO CPU plugin category: CPP API OpenVINO CPP API bindings labels Aug 27, 2024
@maxnick maxnick changed the title [CPU] Add releasing intermediate tensors via OV interface call [DO NOT REVIEW][CPU] Add releasing intermediate tensors via OV interface call Aug 27, 2024
@maxnick maxnick marked this pull request as ready for review August 27, 2024 17:13
@maxnick maxnick requested review from a team as code owners August 27, 2024 17:13
* @brief Release intermediate memory
*
*/
virtual void release_buffers();
@maxnick (Contributor, Author) commented:
Think about a better name. This one doesn't really clarify which buffers are implied.

A reviewer (Contributor) commented:

What if several requests are still running when I call this method?

@maxnick (Contributor, Author) commented Aug 29, 2024:

That is a really good question. This is merely a POC to evaluate the memory footprint in a specific application, so the exact interface-level solution is subject to an architecture review.

GraphGuard::Lock graph_lock{graph};
auto ctx = graph_lock._graph.getGraphContext();
ctx->getNetworkMemoryControl()->releaseMemory();
}
@maxnick (Contributor, Author) commented:
Should we add releasing the oneDNN caches here?

src/plugins/intel_cpu/src/config.h (outdated, resolved)
dnnl::engine eng;

public:
DnnlScratchPad(const dnnl::engine& eng, int numa_node = -1) : eng(eng) {
-        mgrPtr = std::make_shared<DnnlMemoryMngr>(make_unique<MemoryMngrWithReuse>(numa_node));
+        blockPtr = std::make_shared<DnnlMemoryBlock>(make_unique<MemoryBlockWithReuse>(numa_node));
@maxnick (Contributor, Author) commented:
Should be created by the memory control subsystem so it can also be released on demand. But it may require shotgun surgery to cover all the existing cases of direct scratchpad usage (where the implementation bypasses the getScratchPadMemory call).

void insertReorder(EdgePtr& edge, bool isOptimized, std::unordered_set<std::string>& uniqueLayerNames);
void insertConvert(EdgePtr& edge);
int GetNumaNodeId() const;
MemoryControl* m_pMemoryControl = nullptr;
@maxnick (Contributor, Author) commented:

It would make sense to change the pointer to a reference, but to this end a few MemoryControl methods must be modified as well.

src/plugins/intel_cpu/src/memory_control.cpp (outdated, resolved)

private:
MemoryControl::MemoryBlockMap m_blocks;
std::vector<MemorySolver::Box> m_boxes;
@maxnick (Contributor, Author) commented:

Maybe it makes sense to use a FIFO buffer?

src/plugins/intel_cpu/src/memory_control.cpp (outdated, resolved)
Comment on lines +549 to +556
// TODO: conceptually fetchRawMemory is a very bad solution
if (mem->getDesc().getPrecision() == element::string) {
return;
}
auto block = mem->getMemoryBlock();
if (mem->isDefined()) {
block->resize(mem->getSize());
}
@maxnick (Contributor, Author) commented:

Possible solutions:

  1. Add allocate() to the memory interface. The main drawback is that memory with such an interface doesn't have to be allocated in the constructor, which brings additional complexity: an allocated() check and a subsequent mandatory allocate() call, which is obviously extra work for the user.
  2. Lazy evaluation inside getData(). There are two problems: first, the getData call wouldn't be const anymore and thereby wouldn't be thread safe. Second, some implementations still use the dnnl memory object instead of the plugin entities, so they wouldn't call getData and wouldn't trigger memory reallocation.

Probably we need another solution.

src/plugins/intel_cpu/src/utils/general_utils.h (outdated, resolved)
@maxnick maxnick requested review from a team as code owners September 4, 2024 16:40
@github-actions github-actions bot added the category: IE Tests OpenVINO Test: plugins and common label Sep 4, 2024
@maxnick maxnick changed the title [DO NOT REVIEW][CPU] Add releasing intermediate tensors via OV interface call [CPU] Add interface to release compiled model internal memory Sep 4, 2024
@maxnick maxnick removed the WIP work in progress label Sep 4, 2024
@maxnick maxnick added this to the 2024.4 milestone Sep 4, 2024
@dmitry-gorokhov dmitry-gorokhov added this pull request to the merge queue Sep 5, 2024
Merged via the queue into openvinotoolkit:master with commit ff5a463 Sep 5, 2024
150 checks passed
Labels
- category: CPP API (OpenVINO CPP API bindings)
- category: CPU (OpenVINO CPU plugin)
- category: IE Tests (OpenVINO Test: plugins and common)
- category: inference (OpenVINO Runtime library - Inference)
3 participants