Problems caused by GetInputNameAllocated after upgrading from 1.12 to 1.13 #14157
Comments
In this code, `temp_input_name` is a temporary that is destroyed at the end of each loop iteration:

```cpp
for (int i = 0; i < inputNodesNum; i++) {
    auto temp_input_name = session->GetInputNameAllocated(i, allocator);
    inputNodeNames.push_back(temp_input_name.get());  // pointer dangles once temp_input_name is destroyed
}
```

`GetInputNameAllocated` returns an `Ort::AllocatedStringPtr` (a smart pointer that frees the name string when it goes out of scope), so `inputNodeNames` ends up holding dangling pointers. You could store the result of `GetInputNameAllocated` somewhere that outlives every use of the name, as shown further down in this thread.
Yes. The code is also leaking the floating-point input buffers, since `CreateTensor` does not take ownership of the input buffers.
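For reference, since `CreateTensor` only wraps a caller-supplied buffer, the caller has to own that memory for at least the lifetime of the tensor. A minimal sketch, assuming the model's {1, 3, 640, 640} input shape:

```cpp
#include <onnxruntime_cxx_api.h>
#include <vector>

// The tensor below only borrows input_data; ONNX Runtime does not free it.
// Letting a std::vector own the buffer avoids both the leak (its destructor
// frees the memory) and a dangling tensor (keep it alive until Run() returns).
std::vector<int64_t> shape{ 1, 3, 640, 640 };
std::vector<float> input_data(1 * 3 * 640 * 640);
auto memory_info = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU);
Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
    memory_info, input_data.data(), input_data.size(),
    shape.data(), shape.size());
```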
I've tried:

```cpp
std::string in_name = "images";
std::string out_name0 = "output0";
std::string out_name1 = "output1";
inputNodeNames.push_back(in_name.c_str());
outputNodeNames.push_back(out_name0.c_str());
outputNodeNames.push_back(out_name1.c_str());
```

I can get the names correctly this way for now, but once the model changes, the names may change too, which will cause other problems.
Since the amount of data I need to run inference on is large, I don't want to fetch the input and output names again before every run. Is there any way to achieve that?
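One way to avoid re-querying (a sketch, not from this thread; the helper name `cacheInputNames` is made up): copy the names into owned `std::string` storage once after creating the session, and build the `const char*` views from that storage. The pointers then stay valid for the lifetime of the storage, so nothing needs to be fetched per run:

```cpp
#include <onnxruntime_cxx_api.h>
#include <string>
#include <vector>

// Owns the name characters (storage) and exposes the views Run() expects (ptrs).
struct NodeNames {
    std::vector<std::string> storage;
    std::vector<const char*> ptrs;
};

// Hypothetical helper: query the input names once and keep owned copies.
NodeNames cacheInputNames(Ort::Session& session, Ort::AllocatorWithDefaultOptions& allocator) {
    NodeNames names;
    for (size_t i = 0; i < session.GetInputCount(); ++i) {
        // Copy the name out of the AllocatedStringPtr before it is destroyed.
        names.storage.emplace_back(session.GetInputNameAllocated(i, allocator).get());
    }
    // Build the views only after storage has stopped growing, so the
    // c_str() pointers are not invalidated by vector reallocation.
    for (const auto& s : names.storage) {
        names.ptrs.push_back(s.c_str());
    }
    return names;
}
```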
Because this is simulated data, I forgot to delete the temp pointer in each loop. Is it OK to modify it like this? ...
```cpp
auto memoryInfo = Ort::MemoryInfo::CreateCpu(OrtAllocatorType::OrtDeviceAllocator, OrtMemType::OrtMemTypeCPUOutput);
size_t input_tensor_length = 640 * 640 * 3;
cv::Size input_size(640, 640);
std::vector<int64_t> inputTensorShape{ 1, 3, 640, 640 };  // defined elsewhere in the original; assumed NCHW for a 640x640 RGB input
for (int i = 0; i < 10; i++) {
    cv::Mat img = cv::Mat::zeros(input_size, CV_8UC3);  // simulate reading a new picture from disk: img = cv::imread("new_img_path");
    cv::Mat blob;
    cv::dnn::blobFromImage(img, blob, 1 / 255.0, input_size, cv::Scalar(0, 0, 0), true, false);
    std::vector<Ort::Value> input_tensors;
    std::vector<Ort::Value> output_tensors;
    std::cout << "################### before run: ##############" << std::endl;
    std::cout << "input node name:" << inputNodeNames[0] << std::endl;
    std::cout << "output0 node name:" << outputNodeNames[0] << std::endl;
    std::cout << "output1 node name:" << outputNodeNames[1] << std::endl;
    input_tensors.push_back(Ort::Value::CreateTensor<float>(
        memoryInfo, (float*)blob.data, input_tensor_length, inputTensorShape.data(),
        inputTensorShape.size()));
    output_tensors = session->Run(Ort::RunOptions{ nullptr },
                                  inputNodeNames.data(),
                                  input_tensors.data(),
                                  inputNodeNames.size(),
                                  outputNodeNames.data(),
                                  outputNodeNames.size());
    std::cout << "################### after run: ##############" << std::endl;
    std::cout << "input node name:" << inputNodeNames[0] << std::endl;
    std::cout << "output0 node name:" << outputNodeNames[0] << std::endl;
    std::cout << "output1 node name:" << outputNodeNames[1] << std::endl;
}
```
To clarify, in this approach there is an additional vector:

```cpp
std::vector<const char*> inputNodeNames;
std::vector<Ort::AllocatedStringPtr> inputNodeNameAllocatedStrings;  // <-- newly added
...
auto inputNodesNum = session->GetInputCount();
for (int i = 0; i < inputNodesNum; i++) {
    auto input_name = session->GetInputNameAllocated(i, allocator);
    inputNodeNameAllocatedStrings.push_back(std::move(input_name));
    inputNodeNames.push_back(inputNodeNameAllocatedStrings.back().get());
}
```

So the memory pointed to by the elements of `inputNodeNames` remains valid for as long as `inputNodeNameAllocatedStrings` is alive.
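Presumably the same pattern applies to the outputs; a sketch along the same lines:

```cpp
std::vector<const char*> outputNodeNames;
std::vector<Ort::AllocatedStringPtr> outputNodeNameAllocatedStrings;

auto outputNodesNum = session->GetOutputCount();
for (size_t i = 0; i < outputNodesNum; i++) {
    // Keep the smart pointer alive in the vector so the char* view stays valid.
    outputNodeNameAllocatedStrings.push_back(session->GetOutputNameAllocated(i, allocator));
    outputNodeNames.push_back(outputNodeNameAllocatedStrings.back().get());
}
```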
You can do this once, after loading the model and before the call(s) to `Run()`.
@edgchen1 Thanks for your help, it works.
I tried your method and it still reports the same error.
@chengdashia
I'm sorry to bother you again, but I tried the code you provided and I'm still getting the same error. Which version of onnxruntime are you using?
@chengdashia input_names and output_names should not be declared as temporary variables inside the for loop, which is what you are doing.
Yes. As you said, I now keep them as member variables.
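For anyone landing here, a sketch of that member-variable arrangement (class and member names are illustrative, not from this thread):

```cpp
#include <onnxruntime_cxx_api.h>
#include <vector>

class Detector {
public:
    explicit Detector(const ORTCHAR_T* model_path)
        : env_(ORT_LOGGING_LEVEL_WARNING, "detector"),
          session_(env_, model_path, Ort::SessionOptions{}) {
        Ort::AllocatorWithDefaultOptions allocator;
        for (size_t i = 0; i < session_.GetInputCount(); ++i) {
            input_name_ptrs_.push_back(session_.GetInputNameAllocated(i, allocator));
            input_names_.push_back(input_name_ptrs_.back().get());
        }
        for (size_t i = 0; i < session_.GetOutputCount(); ++i) {
            output_name_ptrs_.push_back(session_.GetOutputNameAllocated(i, allocator));
            output_names_.push_back(output_name_ptrs_.back().get());
        }
    }

private:
    Ort::Env env_;
    Ort::Session session_;
    // Members, so the allocated strings outlive every Run() call.
    std::vector<Ort::AllocatedStringPtr> input_name_ptrs_, output_name_ptrs_;
    std::vector<const char*> input_names_, output_names_;
};
```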
Yeah, I know about this. There are other YOLOv5 repositories available, so I did not release an ONNX Runtime version of YOLOv5 and instead released yolov5-seg.
Sorry, I didn't quite understand what you meant. Do you mean I should look at your repository to find the answer? I trained a defect-detection model for PCBs using YOLOv5.
You misunderstood. I mean that I did not release a YOLOv5 version for onnxruntime because many people have already released one.
I originally trained with yolov5-7.0, using the PCB defect dataset published by Peking University. Then I started deploying with OpenCV in C++, but the frame rate was very low, which is when I turned to onnxruntime. I have been using the code from the hpc203 repository; his C++ OpenCV deployment works fine, and so does his Python onnxruntime deployment. However, when using his C++ onnxruntime deployment, this problem arose. I suspected it was a version problem, so I tried 1.7, 1.13.1, 1.14, 1.15, 1.16, and 1.16.1, but it never worked out. Finally I found this thread. I hope you can help me. Thank you very much.
@chengdashia |
Describe the issue
When upgrading from 1.12.x to 1.13.x, `GetInputName` and `GetOutputName` need to be replaced with `GetInputNameAllocated` and `GetOutputNameAllocated`, and I encountered a very strange bug here. ONNX model exported from yolov5-seg.pt:
https://drive.google.com/file/d/1tV2moQxNfLzNf6yXm5Zev5CVj2o9zuaz/view?usp=share_link
Run the following code: everything is OK for TestONNX(), but when running TestONNX2(), the input and output node names become strange after `session->Run()`. The two functions differ only in how they obtain the node names: one uses a for loop, and the other does not.
An error is reported even if nothing outside the brackets around the node-name code is modified:
So, if I have multiple inputs and outputs and cannot use a for loop, how can I solve this problem?
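For context, the 1.12 to 1.13 API change looks roughly like this (a sketch from memory of the C++ headers; verify against your release):

```cpp
// ONNX Runtime 1.12.x (old API, removed in 1.13):
//   char* name = session.GetInputName(0, allocator);
//   The caller freed the string via the allocator, but it stayed valid until then.

// ONNX Runtime 1.13.x and later:
Ort::AllocatedStringPtr name_ptr = session.GetInputNameAllocated(0, allocator);
const char* name = name_ptr.get();  // freed as soon as name_ptr goes out of scope
```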
To reproduce
Urgency
No response
Platform
Windows
OS Version
WIN10 22H2
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.13.1 from https://github.com/microsoft/onnxruntime/releases/download/v1.13.1/onnxruntime-win-x64-gpu-1.13.1.zip
ONNX Runtime API
C++
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 11.4