Enable loading parameters & converting the fit_a_line model #9
Conversation
Run
The output yields:
Mostly looks good - pretty good stuff. Just a few questions here and there.
# Paddle op name : (ONNX op name, modifier)
'abs': ('Abs', abs_op),
'elementwise_add': ('Add', add_op),
'elementwise_add': add_op,
So the reason I originally did this weird mapping to tuples is that we may want to reuse such a map in the future for mapping the reverse way, so it would be easier to traverse. If you think we will probably need to create an entirely separate map anyway, this makes much more obvious sense. What do you think?
I thought we could implement the bidirectional conversion in one map, but after writing some code, I feel it would be better to decouple the two conversions very clearly, because it is a bit hard to make the two sets of operators correspond one-to-one. For example, ONNX has an FC operator but Fluid doesn't, so in the Fluid -> ONNX conversion we would never use the FC op, while in the reverse conversion we need to implement the FC op with the mul and elementwise_add ops in Fluid.
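To make the decoupling concrete, here is a minimal sketch of what two separate maps could look like; the converter function names and the ONNX_TO_PADDLE map are illustrative assumptions, not code from this PR:

    # Illustrative stub converters; real ones would build the target ops.
    def abs_op(node): pass            # Fluid abs -> ONNX Abs
    def add_op(node): pass            # Fluid elementwise_add -> ONNX Add
    def abs_to_fluid_op(node): pass   # ONNX Abs -> Fluid abs
    def fc_to_mul_and_add(node):
        # ONNX FC has no single Fluid counterpart, so it would be
        # lowered to a mul followed by an elementwise_add.
        pass

    # Fluid -> ONNX: Paddle op name -> (ONNX op name, modifier).
    PADDLE_TO_ONNX = {
        'abs': ('Abs', abs_op),
        'elementwise_add': ('Add', add_op),
    }

    # ONNX -> Fluid, kept as a separate map because the two op sets
    # do not correspond one-to-one.
    ONNX_TO_PADDLE = {
        'Abs': abs_to_fluid_op,
        'FC': fc_to_mul_and_add,
    }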
Completely fair, let's go with this. I thought of the same, but yeah, I agree with your conclusion.
for var_name in global_block.vars:
    var = global_block.var(var_name)
    if var_name not in ['feed', 'fetch'] and var.persistable:
        param = fluid.executor.fetch_var(var_name, inference_scope)
Oh man, this is great. I spent so much time writing a custom (py)binding to deserialize params manually; I'll throw that stuff away for now. fetch_var is beautiful. You are the expert.
No. Actually I am also not familiar with this part, and just found this method by chance :-)
So let me explain why I have been trying to find overly complex solutions to this. If you see this, it clearly seems like scopes get destroyed after their runs - which is fair, we shouldn't keep these thousands, sometimes millions, of variables in memory if they are unused. The only things saved from destruction are persistable global-block variables (which you seem to be using here). So vars not in the global block can't be fetched, because everything other than global-block vars is destroyed.
Now on second thought, we can go through the global block only, because we don't care about inner blocks given our use case. But we should be careful, because fetch_var would not have been an option if we had cared about them.
It's also important to be careful because the suggested way to fetch anything out of a program run is to add it to the fetch_list argument - not fetch_var. But to be able to use that, we can't use the default load_inference_model function - we'd need to tweak it to pass in and return the fetch_list we are interested in.
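For context, here is a minimal runnable sketch of the fetch_var approach being discussed, assuming the standard Fluid loading APIs; the model directory path is a placeholder:

    import numpy as np
    import paddle.fluid as fluid

    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    inference_scope = fluid.core.Scope()

    with fluid.scope_guard(inference_scope):
        # load_inference_model deserializes the program and loads its
        # persistable parameters into inference_scope.
        [program, feed_names, fetch_targets] = fluid.io.load_inference_model(
            './fit_a_line.inference.model', exe)  # placeholder path

        params = {}
        global_block = program.global_block()
        for var_name in global_block.vars:
            var = global_block.var(var_name)
            if var_name not in ['feed', 'fetch'] and var.persistable:
                # Only persistable global-block vars survive; anything
                # living in an inner scope has already been destroyed.
                params[var_name] = np.array(
                    fluid.executor.fetch_var(var_name, inference_scope))

The fetch_list alternative would instead mean threading extra fetch targets through load_inference_model and the subsequent exe.run call.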
Oh, you have done a much deeper survey than me. It is not easy to realize that fetch_var exists, because common users usually don't need to use it.
    ops.PADDLE_TO_ONNX[op.type][0], op.input_arg_names,
    op.output_arg_names)
if op.type in ops.node_maker:
    # TODO(kuke): deal with the corner case that vars in
Aren't unique names generated, no matter how local the scope is? You probably know more here...
Yes, it seems that duplicated names are allowed for local variables. I am discussing with others working on framework development to figure out a proper solution. So if you have any ideas, please feel free to speak up.
No ideas here; you can leave the comment as is, but I don't think you should worry about a conflict here.
Yes, I believe that we can figure out a way to solve these conflicts. And we should always be careful about the potential risk until it is solved completely.
var = global_block.var(var_name)
if var_name not in ['feed', 'fetch'] and var.persistable:
    param = fluid.executor.fetch_var(var_name, inference_scope)
    param_node = helper.make_node(
So where did you figure out that this is the way params need to be set in an ONNX model? I have been confused and asking around.
I also didn't find a way to load parameters until I checked the ONNX models in onnx/models, and found that they use the Constant op to store parameters.
Oh interesting. I asked this question on the ONNX gitter chat and searched a lot through their issues. So (a) your solution is good based on the info you have, and (b) it might be a good idea to adopt the practice they are recommending, which is to use the initializers argument on the make_model function. More here: https://github.com/onnx/onnxmltools/tree/master/onnxmltools/convert/common and https://github.com/onnx/onnxmltools/blob/master/onnxmltools/convert/coreml/NeuralNetwork/fullyconnected.py#L33. So basically, collect the values while traversing the variables, and then pass them in when calling make_model.
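For reference, a minimal sketch of both approaches using onnx's Python helpers; all names and shapes here are illustrative, and note that as far as I can tell the helper exposes the initializer list on make_graph rather than make_model:

    import numpy as np
    from onnx import helper, TensorProto

    # Dummy parameter value standing in for a fetched Fluid weight.
    weight = np.ones((2, 3), dtype='float32')
    weight_tensor = helper.make_tensor(
        name='fc_w', data_type=TensorProto.FLOAT,
        dims=weight.shape, vals=weight.flatten().tolist())

    # Approach in this PR: embed the parameter as a Constant node.
    param_node = helper.make_node(
        'Constant', inputs=[], outputs=['fc_w'], value=weight_tensor)

    # Recommended practice: register the parameter as a graph initializer.
    x = helper.make_tensor_value_info('x', TensorProto.FLOAT, [1, 2])
    w = helper.make_tensor_value_info('fc_w', TensorProto.FLOAT, [2, 3])
    y = helper.make_tensor_value_info('y', TensorProto.FLOAT, [1, 3])
    matmul = helper.make_node('MatMul', inputs=['x', 'fc_w'], outputs=['y'])
    graph = helper.make_graph(
        [matmul], 'params_demo', inputs=[x, w], outputs=[y],
        initializer=[weight_tensor])  # initializer values also listed as inputs
    model = helper.make_model(graph)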
This is important information. I think we can keep the current implementation and move on. In the future we can try the initializers to see if they work.
I don't think so; parameter is a common concept, so we should make this clear.
    # raise NameError(op.type)
    pass
if op.type not in ['feed', 'fetch']:
    raise NotImplementedError("OP[%s] is not supported in "
Much smarter error than NameError :)
Yeah, maybe a better one. You reminded me to raise an error here ;-).
Could you also update the README for the new arguments? Oh and when you do, you could probably remove the
Do you know what the
Shall we update the README and the status after this pull request is merged?
Almost there! :)
Just update README and we are good to go! 👍
Resolve #4
Resolve #3
The correctness has been verified in #11.