fix einsum #174
Conversation
Force-pushed 355f25b to dbe6329
is_input_0_constant = isinstance(input_0, tf.Tensor)
is_input_1_constant = isinstance(input_1, tf.Tensor)
if is_input_0_constant and is_input_1_constant:
    layers[node_name] = tf.einsum(equation, *[input_0, input_1], name=keras_name)
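The branch under review constant-folds the einsum: when both inputs are already concrete tensors, the result can be computed eagerly instead of being added as a layer of the converted model. A minimal sketch of that fold, with `numpy.einsum` standing in for `tf.einsum` so it runs without TensorFlow installed (the equation string and shapes here are illustrative, not from the PR):

```python
import numpy as np

# Both inputs are compile-time constants, so the einsum is evaluated
# immediately rather than inserted as a node in the converted graph.
a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# "ij,jk->ik" is an ordinary matrix multiply expressed as an einsum equation.
folded = np.einsum("ij,jk->ik", a, b)  # identical result to a @ b
```

The folded value can then be stored directly in `layers[node_name]`, which is why naming the op matters less in this branch.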
- The name param in TFOP does not propagate to the actual model due to a bug in Keras.
- We now use a different name than keras_name for clarity and debugging; I suggest using "{params['cleaned_name']}_einsum" to be consistent.

We've created a wrapper that makes sure the name propagates (under tfops_funcs.py). I suggest wrapping tf.einsum with named_tfop (as tf_einsum), importing it, and using it as:

tf_einsum(whatever-you-need, tf_name="{params['cleaned_name']}_einsum")
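For readers without access to tfops_funcs.py, here is a hypothetical sketch of what a name-propagating decorator of this shape could look like. The real named_tfop in the repo presumably attaches tf_name to the produced TensorFlow op; everything below (the decorator body, the tf_name keyword handling) is an assumption, and np.einsum stands in for tf.einsum so the sketch runs without TensorFlow:

```python
import functools
import numpy as np

def named_tfop(op):
    """Hypothetical stand-in for the repo's named_tfop (tfops_funcs.py).
    The real wrapper works around the Keras naming bug by attaching
    tf_name to the resulting op; here we only demonstrate the calling
    convention by returning the name alongside the result."""
    @functools.wraps(op)
    def wrapper(*args, tf_name=None, **kwargs):
        result = op(*args, **kwargs)
        return result, tf_name  # the real wrapper would name the op instead
    return wrapper

# np.einsum stands in for tf.einsum so this runs without TensorFlow.
tf_einsum = named_tfop(np.einsum)
out, name = tf_einsum("ij,jk->ik", np.eye(2), np.eye(2),
                      tf_name="cleaned_name_einsum")
```

The point of the pattern is that every wrapped op takes a uniform tf_name keyword, so converter code can pass "{params['cleaned_name']}_einsum" consistently at each call site.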
This is not a problem: in this case the output is constant, so it's not part of the model.
onnx2kerastl/operation_layers.py
Outdated
equation = params['equation'].decode('utf-8')

is_input_0_constant = isinstance(input_0, tf.Tensor)
is_input_1_constant = isinstance(input_1, tf.Tensor)
@tip3x For both checks above, the input could also be a numpy array, depending on the specifics of the model. Does it make sense to test for that as well?

If you can, please also add a test to the CI so we won't have regressions on this model in future fixes.
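The reviewer's suggestion, that a "constant" input may arrive as either an eager tf.Tensor or a plain numpy array, could be sketched as a broadened isinstance check. This is an illustrative helper, not code from the PR; the TensorFlow import is kept optional so the snippet runs even where tf is not installed:

```python
import numpy as np

def is_constant(x):
    """Sketch of the broadened constancy check: a value baked into the
    graph may be an eager tf.Tensor *or* a plain numpy array, so test
    for both. (Illustrative helper, not the PR's actual code.)"""
    constant_types = [np.ndarray]
    try:
        import tensorflow as tf  # optional: absent tf, numpy alone is checked
        constant_types.append(tf.Tensor)
    except ImportError:
        pass
    return isinstance(x, tuple(constant_types))
```

With this, the two flags become `is_constant(input_0)` and `is_constant(input_1)`, covering both representations in one place.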
No description provided.